Hello @chenwydj,
(sobbing)
I'm stuck at step one...
My environment: Hardware: A100-40G, NVIDIA driver: 450.xxx (forgot the details), CUDA 11.0, Python 3.6.9, cuDNN 8.0.4, TensorRT 7.2.5.1
Before running "train_search.py", I checked all the requirements, and the TensorRT samples run well.
After running "train_search.py", the "train" phase is fine: I get a folder named like "search-pretrain-256x512_F12.L16_batch3-20220608-xxxxxx" with output files in it, and the terminal shows that all 20 epochs finished.
But when it comes to "validation", something goes wrong.
The terminal shows "use TensorRT for latency test",
and then: "RuntimeError: CUDA error: API call is not supported in the installed CUDA driver".
Is updating the driver the only way to solve this problem?
Hello, I also encountered this problem. When I iterated over my data in the dataloader ("for i, samples_batch in enumerate(data_loader):"), I got the same error: "RuntimeError: CUDA error: API call is not supported in the installed CUDA driver".
I think it may be caused by the use of multiple processes in Python: when I deleted the line "torch.multiprocessing.set_start_method('spawn')" and only moved the data to CUDA after loading it on the CPU, the error no longer appeared.
Hope this helps.
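A minimal sketch of that workaround (the `ToyDataset` name and shapes are hypothetical stand-ins for the real data): the dataset yields plain CPU tensors, so no `torch.multiprocessing.set_start_method('spawn')` call is needed, and the `.to(device)` move happens only in the main process:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Hypothetical dataset that returns CPU tensors only."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # Keep samples on the CPU here -- do NOT call .cuda() inside the
        # dataset, or worker processes will try to touch the GPU themselves.
        return torch.full((3,), float(idx))

# No torch.multiprocessing.set_start_method('spawn') is required, because
# workers never create CUDA tensors.
data_loader = DataLoader(ToyDataset(), batch_size=4, num_workers=0)

device = "cuda" if torch.cuda.is_available() else "cpu"
for i, samples_batch in enumerate(data_loader):
    # Move each batch to the GPU in the main process, after loading.
    samples_batch = samples_batch.to(device)
    print(i, tuple(samples_batch.shape))
```

This only sidesteps the multiprocessing/CUDA interaction; if the driver genuinely doesn't support the CUDA 11.0 API calls TensorRT makes, a driver update may still be unavoidable.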