
RuntimeError: CUDA out of memory #301

Open

Mgithus opened this issue Sep 1, 2023 · 2 comments
Comments


Mgithus commented Sep 1, 2023

(aug_29) test_user@ncaiirl-Z490-GAMING-X:~/Desktop/junken$ python /home/test_user/Desktop/junken/jul_25/swin_unetr_run_in_terminal_25th_jul.py
MONAI version: 1.2.0
Numpy version: 1.25.2
Pytorch version: 1.12.1+cu113
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: c33f1ba588ee00229a309000e888f9817b4f1934
MONAI file: /home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/monai/__init__.py

Optional dependencies:
Pytorch Ignite version: 0.4.11
ITK version: 5.3.0
Nibabel version: 5.1.0
scikit-image version: 0.21.0
Pillow version: 9.4.0
Tensorboard version: 2.14.0
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: 4.66.1
lmdb version: 1.4.1
psutil version: 5.9.5
pandas version: 2.0.3
einops version: 0.6.1
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: 1.0.0

For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

/tmp/tmpnvwh2hz9
monai.transforms.io.dictionary LoadImaged.__init__:image_only: Current default value of argument image_only=False has been deprecated since version 1.1. It will be changed to image_only=True in version 1.3.
Fri Sep 1 05:51:11 2023 Epoch: 0
Traceback (most recent call last):
  File "/home/test_user/Desktop/junken/jul_25/swin_unetr_run_in_terminal_25th_jul.py", line 379, in <module>
    ) = trainer(
  File "/home/test_user/Desktop/junken/jul_25/swin_unetr_run_in_terminal_25th_jul.py", line 254, in trainer
    train_loss = train_epoch(
  File "/home/test_user/Desktop/junken/jul_25/swin_unetr_run_in_terminal_25th_jul.py", line 176, in train_epoch
    loss.backward()
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/torch/_tensor.py", line 388, in backward
    return handle_torch_function(
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/torch/overrides.py", line 1498, in handle_torch_function
    result = torch_func_method(public_api, types, args, kwargs)
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/monai/data/meta_tensor.py", line 276, in __torch_function__
    ret = super().__torch_function__(func, types, args, kwargs)
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/torch/_tensor.py", line 1121, in __torch_function__
    ret = func(*args, **kwargs)
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/torch/_tensor.py", line 396, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/test_user/anaconda3/envs/aug_29/lib/python3.10/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 9.78 GiB total capacity; 7.20 GiB already allocated; 122.06 MiB free; 7.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I am trying to reproduce the following code from the official MONAI tutorials:
https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
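
For reference, the allocator hint mentioned at the end of the error message can be applied by setting PYTORCH_CUDA_ALLOC_CONF before the script starts. A minimal sketch (the 128 MiB split size is an assumption to experiment with, not a value from the tutorial):

# Sketch: apply the allocator option the OOM message suggests. It must be set
# before the first CUDA allocation, so it goes at the very top of the script.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 is a guess; tune as needed

import torch  # import torch only after the environment variable is set

Equivalently, it can be exported in the shell: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python swin_unetr_run_in_terminal_25th_jul.py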


Environment:
- OS
(aug_29) test_user@ncaiirl-Z490-GAMING-X:~/Desktop/junken$ /home/test_user/anaconda3/envs/aug_29/bin/python
Python 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform
>>> import psutil
>>> system_info = {
...     "Operating System": platform.system(),
...     "CPU Cores": psutil.cpu_count(logical=False),
...     "CPU Threads": psutil.cpu_count(logical=True),
...     "RAM (GB)": round(psutil.virtual_memory().total / (1024 ** 3), 2),
... }
>>> print("System Specifications:")
System Specifications:
>>> for key, value in system_info.items():
...     print(f"{key}: {value}")
...
Operating System: Linux
CPU Cores: 8
CPU Threads: 16
RAM (GB): 7.67

- Python version: 3.10.12 (see above)
- MONAI / CUDA / cuDNN versions:

>>> import monai
>>> print("MONAI Version:", monai.__version__)
MONAI Version: 1.2.0
>>> import torch
>>> print("CUDA Version:", torch.version.cuda)
CUDA Version: 11.3
>>> print("cuDNN Version:", torch.backends.cudnn.version())
cuDNN Version: 8302

- GPU models and configuration

>>> import torch
>>> print("GPU Models and Configuration:")
GPU Models and Configuration:
>>> for i in range(torch.cuda.device_count()):
...     print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
...
GPU 0: NVIDIA GeForce RTX 3080

>>> def get_gpu_info():
...     num_gpus = torch.cuda.device_count()
...     gpu_info = []
...     for i in range(num_gpus):
...         gpu_name = torch.cuda.get_device_name(i)
...         gpu_memory = round(torch.cuda.get_device_properties(i).total_memory / (1024 ** 3), 2)
...         gpu_info.append(f"GPU {i}: {gpu_name} (VRAM {gpu_memory} GB)")
...     return gpu_info
...
>>> print("GPU Information:")
GPU Information:
>>> for info in get_gpu_info():
...     print(info)
...
GPU 0: NVIDIA GeForce RTX 3080 (VRAM 9.78 GB)

nvidia-smi shows the following output after running the tutorial:
(base) test_user@ncaiirl-Z490-GAMING-X:~$ nvidia-smi
Fri Sep 1 06:32:31 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.199.02 Driver Version: 470.199.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| 0% 44C P8 32W / 370W | 9857MiB / 10014MiB | 3% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 44287 G /usr/lib/xorg/Xorg 35MiB |
| 0 N/A N/A 93228 G /usr/lib/xorg/Xorg 131MiB |
| 0 N/A N/A 93945 G /usr/bin/gnome-shell 53MiB |
| 0 N/A N/A 94329 G ...RendererForSitePerProcess 71MiB |
| 0 N/A N/A 97625 C ...a3/envs/aug_29/bin/python 9549MiB |
+-----------------------------------------------------------------------------+

nvtop shows the following output after running the above-mentioned tutorial:

[screenshot: nvtop output]

This tutorial runs OK on Google Colab.
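
For reference, the usual memory reductions for this tutorial on a ~10 GB card are a smaller ROI, activation checkpointing, and mixed precision. A rough sketch against the tutorial's SwinUNETR setup (the concrete values below are assumptions to tune, not the tutorial's defaults):

# Sketch of common memory reductions for the Swin UNETR tutorial on a ~10 GB GPU.
# All concrete values here are assumptions to experiment with, not known-good settings.
import torch
from monai.networks.nets import SwinUNETR

roi = (96, 96, 96)  # assumption: smaller than the tutorial's 128^3 patches
model = SwinUNETR(
    img_size=roi,
    in_channels=4,
    out_channels=3,
    feature_size=48,
    use_checkpoint=True,  # recompute activations in backward to trade compute for memory
).cuda()

scaler = torch.cuda.amp.GradScaler()  # mixed precision roughly halves activation memory
# Inside the training loop:
#     with torch.cuda.amp.autocast():
#         logits = model(batch["image"].cuda())
#         loss = loss_fn(logits, batch["label"].cuda())
#     scaler.scale(loss).backward()
#     scaler.step(optimizer)
#     scaler.update()

Lowering sw_batch_size in sliding_window_inference during validation helps in the same way.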


Navee402 commented Feb 1, 2024

Hi, I'm also encountering the same issue when trying to run this on my system with an NVIDIA RTX 3090 GPU. Have you found a solution? If so, could you please share it? Thank you so much.


Mgithus commented Mar 15, 2024

I tried, but it didn't work, so I switched to Google Colab.
