First stage finetuning on multi GPU #283
Unanswered
callendeamen
asked this question in Q&A
Replies: 1 comment
- As far as I know, neither finetuning nor second-stage training works on multiple GPUs. See this section of the repo README.
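If multi-GPU runs are indeed unsupported, one common workaround is to pin the process to a single GPU via `CUDA_VISIBLE_DEVICES` before launching training. A minimal sketch (the probe below only verifies the variable is set; replace it with the repo's actual finetuning command):

```shell
# Expose only GPU 0 to the process, so torch sees exactly one device.
export CUDA_VISIBLE_DEVICES=0
# Probe that the restriction is in place; in a real run, invoke the
# StyleTTS2 finetuning script here instead.
python3 -c "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"
```

With the variable set, `cuda:0` inside the process maps to the one visible physical GPU, so device-mismatch errors from stray replicas cannot occur.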
Hi everyone,
I'm fairly new to StyleTTS2 and I've been trying to finetune a mildly pruned version of StyleTTS2 on a multi-GPU instance (AWS g5.12xlarge), to no avail. Running on torch 2.4.1+cu121 with batch_size = 8 and max_len = 100 (both configs work on an unpruned model), I keep getting:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight_arr in method wrapper_CUDA___cudnn_rnn_flatten_weight)
I know this is not directly related to the base repo, but if anyone has any input on this, it's much appreciated. I have already checked that both the model and the inputs are sent to the device accordingly; my guess is that it's some issue related to having multiple GPUs, but I'm new to parallelization.
Thanks in advance
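For context, this error usually means at least one parameter (often a cuDNN RNN/LSTM weight) ended up on a different device than the rest, which is easy to cause when pruning replaces submodules after the model has been moved with `.to(device)`, or when `nn.DataParallel` replicas hold non-contiguous RNN weights. A quick diagnostic sketch (the `param_devices` helper and the small LSTM stand-in are illustrative, not part of StyleTTS2):

```python
import torch
import torch.nn as nn

def param_devices(model: nn.Module) -> set:
    """Return the set of devices the model's parameters live on."""
    return {p.device for p in model.parameters()}

# Small LSTM as a hypothetical stand-in for a StyleTTS2 submodule.
model = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

# A healthy model reports exactly one device here.
devices = param_devices(model)
assert len(devices) == 1, f"parameters split across devices: {devices}"

# Under nn.DataParallel, cuDNN RNN weights can also become non-contiguous
# per replica; calling flatten_parameters() inside forward() is a common
# mitigation for the cudnn_rnn_flatten_weight device error.
model.flatten_parameters()
out, _ = model(torch.randn(2, 5, 8))
print(out.shape)
```

Running `param_devices` on the full pruned model right before training starts should reveal whether a replaced submodule was left behind on the wrong device.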