Thank you for your great work! I tried to train the model with the command
on 4 nodes, each with 8 NVIDIA H20 GPUs. However, the training speed is much slower than expected: training 200 epochs on the ImageNet 256x256 dataset took more than 2 days. Are there any ways to accelerate the training? Thank you.
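For reference, the kind of speed-up I have in mind is something like bf16 mixed precision, which the H20 supports natively. Below is only a minimal sketch assuming a standard PyTorch training loop; `model`, `loader`, and `loss_fn` are placeholder names, not identifiers from this repository:

```python
import torch

# Hypothetical sketch of bf16 mixed-precision training (placeholder names,
# not this repository's code).
def train_one_epoch(model, loader, optimizer, loss_fn, device="cuda"):
    model.train()
    for x, y in loader:
        x = x.to(device, non_blocking=True)
        y = y.to(device, non_blocking=True)
        optimizer.zero_grad(set_to_none=True)
        # Run the forward pass and loss in bf16 autocast; parameters stay fp32.
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Is this the kind of change that would help here, or is something else (e.g. data loading) more likely to be the bottleneck?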
By the way, the model cannot generate good samples after 175 epochs of training. Here are some results without CFG:
I wonder whether these results are reasonable or not. Thank you!
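For clarity, by "without CFG" above I mean sampling with only the conditional prediction, i.e. a guidance scale of 1.0. The usual classifier-free guidance mixing I would otherwise apply looks roughly like the sketch below (my own illustration, not this repository's sampler; `eps_cond`, `eps_uncond`, and `cfg_scale` are placeholder names):

```python
# Hypothetical sketch of classifier-free guidance (CFG) mixing.
# eps_cond / eps_uncond are the model's conditional and unconditional
# predictions; cfg_scale == 1.0 reduces to plain conditional sampling.
def apply_cfg(eps_cond, eps_uncond, cfg_scale: float):
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)
```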