What do CLASS_IDX and SPLIT_IDX mean? #4
Comments
Hello,

The flag --target_class_idx is for parallelizing generation across different GPUs: the process runs separately for each class, both in the cls-wise version of training the DataDream weights and in cls- and dset-wise generation. In other words, if you want to split N classes among M GPUs, this is how you assign individual classes to GPUs; e.g. to put class 0 on GPU 1, you would pass --target_class_idx 0 to the process running on that GPU (see the sketch below). To generate the full dataset, you need to run this for each individual class, target_class_idx = 0 to N - 1.

With CLASS_IDX you specify each class individually (this can be convenient if you are using slurm). SPLIT_IDX, on the other hand, splits the classes evenly among the M available GPUs. In bash_run.sh, SET_SPLIT defines M (currently set to M = 5), so each GPU is allocated 1/5 of the jobs.
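For illustration, here is a minimal sketch of how the two modes might be launched from the shell. Only the flag --target_class_idx comes from the comment above; the script name generate.py, the use of CUDA_VISIBLE_DEVICES, and the modulo-based split are assumptions made for the example and are not taken from the actual bash_run.sh.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: per-class generation on separate GPUs.
# generate.py and CUDA_VISIBLE_DEVICES are assumptions; only
# --target_class_idx is named in the comment above.

N=10   # total number of classes in the dataset
M=5    # number of available GPUs (SET_SPLIT in bash_run.sh)

# Option 1 (CLASS_IDX): launch one job per class, e.g. class 0 on GPU 1.
# Convenient when each class is submitted as its own slurm job.
CUDA_VISIBLE_DEVICES=1 python generate.py --target_class_idx 0
# Repeat for every class index 0 .. N-1 to cover the full dataset.

# Option 2 (SPLIT_IDX): split the N classes evenly among M GPUs.
# Here GPU g handles every class c with c % M == g (one possible assignment);
# each GPU processes its classes sequentially, GPUs run in parallel.
for (( g=0; g<M; g++ )); do
  (
    for (( c=g; c<N; c+=M )); do
      CUDA_VISIBLE_DEVICES=$g python generate.py --target_class_idx "$c"
    done
  ) &
done
wait
```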
Thank you so much for your reply and help. I would also like to ask which SD 2.1 version you used, because with stabilityai/stable-diffusion-2-1-base I only get 91.07 accuracy on Cars. Was it stabilityai/stable-diffusion-2-1-base, stabilityai/stable-diffusion-2-1, or stabilityai/stable-diffusion-2-1-unclip? I really look forward to knowing the details of how you implemented it.
I don't know how to set CLASS_IDX and SPLIT_IDX. Can you give an example, please?