Try start train on CPU Intel #9

Open
PetroRudyi opened this issue May 8, 2023 · 2 comments

Hello

I tried to run train_fill50k.py, but I get the error below. I removed --mixed_precision="fp16" from the launch command and got the same result. Running the original command on a GPU in Colab also produces the same error.

05/08/2023 14:04:05 - INFO - __main__ - ***** Running training *****
05/08/2023 14:04:05 - INFO - __main__ -   Num examples = 50000
05/08/2023 14:04:05 - INFO - __main__ -   Num Epochs = 100
05/08/2023 14:04:05 - INFO - __main__ -   Instantaneous batch size per device = 1
05/08/2023 14:04:05 - INFO - __main__ -   Total train batch size (w. parallel, distributed & accumulation) = 1
05/08/2023 14:04:05 - INFO - __main__ -   Gradient Accumulation steps = 1
05/08/2023 14:04:05 - INFO - __main__ -   Total optimization steps = 5000000
Steps:   0%|          | 0/5000000 [00:00<?, ?it/s]
Steps:   0%|          | 0/5000000 [00:00<?, ?it/s]/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_ddpm.py:172: FutureWarning: Accessing `num_train_timesteps` directly via scheduler.num_train_timesteps is deprecated. Please use `  instead`
  deprecate(
Traceback (most recent call last):
  File "/Users/petro/PycharmProjects/ControlLoRA/train_text_to_image_control_lora.py", line 1006, in <module>
    main()
  File "/Users/petro/PycharmProjects/ControlLoRA/train_text_to_image_control_lora.py", line 782, in main
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 695, in forward
    sample, res_samples = downsample_block(
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 867, in forward
    hidden_states = attn(
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention.py", line 294, in forward
    attn_output = self.attn1(
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 243, in forward
    return self.processor(
  File "/Users/petro/PycharmProjects/ControlLoRA/models.py", line 230, in __call__
    attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 302, in prepare_attention_mask
    deprecate(
  File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/utils/deprecation_utils.py", line 18, in deprecate
    raise ValueError(
ValueError: The deprecation tuple ('batch_size=None', '0.0.15', 'Not passing the `batch_size` parameter to `prepare_attention_mask` can lead to incorrect attention mask preparation and is deprecated behavior. Please make sure to pass `batch_size` to `prepare_attention_mask` when preparing the attention_mask.') should be removed since diffusers' version 0.15.0 is >= 0.0.15
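
For context, the failing call is in this repo's custom attention processor (models.py, line 230 in the traceback), which calls attn.prepare_attention_mask(attention_mask, sequence_length) without a batch_size argument; once the installed diffusers version passes the deprecation cutoff, deprecate() raises instead of warning. A minimal local workaround might look like the two lines below. This is only a sketch, not the repo's official fix, and it assumes hidden_states is the tensor already in scope in the processor's __call__:

# Sketch only: derive the batch size from the input tensor and pass it through,
# which is what the diffusers deprecation message asks for.
batch_size = hidden_states.shape[0]
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)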
HighCWu (Owner) commented May 8, 2023

Maybe you should use diffusers==0.13.0.
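
For reference, pinning the dependency would be something like `pip install diffusers==0.13.0` inside the project's virtualenv (assuming a pip-based setup, as the paths in the traceback suggest).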

PetroRudyi (Author) commented

Heh

Thanks, training started.
