
expected scalar type Half but found BFloat16 #15

Open
nadora35 opened this issue Oct 31, 2024 · 7 comments

Comments

@nadora35

expected scalar type Half but found BFloat16

[screenshot: 2024-10-31 04_46_52 - Unsaved Workflow - ComfyUI]

@zaccheus

same error

@sipie800
Owner

sipie800 commented Nov 4, 2024

This error may be because ComfyUI autocasts the data type to bf16, but bf16 is supported only by a few newer NVIDIA GPUs.
It may be mistaking your GPU for one of the bf16-capable ones.
ComfyUI could instead run it in fp16, which every GPU supports, but fp16 needs more VRAM than bf16. FLUX PuLID may then simply fail to run, because even with bf16 it consumes huge amounts of VRAM, nearly maxing out a 4090.
Please report this to ComfyUI. Otherwise my best suggestion is to upgrade your GPU.
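As a quick way to check which case you are in, here is a minimal sketch (assuming a CUDA build of PyTorch; it is not part of the node) that reports whether the current GPU actually supports bf16:

    import torch

    # Minimal check, assuming a CUDA build of PyTorch: report whether the
    # current GPU can run bf16 natively (roughly Ampere / compute capability 8.x and newer).
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability()
        print(f"compute capability: {major}.{minor}")
        print(f"bf16 supported: {torch.cuda.is_bf16_supported()}")
    else:
        print("no CUDA device found")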

@nadora35
Author

nadora35 commented Nov 4, 2024

But a week ago I tested it and it worked... how is that!?

@sipie800
Owner

sipie800 commented Nov 4, 2024

Basically, I never updated anything that could relate to bf16 or anything like that.

@anwoflow

@nadora35 Have you found a solution for this?

@wandrzej

Basically, I never updated anything that could relate to bf16 or anything like that.

You literally have this in the code :)

        device = comfy.model_management.get_torch_device()
        # Why should I care what args say, when the unet model has a different dtype?!
        # Am I missing something?!
        #dtype = comfy.model_management.unet_dtype()
        dtype = model.model.diffusion_model.dtype
        # For 8bit use bfloat16 (because ufunc_add_CUDA is not implemented)
        if dtype in [torch.float8_e4m3fn, torch.float8_e5m2]:
            dtype = torch.bfloat16

@stranger-games

Basically, I never updated anything that could relate to bf16 or anything like that.

You literally have this in the code :)


I am not sure that's specifically where the problem is, because the same error happens even when using flux1-dev rather than flux1-dev-fp8.
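For reference, a minimal sketch (assuming a CUDA device; the tensors here are illustrative, not the node's actual variables) of how mixing fp16 and bf16 operands produces this kind of error, and the usual fix of casting one side to the other's dtype:

    import torch

    # Mixing fp16 and bf16 operands in one op raises a dtype-mismatch
    # RuntimeError similar to the message in this issue's title.
    a = torch.randn(4, 4, device="cuda", dtype=torch.float16)
    b = torch.randn(4, 4, device="cuda", dtype=torch.bfloat16)
    try:
        torch.matmul(a, b)
    except RuntimeError as e:
        print(e)

    # The usual fix: cast one side so both operands share a dtype.
    out = torch.matmul(a, b.to(a.dtype))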
