
No CPU fallback? #1222

Open
JeLuF opened this issue May 4, 2023 · 2 comments

Comments

@JeLuF

JeLuF commented May 4, 2023

Issue description

In our software, we install the ROCm-enabled PyTorch if we detect an AMD GPU on Linux; otherwise, we install the CUDA-enabled PyTorch. If the AMD GPU is not supported by ROCm, users get the error hipErrorNoBinaryForGpu: Unable to find code object for all current devices!.

Is there a way to fall back to CPU mode in the ROCm-enabled PyTorch?

Code example

>>> import torch
>>> torch.cuda.is_available()
"hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
Aborted (core dumped)
@YellowRoseCx

Maybe something like this?

import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # placeholder model for illustration

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if device.type == 'cuda':
    try:
        # Try to move the model to the ROCm-enabled device
        model.to(device)
    except RuntimeError as error:
        if 'Unable to find code object for all current devices' in str(error):
            # Fall back to CPU mode and move the model there
            device = torch.device('cpu')
            model.to(device)
        else:
            # Re-raise the error if it's not the one we expect
            raise
else:
    # Use CPU mode
    model.to(device)

@JeLuF

JeLuF commented May 7, 2023

Maybe something like this?

The problem is that torch.cuda.is_available() already core dumps.
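Since the abort happens inside the HIP runtime, it cannot be caught with try/except in the same process. A minimal sketch of one possible workaround (not from this thread, names are illustrative) is to probe the GPU in a child process, so a core dump only kills the probe and the main program can fall back to CPU:

import subprocess
import sys

import torch

# Run the availability check in a child process; if the HIP runtime aborts
# (e.g. hipErrorNoBinaryForGpu), only the probe process dies.
PROBE = "import torch; print(int(torch.cuda.is_available()))"

def gpu_usable() -> bool:
    try:
        result = subprocess.run(
            [sys.executable, "-c", PROBE],
            capture_output=True, text=True, timeout=60,
        )
    except subprocess.TimeoutExpired:
        return False
    # A non-zero return code means the probe crashed, so treat the GPU as unusable.
    return result.returncode == 0 and result.stdout.strip() == "1"

device = torch.device('cuda' if gpu_usable() else 'cpu')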
