no module named inference, openfold with pytorch2 and cuda12 #259
Question for anyone out there -- I tried the following:
UPDATE: DiffDock ran the example I just submitted, launched from the DiffDock parent directory:
So success, right?!?! Can anyone tell me what the difference is between the .sdf files that are produced?
I couldn't find any clues in the preprint or the README here. Thanks.
Hi, regarding my issue: I manually downloaded all the files from the website, and after that I can run the DiffDock calculation successfully. However, when I try the new setup like you did, with PyTorch 2.5.1 and CUDA 12.1, it stays stuck for over 30 minutes at the first step of the new model. How long did you wait for the parameters to download?
Yes, your results are correct. Be aware that the confidence scores here are negative; you can read the confidence score description in the README:
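For comparing the produced .sdf files, a small script like the one below can rank them by the score embedded in their names. This is a sketch that assumes DiffDock-style output filenames of the form `rankN_confidence-X.XX.sdf` (the exact naming convention may differ in your version, so check your output directory first):

```python
import re

def parse_confidence(filename):
    """Extract the confidence score embedded in an output filename such as
    'rank2_confidence-0.83.sdf'. Returns None when no score is present
    (e.g. a bare 'rank1.sdf')."""
    m = re.search(r"confidence(-?\d+(?:\.\d+)?)", filename)
    return float(m.group(1)) if m else None

def sort_by_confidence(filenames):
    """Order pose files from highest to lowest confidence; files without
    an embedded score sort last."""
    scored = [(parse_confidence(f), f) for f in filenames]
    return [f for c, f in
            sorted(scored, key=lambda t: (t[0] is None, -(t[0] or 0)))]

poses = ["rank3_confidence-1.90.sdf",
         "rank1_confidence-0.21.sdf",
         "rank2_confidence-0.83.sdf"]
print(sort_by_confidence(poses))
# Highest (least negative) confidence first: rank1, rank2, rank3
```

Since the scores are negative, "highest confidence" means closest to zero, which is what the sort above produces.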
After working on setting up the proper environment for DiffDock, I thought I had a system that would let me roll with PyTorch 2 and CUDA 12. I went to run a pdb/ligand pair from the example folder and got the following error:
I found one other closed issue #175 with the exact same error message but no explanation of how it was resolved, so I am opening a new issue. To take this from the beginning...
My cloud environment is using cuda 12:
and this was not downgradeable to CUDA 11. After looking through the many issues about installing OpenFold to get DiffDock working (and failing several times myself to set up a DiffDock environment first and then install OpenFold), I decided to get a working version of OpenFold on my cluster first. Following the meticulous work and advice of another contributor on the OpenFold GitHub page, I was able to get a working OpenFold install based on PyTorch 2 and CUDA 12.
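A quick sanity check that can catch this class of mismatch early: PyTorch's CUDA build must agree with the system toolkit at the major-version level (an 11.x build will not run against a 12.x-only stack). The version strings below are hard-coded placeholders; in a live environment they would come from `torch.version.cuda` and `nvcc --version`:

```python
def major(version):
    """Return the major component of a dotted version string, e.g. '12.1' -> 12."""
    return int(version.split(".")[0])

def cuda_compatible(torch_cuda, system_cuda):
    """True when PyTorch's CUDA build matches the system toolkit's
    major version (11.x and 12.x builds are not interchangeable)."""
    return major(torch_cuda) == major(system_cuda)

# Placeholder values for illustration only.
print(cuda_compatible("12.1", "12.3"))  # True
print(cuda_compatible("11.8", "12.3"))  # False
```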
I then cloned my OpenFold environment into a DiffDock environment so I could work on getting DiffDock running successfully.
The next step was to pull in the remaining dependencies for DiffDock:
I specified the versions of ProDy, e3nn, rdkit, and gradio to stay consistent with the environment.yml in the DiffDock source code. I specified the versions of the torch_ dependencies based on my installed version of PyTorch (2.1.2). I had to pull a forked version of esm-fold to stay compatible with a PyTorch 2/CUDA 12 installation.
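Before launching a long job, it can help to verify that every dependency actually resolves in the active environment. The sketch below uses the standard-library `importlib` for this; the module names in `deps` are my guesses at the import names for the packages listed above, so adjust them to your environment:

```python
import importlib.util

def missing_modules(names):
    """Return the subset of module names that cannot be imported in the
    current environment -- a quick way to spot a gap such as the missing
    'inference' module before a run fails at startup."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Guessed import names for the dependency list above; 'inference' is the
# module the error message complains about.
deps = ["rdkit", "e3nn", "prody", "gradio", "torch_geometric", "inference"]
print(missing_modules(deps))
```

Anything printed by the last line is a package (or script) the interpreter cannot find on its current search path.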
The following is my list of conda packages for this environment:
And here are my modules:
You'll notice that inference is conspicuously absent from my module list, which brings up a couple of questions:
Thanks for the help -- I've been banging my head against this problem for nearly three weeks and thought I was THISCLOSE to a working install after getting OpenFold to work.
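For what it's worth, one possible explanation (an assumption on my part, not confirmed from the traceback): `inference.py` lives at the root of the DiffDock checkout rather than being an installed pip package, so `import inference` only resolves when the repo root is the working directory or on `sys.path`. That would also fit the UPDATE above, where running from the DiffDock parent directory succeeded. A minimal check, with `/path/to/DiffDock` as a hypothetical placeholder:

```python
import os
import sys

def can_resolve_script(module_name, repo_root):
    """Check whether a top-level script like DiffDock's inference.py
    would become importable if repo_root were prepended to sys.path."""
    candidate = os.path.join(repo_root, module_name + ".py")
    return os.path.isfile(candidate)

# Hypothetical path -- substitute your actual DiffDock checkout.
repo = "/path/to/DiffDock"
if can_resolve_script("inference", repo):
    sys.path.insert(0, repo)  # after this, `import inference` should succeed
else:
    print(f"inference.py not found under {repo}; run from the repo root instead")
```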