Remove dependency on torch_scatter and torch_sparse #229
Conversation
Manually implement message passing in all architectures, removing the dependency on pytorch_geometric.
cc @guillemsimeon @AntonioMirarchi. Correctness tests are passing, but for this PR I had to reimplement the message passing of ET, T, and GN. I do not want to merge it before a long training run has been carried out to confirm that nothing broke.
For the ET, the MD17 results are fine. For the T, I would remove the architecture from TorchMD-NET altogether, as I do not know of any case where it has been used. For the GN, you can perhaps use the ones in https://arxiv.org/pdf/2212.07492.pdf
@giadefa, do you agree with removing the Transformer architecture altogether? It's less dev burden, for sure...
Raul, could you also train TensorNet on QM9 U0 with seed 1 as part of these tests? The YAML is in the repo.
It looks good to me.
Removes the dependency on torch_sparse by replacing its message-passing operations with manually written ones.
Removes the dependency on torch_scatter by using torch.index_add and torch.scatter_reduce.
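
A minimal sketch of the idea, assuming a scatter-sum aggregation of edge messages into receiving nodes; the function and variable names (`scatter_sum_index_add`, `messages`, `receivers`) are illustrative and not taken from the PR:

```python
import torch

def scatter_sum_index_add(messages: torch.Tensor, receivers: torch.Tensor,
                          num_nodes: int) -> torch.Tensor:
    """Sum each edge's message into its receiving node's row via index_add."""
    out = torch.zeros(num_nodes, messages.shape[1],
                      dtype=messages.dtype, device=messages.device)
    out.index_add_(0, receivers, messages)  # in-place per-node accumulation
    return out

def scatter_sum_scatter_reduce(messages: torch.Tensor, receivers: torch.Tensor,
                               num_nodes: int) -> torch.Tensor:
    """Same aggregation using scatter_reduce instead of index_add."""
    out = torch.zeros(num_nodes, messages.shape[1],
                      dtype=messages.dtype, device=messages.device)
    # scatter_reduce expects the index tensor to match the source's shape.
    idx = receivers.unsqueeze(-1).expand_as(messages)
    return out.scatter_reduce(0, idx, messages, reduce="sum")

# Quick check: both paths agree on a toy graph with 4 edges and 3 nodes.
msgs = torch.randn(4, 8)
recv = torch.tensor([0, 2, 0, 1])
assert torch.allclose(scatter_sum_index_add(msgs, recv, 3),
                      scatter_sum_scatter_reduce(msgs, recv, 3))
```

Since both functions use only native PyTorch ops, the package no longer needs the compiled torch_scatter/torch_sparse extensions, which simplifies installation.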