
beam_search fix for running with torch.use_deterministic_algorithms(True) #1096

Merged: 2 commits into awslabs:main on Sep 7, 2023

Conversation

@Jehovan (Contributor) commented on Sep 7, 2023

Context:
PyTorch has some bugs related to assigning scalar values to tensor elements when using torch.use_deterministic_algorithms(True) that were only addressed in the most recent PyTorch versions. See:

  • pytorch/pytorch@2d4b1ae
  • pytorch/pytorch#68525
The scalar assignment in beam_search.forward would throw:

```
RuntimeError: linearIndex.numel()*sliceSize*nElemBefore == value.numel()INTERNAL ASSERT FAILED at "/pytorch/aten/src/ATen/native/cuda/Indexing.cu":250, please report a bug to PyTorch. number of flattened indices did not match number of elements in the value tensor31
```

when running with torch.use_deterministic_algorithms(True).

The change in this PR does not change performance, but works around the two bugs mentioned above.
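
For illustration, here is a minimal sketch of the failing pattern and the kind of workaround (this is not the actual Sockeye diff; the `scores` and `rows` names are hypothetical): assigning a Python scalar through advanced indexing can hit the deterministic index_put assertion on affected PyTorch versions with CUDA, while assigning a tensor whose element count matches the indexed positions avoids it.

```python
import torch

# Illustrative only: enable strict deterministic kernels.
torch.use_deterministic_algorithms(True)

device = "cuda" if torch.cuda.is_available() else "cpu"
scores = torch.zeros(4, 8, device=device)   # hypothetical stand-in for beam scores
rows = torch.tensor([0, 2], device=device)  # hypothetical index tensor

# On affected PyTorch versions (CUDA only), this scalar assignment could raise
# the INTERNAL ASSERT shown above:
#     scores[rows, 0] = 1.0

# Workaround: assign a tensor whose number of elements matches the indexed
# positions instead of a Python scalar.
scores[rows, 0] = torch.full((rows.numel(),), 1.0, device=device)
```

When the positions are expressed as a boolean mask, masked_fill_ is another way to avoid the scalar indexed assignment.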

Testing done:

  • Ran with a custom model, using torch.use_deterministic_algorithms(True) (see the setup sketch below)
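
As a hedged sketch of that kind of test setup (not taken from this PR; the seeding and the CUBLAS_WORKSPACE_CONFIG variable are standard PyTorch determinism guidance, and the model-loading step is elided):

```python
import os

# On CUDA, deterministic cuBLAS requires this to be set before the first
# cuBLAS call, so set it before importing/using torch on the GPU.
os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")

import torch

torch.manual_seed(0)                      # fixed seed for reproducible sampling
torch.use_deterministic_algorithms(True)  # error out on nondeterministic ops

# ... load and run the model as usual (e.g. via the sockeye-translate CLI or
# the Sockeye Python API); beam search then runs under deterministic mode.
```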

Pull Request Checklist

  • Changes are complete (if posting work-in-progress code, prefix your pull request title with '[WIP]' until you can check this box).
  • Unit tests pass (pytest)
  • Were system tests modified? If so, did you run them at least 5 times to account for the variation across runs?
  • System tests pass (pytest test/system)
  • Passed code style checking (./style-check.sh)
  • You have considered writing a test
  • Updated major/minor version in sockeye/__init__.py. Major version bump if this is a backwards incompatible change.
  • Updated CHANGELOG.md

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@mjdenkowski merged commit 2d80b2a into awslabs:main on Sep 7, 2023
4 checks passed