The experimental results have a large gap with the ones in the README #13

Open · Sun-Happy-YKX opened this issue Dec 8, 2022 · 1 comment

@Sun-Happy-YKX

I cannot reproduce the result in the README. Has anyone else gotten the reported result (i.e., BLEU ≈ 26)?

I wonder whether the author used a different language pack version than mine, which would explain the large gap between my experimental results and the ones in the README.
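For anyone trying to check this hypothesis, one quick way to compare environments is to print the installed package versions. The package names below are assumptions on my part ("language pack" may refer to spaCy pipelines such as en_core_web_sm, which install as regular pip packages); the repo may use a different tokenizer backend entirely:

```python
from importlib.metadata import version, PackageNotFoundError

# Assumed package names for illustration; substitute whatever the repo
# actually depends on.
for pkg in ("torch", "spacy", "en_core_web_sm", "de_core_news_sm"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```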

@tanjeffreyz

Are you evaluating the saved model without re-training? This could be happening because the code doesn't seem to be seeded using torch.manual_seed, so the token embeddings are randomized every time you load the model.
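As an illustration of why the missing seed matters, here is a minimal sketch; the vocabulary and embedding sizes are placeholders, not the repo's actual configuration:

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration only; the repo's actual vocabulary
# and embedding dimensions may differ.
VOCAB_SIZE = 10_000
EMBED_DIM = 512

# Seeding before the model is constructed makes every randomly initialized
# tensor (including the token embedding table) identical across runs.
torch.manual_seed(0)
embedding = nn.Embedding(VOCAB_SIZE, EMBED_DIM)

# With the seed above, this prints the same values on every run; without
# it, the embedding table is different each time the script starts.
print(embedding.weight[0, :4])
```

That said, if the embeddings are re-initialized at load time because they are missing from the checkpoint, matching seeds would only mask the symptom; saving and restoring the model's full state_dict would likely be the more robust fix.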
