
TransformerXL: Using SGD instead of LAMB optimizer (Hyperparameter request) #1414

Open
HariSeldon11988 opened this issue Aug 2, 2024 · 0 comments

Training the TransformerXL model with the default LAMB optimizer reproduces the results shown in the README.

However, I need the SGD optimizer for a project, and using SGD leads to significantly worse results, as can be seen below.

Has anyone tested this and perhaps tuned the hyperparameters, or can anyone provide some information on how to improve the training result?

I would appreciate any help or input.
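
For reference, the change I am experimenting with looks roughly like the sketch below (plain PyTorch; the model stand-in, step counts, base learning rate, and clipping value are illustrative guesses, not tuned values from this repo):

```python
import math
import torch

# Hypothetical stand-in for the actual TransformerXL model built by this
# repo's training script.
model = torch.nn.Linear(512, 512)

max_step = 40000     # total training steps (guess, not the repo default)
warmup_step = 4000   # linear warmup steps (guess)
base_lr = 0.1        # SGD usually wants a much larger base LR than LAMB
clip = 0.25          # max gradient norm (guess)

optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)

# Linear warmup followed by cosine decay -- a common starting recipe when
# an adaptive optimizer such as LAMB is replaced by plain SGD.
def lr_lambda(step):
    if step < warmup_step:
        return step / max(1, warmup_step)
    progress = (step - warmup_step) / max(1, max_step - warmup_step)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# One illustrative training step on dummy data.
x = torch.randn(8, 512)
loss = model(x).pow(2).mean()
loss.backward()
# Transformers trained with plain SGD are sensitive to gradient spikes.
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```

Warmup, a larger base learning rate, and gradient clipping are the usual first knobs when moving from an adaptive optimizer to SGD, but so far I have not found a combination that closes the gap.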

[Image: LAMBvsSGD — comparison of training results with LAMB vs. SGD]
