- train (92k) - for PPO training
- test (83.6k) - for evaluation
- valid1 (33k) - for fine-tuning (SFT) the base model
- valid2 (50k) - for training the reward model
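A rough sketch of how the four splits above could be carved out of a single corpus with the `datasets` library; the source file, seed, and exact cut points are illustrative assumptions, not the notebook's exact recipe:

```python
from datasets import load_dataset

raw = load_dataset("json", data_files="summaries.jsonl")["train"]  # hypothetical source file

# First cut: hold out the test set (~83.6k examples).
cut1 = raw.train_test_split(test_size=83_600, seed=42)
test = cut1["test"]

# Second cut: keep ~92k for PPO, the rest for valid1 + valid2.
cut2 = cut1["train"].train_test_split(test_size=33_000 + 50_000, seed=42)
train = cut2["train"]  # for PPO

# Third cut: valid1 (33k, SFT) and valid2 (50k, reward model).
cut3 = cut2["test"].train_test_split(test_size=50_000, seed=42)
valid1, valid2 = cut3["train"], cut3["test"]
```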
We will be using the T5-base model (220M params): as an encoder-decoder, it is a good fit for summarization. Check the notebook to see how we applied SFT to this model; a sketch of the setup follows.
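Below is a minimal SFT sketch on `valid1` with `Seq2SeqTrainer`; the column names (`post`, `summary`), the `summarize:` prompt prefix, and the hyperparameters are assumptions, not the notebook's exact configuration.

```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def preprocess(batch):
    # T5 expects a task prefix; "summarize:" is the conventional one.
    enc = tokenizer(["summarize: " + p for p in batch["post"]],
                    max_length=512, truncation=True)
    enc["labels"] = tokenizer(text_target=batch["summary"],
                              max_length=64, truncation=True)["input_ids"]
    return enc

tokenized = valid1.map(preprocess, batched=True, remove_columns=valid1.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="./t5-base-sft",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
trainer.save_model("./t5-base-sft")
```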
The trained weights are available via the links in the Weights section below.
The target (reference) model will be the same as the fine-tuned base model: T5-base (220M params), an encoder-decoder suited to summarization.
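As a rough illustration of how the PPO stage could be wired up, here is a sketch using trl's `PPOTrainer` (the older 0.x API; exact signatures vary across trl versions). The `./t5-base-sft` path, the `post` column, and the `get_reward()` helper (defined with the reward model below) are assumptions, not the notebook's exact code.

```python
from transformers import AutoTokenizer
from trl import PPOConfig, PPOTrainer, AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("t5-base")
policy = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("./t5-base-sft")
reference = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("./t5-base-sft")  # frozen target model

ppo_trainer = PPOTrainer(PPOConfig(batch_size=8, mini_batch_size=8),
                         policy, reference, tokenizer)

for batch in train.iter(batch_size=8):  # the 92k split reserved for PPO
    queries = [tokenizer("summarize: " + p, return_tensors="pt").input_ids.squeeze(0)
               for p in batch["post"]]
    responses = ppo_trainer.generate(queries, max_new_tokens=64)
    rewards = [get_reward(p, tokenizer.decode(r, skip_special_tokens=True))
               for p, r in zip(batch["post"], responses)]  # one scalar tensor per sample
    ppo_trainer.step(queries, responses, rewards)
```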
The reward model: we will be using BERT, since an encoder is more appropriate for producing a reward or a penalty from the input. A sketch of the reward model and its training loss follows.
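The sketch below follows the standard RLHF recipe: BERT with a single regression head scoring a (post, summary) pair, trained on `valid2` with a pairwise preference loss. The checkpoint name, the input pairing, and the loss are our assumptions, not necessarily the notebook's exact setup.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

rm_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
rm = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def pairwise_loss(post, chosen, rejected):
    # The preferred summary should receive a higher score than the rejected one.
    good = rm_tokenizer(post, chosen, return_tensors="pt", truncation=True, max_length=512)
    bad = rm_tokenizer(post, rejected, return_tensors="pt", truncation=True, max_length=512)
    r_good = rm(**good).logits.squeeze(-1)
    r_bad = rm(**bad).logits.squeeze(-1)
    return -F.logsigmoid(r_good - r_bad).mean()

def get_reward(post, summary):
    # Scalar reward used during the PPO loop above.
    enc = rm_tokenizer(post, summary, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        return rm(**enc).logits.squeeze(-1)[0]
```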
Weights
- Weights in HF: JuanKO/rlhf
- Weights at: https://drive.google.com/drive/folders/1BKtlHKiv60unMdaXt5IEBnOgzF_6cSux?usp=sharing
The weights should be downloaded from this link to your local computer; once there, they can be used from the notebook. The notebook loads the model with a couple of lines:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("./model_bert_hf_experiment2/")
tokenizer = AutoTokenizer.from_pretrained("./model_bert_hf_experiment2/")
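Once loaded, the model can score a candidate summary; pairing the post with the summary and reading a single scalar logit is our assumption about how the saved head is used:

```python
import torch

post = "..."     # original post
summary = "..."  # candidate summary to score
inputs = tokenizer(post, summary, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    reward = model(**inputs).logits.squeeze(-1).item()
print(reward)
```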