
Llama 2 Science MCQ Solver

Fine-tuned Llama-2-7b to solve science MCQs

About

This project was inspired by this Kaggle competition. The training data from that competition was used to fine-tune the model.

The model was trained using PEFT and LoRA, and the Weights & Biases API was used to track resource consumption and training loss. It was fine-tuned to a loss of 0.058200, which seems good enough for practical purposes. The submission.csv for the Kaggle competition will also be uploaded here in the root directory.
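As a rough illustration of why PEFT/LoRA makes fine-tuning a 7B-parameter model tractable, the sketch below counts trainable parameters for a rank-r LoRA adapter on a single weight matrix. The shapes are illustrative (typical of Llama-2-7b attention projections), not taken from this repo's actual training config:

```python
def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """LoRA freezes the original d_out x d_in weight and trains only
    two low-rank factors: A (r x d_in) and B (d_out x r)."""
    return r * d_in + d_out * r

# Illustrative 4096 x 4096 projection matrix, LoRA rank 8.
full = 4096 * 4096                            # params touched by full fine-tuning
lora = lora_trainable_params(4096, 4096, r=8)  # params touched by LoRA
print(full, lora, lora / full)                 # LoRA trains under 1% of the matrix
```

Scaling this across all adapted layers is what lets a single GPU handle the fine-tune.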

Getting Started

- Create a new environment
- Activate it
- Install the dependencies from requirements.txt

```sh
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Usage

This model has been uploaded to Hugging Face. It is accessible through, and compatible with, the standard Hugging Face APIs.
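A minimal loading sketch using the `transformers` library. The prompt template below is a hypothetical example, not necessarily the format used in training; and if the Hugging Face repo hosts only LoRA adapter weights rather than merged weights, you would load it with `peft`'s `AutoPeftModelForCausalLM` instead:

```python
def format_prompt(question: str, options: list[str]) -> str:
    # Hypothetical MCQ prompt template -- the repo's actual training
    # format may differ.
    letters = "ABCDE"
    opts = "\n".join(f"{letters[i]}. {o}" for i, o in enumerate(options))
    return f"Question: {question}\nOptions:\n{opts}\nAnswer:"

def load_model(repo_id: str = "Veer15/llama2-science-mcq-solver"):
    # Imported lazily so format_prompt works without these dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return model, tokenizer

# Example usage (requires network access and enough memory for a 7B model):
# model, tokenizer = load_model()
# prompt = format_prompt("Which planet is known as the Red Planet?",
#                        ["Venus", "Mars", "Jupiter"])
# inputs = tokenizer(prompt, return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=5)
# print(tokenizer.decode(out[0], skip_special_tokens=True))
```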


If you use this model in your work, please cite it with the following BibTeX entry:

```bibtex
@misc{viraj_shah_2023,
	author       = { {Viraj Shah} },
	title        = { llama2-science-mcq-solver (Revision baa10d4) },
	year         = 2023,
	url          = { https://huggingface.co/Veer15/llama2-science-mcq-solver },
	doi          = { 10.57967/hf/1038 },
	publisher    = { Hugging Face }
}
```
