Explore the AI Singapore SEA-LION model with a chatbot 🤖 built with Chainlit and Ollama

Explore the AI Singapore SEA-LION model with Chainlit and Ollama

Overview

Note

This project is designed for local environments. Do not run it in production.

Meet the Cast

Getting Started

Prerequisites

Run the app

  • Install Ollama, if it is not already installed.
  • Pull the model.
    ollama pull aisingapore/gemma2-9b-cpt-sea-lionv3-instruct:q4_k_m
  • In the project directory, create a virtual environment.
    python -m venv venv
  • Activate the virtual environment.
    source venv/bin/activate
  • Copy .env.example to .env and update the values, if necessary:
    cp .env.example .env
  • Install the packages.
    pip install -r requirements.txt
    
  • Run the app.
    chainlit run src/main.py -w
  • Navigate to http://localhost:8000 to access the chatbot.
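
Under the hood, the chatbot talks to Ollama's HTTP API, which listens on http://localhost:11434 by default. The sketch below is a hypothetical single-turn request to Ollama's /api/chat endpoint using only the standard library; the actual src/main.py likely streams responses through Chainlit instead, and the endpoint URL and model tag are taken from the defaults above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint
MODEL = "aisingapore/gemma2-9b-cpt-sea-lionv3-instruct:q4_k_m"

def build_chat_request(prompt, model=MODEL):
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete reply instead of chunks
    }

def ask(prompt):
    """Send a single-turn chat request and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With Ollama running and the model pulled, `ask("Apa khabar?")` returns the model's reply as a string.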

Getting Started with Docker

Note

At the time of writing, GPU support in Docker Desktop is only available on Windows with the WSL2 backend.

Prerequisites

  • Docker
    • For the default model, set the memory limit to 6GB or more.
    • If a larger model is used, or if other Docker containers are active in the environment, increase the memory limit further to account for their memory requirements.

Run the app with Docker

  • Copy .env.example to .env and update the values, if necessary:
    cp .env.example .env
  • Start the services:
    docker compose up
  • Pull the SEA-LION model with Ollama:
    docker compose exec ollama ollama pull aisingapore/gemma2-9b-cpt-sea-lionv3-instruct:q4_k_m
  • Navigate to http://localhost:8000 to access the chatbot.
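
The `docker compose up` step assumes a compose file that runs Ollama alongside the app. The repository's actual docker-compose.yml may differ; the following is a hypothetical sketch, where the `ollama` service name matches the `docker compose exec ollama` command above, while the `app` service name, Dockerfile build, and `OLLAMA_BASE_URL` variable are assumptions:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama          # persist pulled models across restarts
  app:                                     # service name is an assumption
    build: .
    ports:
      - "8000:8000"                        # Chainlit UI
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # hypothetical variable name
    depends_on:
      - ollama

volumes:
  ollama-data:
```

Inside the compose network the app reaches Ollama by service name (`http://ollama:11434`) rather than `localhost`.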

Default Model

Customisations

Acknowledgements
