Welcome to NOVA, a dynamic AI assistant for VRChat, designed to interact with users in various moods and respond to voice commands. This README provides an overview of the script, setup instructions, and usage guidelines.
NOVA is an AI assistant tailored for VRChat, integrating with OpenAI's API via LM Studio and utilizing Whisper for speech-to-text functionality. The script manages different moods, processes user input, and handles various commands to customize the assistant's behavior.
- Voice Commands: Accepts voice commands to change moods, restart the program, and more.
- Moods: Switches between different modes such as normal, argument, drunk, and more.
- Text-to-Speech: Converts text responses into speech using `pyttsx3`.
- Speech Recognition: Transcribes user speech to text using Whisper (see the sketch after this list).
- OpenAI Integration: Utilizes OpenAI’s API with LM Studio for generating responses.
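As a rough illustration of the speech-recognition and text-to-speech features, here is a minimal sketch. The path `input.wav` is hypothetical; the real script records audio from a virtual cable and handles files its own way.

```python
# Minimal sketch of the listen/speak loop. "input.wav" is a hypothetical
# recording; the real script captures audio from a virtual cable itself.
import whisper
import pyttsx3

# Transcribe the recorded speech to text with Whisper.
model = whisper.load_model("base")
result = model.transcribe("input.wav")
user_text = result["text"]

# Speak a reply with pyttsx3 (routed into VRChat via the virtual cable).
engine = pyttsx3.init()
engine.say(f"You said: {user_text}")
engine.runAndWait()
```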
Ensure you have the following Python libraries installed:
- openai
- pyttsx3
- whisper
- pydub
- pyautogui
- keyboard
- python-osc
- pyaudio
Install these dependencies using pip:
pip install openai pyttsx3 pyaudio whisper-openai pydub pyautogui keyboard python-osc
- Install LM Studio from the official LM Studio website.
- Search for "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF" and download the appropriate model.
- Navigate to the Local Inference Server page and configure the settings:
- Tokens to generate: 250
- CPU threads: 14
- Enable "Keep entire model in RAM"
- Click the "Start Server" button (a minimal client sketch for testing the server follows below).
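Once the server is running, the script reaches it through the OpenAI client pointed at the local endpoint. A minimal sketch for verifying the connection, assuming LM Studio's default port 1234 and the model name shown above (both may differ on your setup):

```python
# Minimal sketch: point the openai client (v1+) at LM Studio's local server.
# Port 1234 is LM Studio's default; the api_key value is a placeholder that
# the local server ignores. The model name should match whatever LM Studio
# shows for your loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF",
    messages=[
        {"role": "system", "content": "You are NOVA, a VRChat AI assistant."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
    max_tokens=250,
)
print(response.choices[0].message.content)
```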
Install a virtual audio cable driver that provides at least two cables (for example, Virtual Audio Cable A+B).
Configure VRChat and Windows audio settings:
- VRChat default mic (input): Cable B
- Windows default input/output: Cable A
- Enable OSC in VRChat's settings (the sketch below shows how the script sends chat messages over OSC).
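A minimal sketch of pushing text into the VRChat chatbox over OSC, assuming VRChat runs on the same machine and listens on its default OSC port 9000 (swap in the IP and port you configure later):

```python
# Minimal sketch of sending text to the VRChat chatbox over OSC.
# 127.0.0.1 and port 9000 are VRChat's defaults on a local machine;
# replace them with the IP/port you configure in main.py.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)
# /chatbox/input takes the message text and True to send it immediately.
client.send_message("/chatbox/input", ["Hello from NOVA!", True])
```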
- Install dependencies using the pip command above.
- Replace placeholders in the code with your system information (e.g., local IP, VRChat port).
- Set the audio device indexes in `audio_device_indexes.py` and update `main.py` accordingly (see the device-listing sketch after this list).
- Start VRChat and LM Studio.
- Run `main.py` and check the terminal for successful startup messages.
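The device-listing sketch referenced above prints every audio device index that PyAudio can see, so you can pick out the virtual cables. The exact variable names expected by `audio_device_indexes.py` live in that file, not here.

```python
# Print every audio device PyAudio can see; note the indexes of the
# virtual cables and use them in audio_device_indexes.py.
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    print(i, info["name"],
          "inputs:", info["maxInputChannels"],
          "outputs:", info["maxOutputChannels"])
pa.terminate()
```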
To add a new mood:
- Create a new system prompt file (e.g., `mad_system_prompt.txt`).
- Update the `mood_prompts` dictionary in the code (a hypothetical example follows this list).
- Add new commands in `command_catcher()` and `ai_system_command_catcher()`.
- Update the `additional_system_prompt.txt` file with the new mode details.
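A hypothetical sketch of how a new "mad" mood could slot into `mood_prompts`; the other file names and the loader function here are illustrative, and the real dictionary in `main.py` may be structured differently.

```python
# Illustrative only: the real mood_prompts in main.py may load prompts differently.
def load_prompt(path):
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

mood_prompts = {
    "normal": load_prompt("normal_system_prompt.txt"),      # hypothetical file name
    "argument": load_prompt("argument_system_prompt.txt"),  # hypothetical file name
    "drunk": load_prompt("drunk_system_prompt.txt"),        # hypothetical file name
    "mad": load_prompt("mad_system_prompt.txt"),            # the new mood's prompt file
}
```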
For development and troubleshooting, refer to the comments and documentation within the code. If you encounter errors, please contact me for help.
This project is licensed under the MIT License. See the LICENSE file for details.