From 6d30abe14c65885da1852fc4010611569378a12b Mon Sep 17 00:00:00 2001
From: "github-actions[bot]"
diff --git a/0.3/tutorials/mesop_template/index.html b/0.3/tutorials/mesop_template/index.html
index 7ca50be4..81b25f70 100644
--- a/0.3/tutorials/mesop_template/index.html
+++ b/0.3/tutorials/mesop_template/index.html
@@ -59,7 +59,7 @@
└── pyproject.toml
To run LLM-based applications, you need an API key for the LLM being used. The most commonly used LLM provider is OpenAI. To use it, create an OpenAI API key and set it as an environment variable in the terminal using the following command:
If you want to use a different LLM provider, follow this guide.
Alternatively, you can skip this step and set the LLM API key as an environment variable later in the devcontainer's terminal. If you open the project in Visual Studio Code using GUI, you will need to manually set the environment variable in the devcontainer's terminal.
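The command itself is not preserved in the extract above; on Linux/macOS, setting the key typically looks like this (the key value is a placeholder, not a real key):

```shell
# Set the OpenAI API key for the current terminal session
# (replace the placeholder with your actual key)
export OPENAI_API_KEY="sk-..."
```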
Open the generated project in Visual Studio Code with the following command:
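The command is missing from the extracted text; assuming the project directory name shown in the pytest output later (`my_fastagency_app`), it would typically be:

```shell
# Open the generated project folder in VS Code
# (the folder name is assumed from the pytest rootdir shown below)
PROJECT_DIR="my_fastagency_app"
code "$PROJECT_DIR" 2>/dev/null || echo "VS Code 'code' CLI not on PATH"
```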
Once the project is opened, you will get the following option to reopen it in a devcontainer:
After reopening the project in the devcontainer, you can verify that the setup is correct by running the provided tests with the following command:
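The test command is not preserved in the extract; given the pytest session shown next, it would be run from the devcontainer's terminal at the project root, for example:

```shell
# Run the project's test suite from inside the devcontainer
pytest
```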
You should get the following output if everything is correctly set up.
=================================== test session starts ===================================
platform linux -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0
rootdir: /workspaces/my_fastagency_app
@@ -97,6 +97,6 @@
[2024-10-10 13:19:18 +0530] [23635] [INFO] Listening at: http://127.0.0.1:8000 (23635)
[2024-10-10 13:19:18 +0530] [23635] [INFO] Using worker: sync
[2024-10-10 13:19:18 +0530] [23645] [INFO] Booting worker with pid: 23645
-
The command will launch a web interface where users can input their requests and interact with the agents (in this case at http://localhost:8000).
Note
Ensure that your OpenAI API key is set in the environment, as the agents rely on it to interact using GPT-4o. If the API key is not correctly configured, the application may fail to retrieve LLM-powered responses.
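As a quick sanity check (a minimal sketch, not part of the generated project), you can confirm the key is visible to the process before launching the application:

```python
import os

# Check whether the OpenAI API key the agents depend on is present
key = os.environ.get("OPENAI_API_KEY")
if key:
    print("OPENAI_API_KEY is set")
else:
    print("OPENAI_API_KEY is missing - LLM calls will fail")
```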