---
title: Agent Monitoring with OpenLIT
description: Quickly start monitoring your Agents in just a single line of code with OpenTelemetry.
icon: chart-line
---

# OpenLIT Overview

[OpenLIT](https://github.com/openlit/openlit?src=crewai-docs) is an open-source tool that makes it simple to monitor the performance of AI agents, LLMs, VectorDBs, and GPUs with just **one** line of code.
It offers OpenTelemetry-native tracing and metrics to track important parameters like cost, latency, interactions, and task sequences.
This setup enables you to track hyperparameters and monitor for performance issues, helping you find ways to enhance and fine-tune your agents over time.

![Overview of a select series of agent session runs](/images/langtrace1.png)
![Overview of agent traces](/images/langtrace2.png)
![Overview of LLM traces in detail](/images/langtrace3.png)

### Features

- **Analytics Dashboard**: Monitor your Agents' health and performance with detailed dashboards that track metrics, costs, and user interactions.
- **OpenTelemetry-native Observability SDK**: Vendor-neutral SDKs to send traces and metrics to your existing observability tools like Grafana, DataDog, and more.
- **Cost Tracking for Custom and Fine-Tuned Models**: Tailor cost estimations for specific models using custom pricing files for precise budgeting (see the configuration sketch after this list).
- **Exceptions Monitoring Dashboard**: Quickly spot and resolve issues by tracking common exceptions and errors with a monitoring dashboard.
- **Compliance and Security**: Detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.
- **API Keys and Secrets Management**: Securely handle your LLM API keys and secrets centrally, avoiding insecure practices.
- **Prompt Management**: Manage and version Agent prompts using PromptHub for consistent and easy access across Agents.
- **Model Playground**: Test and compare different models for your CrewAI agents before deployment.

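The dashboard labels and cost-tracking features above are driven by how you initialize the SDK. The snippet below is a minimal sketch rather than part of the official setup: the `application_name`, `environment`, and `pricing_json` parameter names are taken from recent OpenLIT Python SDK versions and may change, and the pricing-file URL is purely illustrative.

```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # OTLP endpoint from the setup steps below
    application_name="crewai-demo",         # assumed parameter: labels traces in the dashboard
    environment="development",              # assumed parameter: separates dev and prod data
    pricing_json="https://example.com/custom_pricing.json",  # assumed parameter: custom pricing file for cost tracking
)
```
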
## Setup Instructions

<Steps>
<Step title="Deploy OpenLIT">
<Steps>
<Step title="Git Clone OpenLIT Repository">
```shell
git clone git@github.com:openlit/openlit.git
```
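If you don't have SSH keys configured for GitHub, cloning over HTTPS works just as well:
```shell
git clone https://github.com/openlit/openlit.git
```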
</Step>
<Step title="Start Docker Compose">
From the root directory of the [OpenLIT Repo](https://github.com/openlit/openlit), run the following command:
```shell
docker compose up -d
```
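As an optional sanity check (not part of the official instructions), confirm the services are running before continuing. The OTLP endpoint used in the next steps listens at `http://127.0.0.1:4318`; the dashboard UI is commonly served on port 3000, but confirm the exact ports in the compose file.
```shell
# Optional: all OpenLIT services should report a running state
docker compose ps
```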
</Step>
</Steps>
</Step>
<Step title="Install OpenLIT SDK">
```shell
pip install openlit
```
</Step>
<Step title="Initialize OpenLIT in Your Application"> | ||
Add the following two lines to your application code: | ||
<Tabs> | ||
<Tab title="Setup using function arguments"> | ||
<Tabs> | ||
<Tab title="Python"> | ||
```python | ||
import openlit | ||
openlit.init(otlp_endpoint="http://127.0.0.1:4318") | ||
``` | ||
|
||
Example Usage for monitoring `OpenAI` Usage: | ||
|
||
```python | ||
from openai import OpenAI | ||
import openlit | ||
openlit.init(otlp_endpoint="http://127.0.0.1:4318") | ||
client = OpenAI( | ||
api_key="YOUR_OPENAI_KEY" | ||
) | ||
chat_completion = client.chat.completions.create( | ||
messages=[ | ||
{ | ||
"role": "user", | ||
"content": "What is LLM Observability", | ||
} | ||
], | ||
model="gpt-3.5-turbo", | ||
) | ||
``` | ||
</Tab> | ||
</Tabs> | ||
|
||
</Tab> | ||
<Tab title="Setup using Environment Variables"> | ||
|
||
<Tabs> | ||
<Tab title="Python"> | ||
Add the following two lines to your application code: | ||
```python | ||
import openlit | ||
openlit.init() | ||
``` | ||
|
||
Run the following command to configure the OTEL export endpoint: | ||
```shell | ||
export OTEL_EXPORTER_OTLP_ENDPOINT = "http://127.0.0.1:4318" | ||
``` | ||
|
||
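If you prefer to keep configuration in Python, setting the standard OpenTelemetry variable before calling `openlit.init()` is equivalent to the shell export above. This is just an alternative sketch, not an additional required step:
```python
import os
import openlit

# Equivalent to the shell export above; must run before openlit.init()
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://127.0.0.1:4318"
openlit.init()
```
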
Example usage for monitoring `OpenAI`:

```python
from openai import OpenAI
import openlit

# Initialization reads the endpoint from OTEL_EXPORTER_OTLP_ENDPOINT
openlit.init()

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
```
</Tab>
</Tabs>
</Tab>
</Tabs>
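The same initialization also covers your CrewAI code: call `openlit.init()` before building your crew, and the agent, task, and LLM activity from the run should appear in the dashboard. The sketch below uses an illustrative agent and task, and assumes your LLM provider credentials (for example `OPENAI_API_KEY`) are already configured for CrewAI:

```python
from crewai import Agent, Crew, Task
import openlit

# Initialize OpenLIT before creating the crew so agent runs are traced
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# Illustrative agent and task; replace with your own definitions
researcher = Agent(
    role="Researcher",
    goal="Explain what LLM observability means",
    backstory="An analyst who summarizes monitoring concepts clearly.",
)
summary_task = Task(
    description="Write a short explanation of LLM observability.",
    expected_output="A two-sentence explanation.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summary_task])
result = crew.kickoff()
print(result)
```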
Refer to the OpenLIT [Python SDK repository](https://github.com/openlit/openlit/tree/main/sdk/python) for more advanced configurations and use cases.
</Step>
</Steps>