---
title: Agent Monitoring with OpenLIT
description: Quickly start monitoring your Agents in just a single line of code with OpenTelemetry.
icon: chart-line
---

# OpenLIT Overview

[OpenLIT](https://github.com/openlit/openlit?src=crewai-docs) is an open-source tool that makes it simple to monitor the performance of AI agents, LLMs, vector databases, and GPUs with just **one** line of code.
It offers OpenTelemetry-native tracing and metrics to track important parameters like cost, latency, interactions, and task sequences.
This setup enables you to track hyperparameters and monitor for performance issues, helping you find ways to enhance and fine-tune your agents over time.

![Overview of a select series of agent session runs](/images/langtrace1.png)
![Overview of agent traces](/images/langtrace2.png)
![Overview of llm traces in details](/images/langtrace3.png)

### Features

- **Analytics Dashboard**: Monitor your agents' health and performance with detailed dashboards that track metrics, costs, and user interactions.
- **OpenTelemetry-native Observability SDK**: Vendor-neutral SDKs to send traces and metrics to your existing observability tools like Grafana, DataDog, and more.
- **Cost Tracking for Custom and Fine-Tuned Models**: Tailor cost estimations for specific models using custom pricing files for precise budgeting.
- **Exceptions Monitoring Dashboard**: Quickly spot and resolve issues by tracking common exceptions and errors with a monitoring dashboard.
- **Compliance and Security**: Detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.
- **API Keys and Secrets Management**: Securely handle your LLM API keys and secrets centrally, avoiding insecure practices.
- **Prompt Management**: Manage and version agent prompts using PromptHub for consistent and easy access across agents.
- **Model Playground**: Test and compare different models for your CrewAI agents before deployment.

## Setup Instructions

<Steps>
    <Step title="Deploy OpenLIT">
        Git clone the OpenLIT repository:
        ```shell
        git clone git@github.com:openlit/openlit.git
        ```

        From the root directory of the [OpenLIT repo](https://github.com/openlit/openlit), run the command below:
        ```shell
        docker compose up -d
        ```
    </Step>
    <Step title="Install the OpenLIT SDK">
        ```shell
        pip install openlit
        ```
    </Step>
    <Step title="Initialize OpenLIT in your application">
        Add the following two lines to your application code:

        <Tabs>
            <Tab title="Setup using function arguments">
                ```python
                import openlit

                openlit.init(otlp_endpoint="http://127.0.0.1:4318")
                ```

                Example usage for monitoring `OpenAI` usage:

                ```python
                from openai import OpenAI
                import openlit

                openlit.init(otlp_endpoint="http://127.0.0.1:4318")

                client = OpenAI(
                    api_key="YOUR_OPENAI_KEY"
                )

                chat_completion = client.chat.completions.create(
                    messages=[
                        {
                            "role": "user",
                            "content": "What is LLM Observability?",
                        }
                    ],
                    model="gpt-3.5-turbo",
                )
                ```
            </Tab>
            <Tab title="Setup using environment variables">
                Add the following two lines to your application code:
                ```python
                import openlit

                openlit.init()
                ```

                Run the following command to configure the OTEL export endpoint:
                ```shell
                export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
                ```

                Example usage for monitoring `OpenAI` usage:

                ```python
                from openai import OpenAI
                import openlit

                openlit.init()

                client = OpenAI(
                    api_key="YOUR_OPENAI_KEY"
                )

                chat_completion = client.chat.completions.create(
                    messages=[
                        {
                            "role": "user",
                            "content": "What is LLM Observability?",
                        }
                    ],
                    model="gpt-3.5-turbo",
                )
                ```
            </Tab>
        </Tabs>
    </Step>
    <Step title="Explore advanced configurations">
        Refer to the OpenLIT [Python SDK repository](https://github.com/openlit/openlit/tree/main/sdk/python) for more advanced configurations and use cases.
    </Step>
</Steps>