add llm providers accordion group
tonykipkemboi committed Oct 30, 2024
1 parent 5f46ff8 commit f83156d
Showing 1 changed file with 180 additions and 85 deletions.
265 changes: 180 additions & 85 deletions docs/concepts/llms.mdx
By default, CrewAI uses the `gpt-4o-mini` model, configured through the following environment variables:
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
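As a minimal sketch, the default configuration can be set entirely through these environment variables before any agent is created (the values below are placeholders):

```python Code
import os

# Placeholder values; use your real key and, if needed, a custom base URL.
os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"
os.environ["OPENAI_API_BASE"] = "https://api.openai.com/v1"
os.environ["OPENAI_API_KEY"] = "sk-placeholder"

# With these set, an Agent created without an explicit `llm` argument
# picks up the model and credentials from the environment.
```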

<Tabs>
<Tab title="String Identifier">
```python Code
agent = Agent(llm="gpt-4o", ...)
```
</Tab>
<Tab title="LLM Instance">
```python Code
from crewai import LLM

llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
</Tab>
</Tabs>

See the full list of [more providers](https://docs.litellm.ai/docs/providers).

### 3. Custom LLM Objects

Pass a custom LLM implementation or object from another library.
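As an illustrative sketch only (the class name `FixedResponseLLM` and the `call` method are hypothetical, and CrewAI's actual expected interface may differ), a custom object typically wraps another library behind a single completion method:

```python Code
# Hypothetical sketch of a custom LLM object; consult the CrewAI docs
# for the exact interface the framework expects before relying on this.
class FixedResponseLLM:
    """Hides any backend behind a single completion method."""

    def __init__(self, response: str):
        self.response = response

    def call(self, messages):
        # A real implementation would forward `messages` (a list of
        # role/content dicts) to another library; this stub returns a
        # canned completion for demonstration.
        return self.response


llm = FixedResponseLLM("Hello from a custom LLM")
# agent = Agent(llm=llm, ...)  # passed like any other LLM instance
```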

## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

<Tabs>
<Tab title="Using Environment Variables">
```python Code
import os

os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```
</Tab>
<Tab title="Using LLM Class Attributes">
```python Code
from crewai import LLM

llm = LLM(
model="custom-model-name",
api_key="your-api-key",
base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
</Tab>
</Tabs>

## LLM Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:
| **api_key** | `str` | Your API key for authentication. |


Here are example configurations for popular LLM providers:

<AccordionGroup>
<Accordion title="OpenAI">

```python Code
from crewai import LLM

llm = LLM(
model="gpt-4",
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42,
base_url="https://api.openai.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Cerebras">

```python Code
from crewai import LLM

llm = LLM(
model="cerebras/llama-3.1-70b",
base_url="https://api.cerebras.ai/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Ollama (Local LLMs)">

CrewAI supports using Ollama for running open-source models locally:

1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama3.1`
3. Configure agent:

```python Code
from crewai import LLM

agent = Agent(
llm=LLM(
model="ollama/llama3.1",
base_url="http://localhost:11434"
),
...
)
```
</Accordion>

<Accordion title="Groq">

```python Code
from crewai import LLM

llm = LLM(
model="groq/llama3-8b-8192",
base_url="https://api.groq.com/openai/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Anthropic">

```python Code
from crewai import LLM

llm = LLM(
model="anthropic/claude-3-5-sonnet-20241022",
base_url="https://api.anthropic.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Fireworks">

```python Code
from crewai import LLM

llm = LLM(
model="fireworks/meta-llama-3.1-8b-instruct",
base_url="https://api.fireworks.ai/inference/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Gemini">

```python Code
from crewai import LLM

llm = LLM(
model="gemini/gemini-1.5-flash",
base_url="https://api.gemini.google.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="Perplexity AI (pplx-api)">

```python Code
from crewai import LLM

llm = LLM(
model="perplexity/mistral-7b-instruct",
base_url="https://api.perplexity.ai/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>

<Accordion title="IBM watsonx.ai">

```python Code
from crewai import LLM

llm = LLM(
model="watsonx/ibm/granite-13b-chat-v2",
base_url="https://api.watsonx.ai/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
</AccordionGroup>

## Changing the Base API URL

