Fix JSONDecodeError for Ollama and hide LiteLLM vision support warning #1524

Open · wants to merge 2 commits into base: development
Conversation

CyanideByte (Contributor) commented Nov 5, 2024

Describe the changes you have made:

  • Disabled function calling for Ollama, which fixes "json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)" (see the sketch after this list)
  • Hid the LiteLLM warning that the model doesn't support vision
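
A minimal sketch of what these two changes amount to, assuming Open Interpreter's llm wrapper exposes a `supports_functions` attribute and that the LiteLLM warning is emitted through Python logging under a "LiteLLM" logger name; both names are assumptions for illustration, not taken from the actual diff:

```python
import logging


def configure_llm(llm, model_name: str) -> None:
    """Hypothetical helper mirroring the two changes described above."""
    # Ollama models tend to return malformed tool-call JSON, which surfaces as
    # "json.decoder.JSONDecodeError: Unterminated string starting at ...".
    # Falling back to plain text completions sidesteps the tool-call path.
    if model_name.startswith("ollama/"):
        llm.supports_functions = False  # assumed attribute name

    # Hide LiteLLM's "model doesn't support vision" warning by raising the
    # threshold of its logger (logger name assumed to be "LiteLLM").
    logging.getLogger("LiteLLM").setLevel(logging.ERROR)
```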

Reference any relevant issues (e.g. "Fixes #000"):

Pre-Submission Checklist (optional but appreciated):

  • I have included relevant documentation updates (stored in /docs)
  • I have read docs/CONTRIBUTING.md
  • I have read docs/ROADMAP.md

OS Tests (optional but appreciated):

  • Tested on Windows
  • Tested on macOS
  • Tested on Linux

Notnaton (Collaborator) commented Nov 12, 2024

#1514

Notnaton (Collaborator) commented:

Does litellm have a dict of which models/providers support function calling?

If a function call fails, should we set the --no-supports_function_calling flag automatically?
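
For reference, litellm does expose a per-model capability lookup, litellm.supports_function_calling(model=...), backed by its model metadata map. A rough sketch of how it could drive the automatic fallback suggested above; the fallback wiring itself is an assumption, and coverage for local providers such as Ollama may be incomplete:

```python
import litellm


def should_disable_function_calling(model: str) -> bool:
    """Return True if we should fall back to plain completions for `model`."""
    try:
        # Looks the model up in litellm's capability metadata.
        return not litellm.supports_function_calling(model=model)
    except Exception:
        # Unknown model: safest to assume no function-calling support.
        return True


if should_disable_function_calling("ollama/llama3"):
    print("Falling back: model does not advertise function-calling support")
```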
