
Releases: mudler/LocalAI

v2.12.4

11 Apr 10:36

Patch release to include #1985

v2.12.3

10 Apr 09:15
d692b2c

I'm happy to announce the v2.12.3 LocalAI release is out!

🌠 Landing page and Swagger

Ever wondered what to do after LocalAI is up and running? Work on integration with a simple web interface has started, and you will now see a landing page when you hit the LocalAI front page:

[Screenshot: the new LocalAI landing page]

You can now also use Swagger to try out the API calls directly:

[Screenshot: the Swagger API explorer]
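
If you would rather poke the API directly than browse Swagger, a plain chat completion request is the quickest smoke test. A minimal sketch, assuming LocalAI is listening on localhost:8080 and that you started from an AIO image (which pre-configures OpenAI-style model names such as gpt-4):

# Basic chat completion against a local instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Say hello from LocalAI!"}]
  }'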

🌈 AIO images changes

The default model for CPU images is now https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!
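
Since Hermes-2-Pro-Mistral comes wired for the tools API, you can exercise function calling with an OpenAI-style request. A sketch, assuming a local instance on port 8080; the get_weather function is purely hypothetical, and the gpt-4 name relies on the AIO images' pre-configured aliases:

# Ask the model to pick and call a (hypothetical) tool
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "What is the weather like in Rome?"}],
    "tools": [{"type": "function", "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city",
      "parameters": {"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}
    }}],
    "tool_choice": "auto"
  }'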

🚀 OpenVINO and transformers enhancements

There is now support for OpenVINO, and the transformers backend got token streaming support, thanks to @fakezeta!

To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples
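
To watch the new token streaming at work, you can request a Server-Sent Events stream from the chat endpoint. A sketch, assuming you have already configured a transformers/OpenVINO model under the (hypothetical) name my-openvino-model following the docs above:

# -N disables curl buffering so chunks appear as they arrive
curl -N http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-openvino-model",
    "messages": [{"role": "user", "content": "Write a haiku about local inference"}],
    "stream": true
  }'
# The server replies with "data: {...}" chunks and ends with "data: [DONE]".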

🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced several areas:

  • The build time of LocalAI was sped up significantly, thanks to @cryptk's efforts in enhancing the build system
  • @thiner worked hard to bring Vision support to AutoGPTQ
  • ...and much more! See below for the full list, and be sure to star LocalAI and give it a try!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: downgrade torch by @mudler in #1902
  • fix(aio): correctly detect intel systems by @mudler in #1931
  • fix(swagger): do not specify a host by @mudler in #1930
  • fix(tools): correctly render tools response in templates by @mudler in #1932
  • fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
  • fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
  • fix(functions): respect when selected from string by @mudler in #1940
  • fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
  • fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
  • fix(welcome): stable model list by @mudler in #1949
  • fix(ci): manually tag latest images by @mudler in #1948
  • fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
  • fix regression #1971 by @fakezeta in #1972

Exciting New Features 🎉

  • feat(aio): add intel profile by @mudler in #1901
  • Enhance autogptq backend to support VL models by @thiner in #1860
  • feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
  • feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
  • feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
  • feat(welcome): add simple welcome page by @mudler in #1912
  • fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
  • feat(webui): add partials, show backends associated to models by @mudler in #1922
  • feat(swagger): Add swagger API doc by @mudler in #1926
  • feat(build): adjust number of parallel make jobs by @cryptk in #1915
  • feat(swagger): update by @mudler in #1929
  • feat: first pass at improving logging by @cryptk in #1956
  • fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961

📖 Documentation and examples

  • docs(aio-usage): update docs to show examples by @mudler in #1921

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.11.0...v2.12.3

v2.12.1

09 Apr 13:46
cc3d601

I'm happy to announce the v2.12.1 LocalAI release is out!

🌠 Landing page and Swagger

Ever wondered what to do after LocalAI is up and running? Work on integration with a simple web interface has started, and you will now see a landing page when you hit the LocalAI front page:

[Screenshot: the new LocalAI landing page]

You can now also use Swagger to try out the API calls directly:

[Screenshot: the Swagger API explorer]

🌈 AIO images changes

The default model for CPU images is now https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!

🚀 OpenVINO and transformers enhancements

There is now support for OpenVINO, and the transformers backend got token streaming support, thanks to @fakezeta!

To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples

🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced several areas:

  • The build time of LocalAI was sped up significantly, thanks to @cryptk's efforts in enhancing the build system
  • @thiner worked hard to bring Vision support to AutoGPTQ
  • ...and much more! See below for the full list, and be sure to star LocalAI and give it a try!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: downgrade torch by @mudler in #1902
  • fix(aio): correctly detect intel systems by @mudler in #1931
  • fix(swagger): do not specify a host by @mudler in #1930
  • fix(tools): correctly render tools response in templates by @mudler in #1932
  • fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
  • fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
  • fix(functions): respect when selected from string by @mudler in #1940
  • fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
  • fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
  • fix(welcome): stable model list by @mudler in #1949
  • fix(ci): manually tag latest images by @mudler in #1948
  • fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
  • fix regression #1971 by @fakezeta in #1972

Exciting New Features 🎉

  • feat(aio): add intel profile by @mudler in #1901
  • Enhance autogptq backend to support VL models by @thiner in #1860
  • feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
  • feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
  • feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
  • feat(welcome): add simple welcome page by @mudler in #1912
  • fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
  • feat(webui): add partials, show backends associated to models by @mudler in #1922
  • feat(swagger): Add swagger API doc by @mudler in #1926
  • feat(build): adjust number of parallel make jobs by @cryptk in #1915
  • feat(swagger): update by @mudler in #1929
  • feat: first pass at improving logging by @cryptk in #1956
  • fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961

📖 Documentation and examples

  • docs(aio-usage): update docs to show examples by @mudler in #1921

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.11.0...v2.12.1

v2.12.0

09 Apr 07:03

I'm happy to announce the v2.12.0 LocalAI release is out!

🌠 Landing page and Swagger

Ever wondered what to do after LocalAI is up and running? Work on integration with a simple web interface has started, and you will now see a landing page when you hit the LocalAI front page:

[Screenshot: the new LocalAI landing page]

You can now also use Swagger to try out the API calls directly:

[Screenshot: the Swagger API explorer]

🌈 AIO images changes

The default model for CPU images is now https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF - pre-configured for functions and tools API support!
If you are an Intel-GPU owner, the Intel profile for AIO images is now available too!

🚀 OpenVINO and transformers enhancements

There is now support for OpenVINO, and the transformers backend got token streaming support, thanks to @fakezeta!

To try OpenVINO, you can use the example available in the documentation: https://localai.io/features/text-generation/#examples

🎈 Lots of small improvements behind the scenes!

Thanks to our outstanding community, we have enhanced several areas:

  • The build time of LocalAI was sped up significantly, thanks to @cryptk's efforts in enhancing the build system
  • @thiner worked hard to bring Vision support to AutoGPTQ
  • ...and much more! See below for the full list, and be sure to star LocalAI and give it a try!

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

What's Changed

Bug fixes 🐛

  • fix: downgrade torch by @mudler in #1902
  • fix(aio): correctly detect intel systems by @mudler in #1931
  • fix(swagger): do not specify a host by @mudler in #1930
  • fix(tools): correctly render tools response in templates by @mudler in #1932
  • fix(grammar): respect JSONmode and grammar from user input by @mudler in #1935
  • fix(hermes-2-pro-mistral): add stopword for toolcall by @mudler in #1939
  • fix(functions): respect when selected from string by @mudler in #1940
  • fix: use exec in entrypoint scripts to fix signal handling by @cryptk in #1943
  • fix(hermes-2-pro-mistral): correct stopwords by @mudler in #1947
  • fix(welcome): stable model list by @mudler in #1949
  • fix(ci): manually tag latest images by @mudler in #1948
  • fix(seed): generate random seed per-request if -1 is set by @mudler in #1952
  • fix regression #1971 by @fakezeta in #1972

Exciting New Features 🎉

  • feat(aio): add intel profile by @mudler in #1901
  • Enhance autogptq backend to support VL models by @thiner in #1860
  • feat(assistant): Assistant and AssistantFiles api by @christ66 in #1803
  • feat: Openvino runtime for transformer backend and streaming support for Openvino and CUDA by @fakezeta in #1892
  • feat: Token Stream support for Transformer, fix: missing package for OpenVINO by @fakezeta in #1908
  • feat(welcome): add simple welcome page by @mudler in #1912
  • fix(build): better CI logging and correct some build failure modes in Makefile by @cryptk in #1899
  • feat(webui): add partials, show backends associated to models by @mudler in #1922
  • feat(swagger): Add swagger API doc by @mudler in #1926
  • feat(build): adjust number of parallel make jobs by @cryptk in #1915
  • feat(swagger): update by @mudler in #1929
  • feat: first pass at improving logging by @cryptk in #1956
  • fix(llama.cpp): set better defaults for llama.cpp by @mudler in #1961

📖 Documentation and examples

  • docs(aio-usage): update docs to show examples by @mudler in #1921

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.11.0...v2.12.0

v2.11.0

26 Mar 17:18
1395e50

Introducing LocalAI v2.11.0: All-in-One Images!

Hey everyone! 🎉 I'm super excited to share what we've been working on at LocalAI - the launch of v2.11.0. This isn't just any update; it's a massive leap forward, making LocalAI easier to use, faster, and more accessible for everyone.

🌠 The Spotlight: All-in-One Images, OpenAI in a box

Imagine having a magic box that, once opened, gives you everything you need to get your AI project off the ground with generative AI. A full clone of OpenAI in a box. That's exactly what our AIO images are! Designed for both CPU and GPU environments, these images come pre-packed with a full suite of models and backends, ready to go right out of the box.

Whether you're using Nvidia, AMD, or Intel, we've got an optimized image for you. If you are running CPU-only, you can enjoy even smaller and lighter images.

To start LocalAI pre-configured with function calling, LLM, TTS, speech-to-text, and image generation, just run:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

## Do you have an Nvidia GPU? Use one of these instead
## CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-11
## CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-cuda-12

❤️ Why You're Going to Love AIO Images:

  • Ease of Use: Say goodbye to the setup blues. With AIO images, everything is configured upfront, so you can dive straight into the fun part - hacking!
  • Flexibility: CPU, Nvidia, AMD, Intel? We support them all. These images are made to adapt to your setup, not the other way around.
  • Speed: Spend less time configuring and more time innovating. Our AIO images are all about getting you across the starting line as fast as possible.

🌈 Jumping In Is a Breeze:

Getting started with the AIO images is as simple as pulling one from Docker Hub or Quay and running it. We take care of the rest, downloading all the necessary models for you. For all the details, including how to customize your setup with environment variables, our updated docs have you covered here, and you can find more details about the AIO images here.
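
As an example of such a customization, a common tweak is persisting the auto-downloaded models across container restarts by mounting a local directory over the image's models path. A sketch, assuming the /build/models location used by the LocalAI images:

# Keep downloaded models in ./models on the host so restarts reuse them
docker run -p 8080:8080 -v $PWD/models:/build/models --name local-ai -ti localai/localai:latest-aio-cpu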

🎈 Vector Store

Thanks to the great contribution from @richiejp, LocalAI now has a new backend type, "vector stores", that allows you to use LocalAI as an in-memory vector database (#1792). You can learn more about it here!
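
A quick sketch of how the new stores API can be exercised, based on the endpoints introduced with this backend; double-check the exact field names against the linked documentation:

# Store some vectors together with their associated values
curl http://localhost:8080/stores/set \
  -H "Content-Type: application/json" \
  -d '{"keys": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]], "values": ["first document", "second document"]}'

# Find the closest stored vector to a query vector
curl http://localhost:8080/stores/find \
  -H "Content-Type: application/json" \
  -d '{"key": [0.1, 0.2, 0.3], "topk": 1}'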

🐛 Bug fixes

This release contains major bugfixes to the watchdog component, and a fix for a regression introduced in v2.10.x in which --f16, --threads and --context-size were not applied as the model's defaults.
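
With the regression fixed, startup flags are again picked up as model defaults. For instance, if you run the binary directly (the values here are only illustrative):

# Models without explicit YAML overrides inherit these defaults
local-ai --f16 --threads 8 --context-size 4096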

🎉 New Model defaults for llama.cpp

Model defaults have changed to automatically offload the maximum number of GPU layers if a GPU is available, and saner defaults are now set for models to enhance the LLM's output.

🧠 New pre-configured models

You can now run llava-1.6-vicuna, llava-1.6-mistral and hermes-2-pro-mistral; see Run other models for a list of all the pre-configured models available in the release.
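
A sketch of how one of the pre-configured models can be started, assuming the pass-the-model-name-as-container-command syntax shown in the Run other models section:

# Start LocalAI with one of the new pre-configured models
docker run -p 8080:8080 --name local-ai -ti localai/localai:v2.11.0-ffmpeg-core hermes-2-pro-mistral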

📣 Spread the word!

First off, a massive thank you (again!) to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsors can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI!

🔗 Links

🎁 What's More in v2.11.0?

Bug fixes 🐛

  • fix(config): pass by config options, respect defaults by @mudler in #1878
  • fix(watchdog): use ShutdownModel instead of StopModel by @mudler in #1882
  • NVIDIA GPU detection support for WSL2 environments by @enricoros in #1891
  • Fix NVIDIA VRAM detection on WSL2 environments by @enricoros in #1894

Exciting New Features 🎉

  • feat(functions/aio): all-in-one images, function template enhancements by @mudler in #1862
  • feat(aio): entrypoint, update workflows by @mudler in #1872
  • feat(aio): add tests, update model definitions by @mudler in #1880
  • feat(stores): Vector store backend by @richiejp in #1795
  • ci(aio): publish hipblas and Intel GPU images by @mudler in #1883
  • ci(aio): add latest tag images by @mudler in #1884

🧠 Models

  • feat(models): add phi-2-chat, llava-1.6, bakllava, cerbero by @mudler in #1879

📖 Documentation and examples

  • ⬆️ Update docs version mudler/LocalAI by @localai-bot in #1856
  • docs(mac): improve documentation for mac build by @tauven in #1873
  • docs(aio): Add All-in-One images docs by @mudler in #1887
  • fix(aio): make image-gen for GPU functional, update docs by @mudler in #1895

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.10.1...v2.11.0

v2.10.1

18 Mar 18:44
ed5734a

What's Changed

Bug fixes 🐛

  • fix(llama.cpp): fix eos without cache by @mudler in #1852
  • fix(config): default to debug=false if not set by @mudler in #1853
  • fix(config-watcher): start only if config-directory exists by @mudler in #1854

Exciting New Features 🎉

  • deps(whisper.cpp): update, fix cublas build by @mudler in #1846

Other Changes

Full Changelog: v2.10.0...v2.10.1

v2.10.0

16 Mar 15:43
8967ed1

LocalAI v2.10.0 Release Notes

Excited to announce the release of LocalAI v2.10.0! This version introduces significant changes, including breaking changes, numerous bug fixes, exciting new features, dependency updates, and more. Here's a summary of what's new:

Breaking Changes 🛠

  • The trust_remote_code setting in the model's YAML config file is now honored also by the AutoGPTQ and transformers backends, as an enhanced security measure, thanks to @dave-gray101's contribution (#1799). If your model relied on the old behavior and you are sure of what you are doing, set trust_remote_code: true in the YAML config file (see the sketch below).
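
A minimal sketch of opting back in (the file name, model name, and exact field placement are illustrative; check the model configuration docs for your backend):

cat <<EOF > models/mymodel.yaml
name: mymodel
backend: transformers
parameters:
  model: some-org/some-model-with-custom-code
# Opt back in to executing the model's custom code
trust_remote_code: true
EOF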

Bug Fixes 🐛

  • Various fixes have been implemented to enhance the stability and performance of LocalAI:
    • SSE no longer omits empty finish_reason fields for better compatibility with the OpenAI API, fixed by @mudler (#1745).
    • Functions now correctly handle scenarios with no results, also addressed by @mudler (#1758).
    • A Command Injection Vulnerability has been fixed by @ouxs-19 (#1778).
    • OpenCL-based builds for llama.cpp have been restored, thanks to @cryptk's efforts (#1828, #1830).
    • An issue with OSX build default.metallib has been resolved, which should now allow running the llama-cpp backend on Apple arm64, fixed by @dave-gray101 (#1837).

Exciting New Features 🎉

  • LocalAI continues to evolve with several new features:
    • Ongoing implementation of the assistants API, making great progress thanks to community contributions, including an initial implementation by @christ66 (#1761).
    • Addition of diffusers/transformers support for Intel GPU - now you can generate images and use the transformer backend also on Intel GPUs, implemented by @mudler (#1746).
    • Introduction of Bitsandbytes quantization for transformer backend enhancement and a fix for transformer backend error on CUDA by @fakezeta (#1823).
    • Compatibility layers for Elevenlabs and OpenAI TTS, enhancing text-to-speech capabilities: LocalAI is now compatible with Elevenlabs and OpenAI TTS, thanks to @mudler (#1834) - see the sketch after this list.
    • vLLM now supports stream: true! This feature was introduced by @golgeek (#1749).
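
For the OpenAI side of that compatibility layer, a request shaped like OpenAI's audio API should now work against LocalAI. A sketch, assuming a configured TTS model named tts-1; the voice value and output file name are illustrative:

# Generate speech via the OpenAI-compatible TTS route
curl http://localhost:8080/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "tts-1", "input": "Hello from LocalAI!", "voice": "alloy"}' \
  -o speech.wav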

Dependency Updates 👒

  • Our continuous effort to keep dependencies up-to-date includes multiple updates to ggerganov/llama.cpp, donomii/go-rwkv.cpp, mudler/go-stable-diffusion, and others, ensuring that LocalAI is built on the latest and most secure libraries.

Other Changes

  • Several internal changes have been made to improve the development process and documentation, including updates to integration guides, stress reduction on self-hosted runners, and more.

Details of What's Changed

Breaking Changes 🛠

Bug fixes 🐛

  • fix(sse): do not omit empty finish_reason by @mudler in #1745
  • fix(functions): handle correctly when there are no results by @mudler in #1758
  • fix(tests): re-enable tests after code move by @mudler in #1764
  • Fix Command Injection Vulnerability by @ouxs-19 in #1778
  • fix: the correct BUILD_TYPE for OpenCL is clblas (with no t) by @cryptk in #1828
  • fix: missing OpenCL libraries from docker containers during clblas docker build by @cryptk in #1830
  • fix: osx build default.metallib by @dave-gray101 in #1837

Exciting New Features 🎉

  • fix: vllm - use AsyncLLMEngine to allow true streaming mode by @golgeek in #1749
  • refactor: move remaining api packages to core by @dave-gray101 in #1731
  • Bump vLLM version + more options when loading models in vLLM by @golgeek in #1782
  • feat(assistant): Initial implementation of assistants api by @christ66 in #1761
  • feat(intel): add diffusers/transformers support by @mudler in #1746
  • fix(config): set better defaults for inferencing by @mudler in #1822
  • fix(docker-compose): update docker compose file by @mudler in #1824
  • feat(model-help): display help text in markdown by @mudler in #1825
  • feat: Add Bitsandbytes quantization for transformer backend enhancement #1775 and fix: Transformer backend error on CUDA #1774 by @fakezeta in #1823
  • feat(tts): add Elevenlabs and OpenAI TTS compatibility layer by @mudler in #1834
  • feat(embeddings): do not require to be configured by @mudler in #1842

👒 Dependencies

Other Changes

New Contributors

Thank you to all contributors and users for your continued support and feedback, making LocalAI better with each release!

Full Changelog: v2.9.0...v2.10.0

v2.9.0

24 Feb 10:47
ff88c39

This release brings many enhancements, fixes, and a special thanks to the community for the amazing work and contributions!

We now have sycl images for Intel GPUs, ROCm images for AMD GPUs, and much more:

  • You can find the AMD GPU image tags among the available container images - look for hipblas. For example, master-hipblas-ffmpeg-core. Thanks to @fenfir for this nice contribution!
  • Intel GPU images are tagged with sycl and come in two flavors, sycl-f16 and sycl-f32. For example, master-sycl-f16. Work is in progress to also support diffusers and transformers on Intel GPUs. See the example pull commands after this list.
  • Thanks to @christ66's first efforts in supporting the Assistants API, full support is planned. Stay tuned for more!
  • LocalAI now supports the Tools API endpoint - and it still supports the (now deprecated) functions API call as usual. We also now have support for SSE with function calling. See #1726 for more.
  • Support for Gemma models - did you hear? Google released OSS models, and LocalAI already supports them!
  • Thanks to @dave-gray101 for the efforts in #1728 to refactor parts of the code - we are soon going to support more ways to interface with LocalAI, not only the REST API!
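
As an example, here is how the flavors mentioned above can be pulled, assuming the quay.io registry where the master tags are published:

# AMD GPUs (ROCm/hipblas)
docker pull quay.io/go-skynet/local-ai:master-hipblas-ffmpeg-core
# Intel GPUs (sycl, f16 flavor)
docker pull quay.io/go-skynet/local-ai:master-sycl-f16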

Support the project

First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀

What's Changed

Bug fixes 🐛

Exciting New Features 🎉

👒 Dependencies

Other Changes

New Contributors

Full Changelog: v2.8.2...v2.9.0

v2.8.2

15 Feb 20:54
e690bf3

What's Changed

Bug fixes 🐛

  • fix(tts): fix regression when supplying backend from requests by @mudler in #1713

Full Changelog: v2.8.1...v2.8.2

v2.8.1

15 Feb 09:51
5e155fb

This is a patch release, mostly containing minor patches and bugfixes from 2.8.0.

Most importantly, it contains a bugfix for #1333, which made the llama.cpp backend get stuck in some cases when the model starts to hallucinate, as well as fixes to the Python-based backends.

Spread the word!

First off, a massive thank you to each and every one of you who've chipped in to squash bugs and suggest cool new features for LocalAI. Your help, kind words, and brilliant ideas are truly appreciated - more than words can say!

And to those of you who've been heroes, giving up your own time to help out fellow users on Discord and in our repo, you're absolutely amazing. We couldn't have asked for a better community.

Just so you know, LocalAI doesn't have the luxury of big corporate sponsors behind it. It's all us, folks. So, if you've found value in what we're building together and want to keep the momentum going, consider showing your support. A little shoutout on your favorite social platforms using @LocalAI_OSS and @mudler_it or joining our sponsorship program can make a big difference.

Also, if you haven't yet joined our Discord, come on over! Here's the link: https://discord.gg/uJAeKSAGDy

Every bit of support, every mention, and every star adds up and helps us keep this ship sailing. Let's keep making LocalAI awesome together!

Thanks a ton, and here's to more exciting times ahead with LocalAI! 🚀

What's Changed

Bug fixes 🐛

Exciting New Features 🎉

  • feat(tts): respect YAMLs config file, add sycl docs/examples by @mudler in #1692
  • ci: add cuda builds to release by @sozercan in #1702

Other Changes

Full Changelog: v2.8.0...v2.8.1