v0.10.0
alexsherstinsky released this on 22 Feb 19:20
What's Changed
- Add Phi-2 to model presets by @arnavgarg1 in #3912 (see the fine-tuning sketch after this list)
- Add default LoRA target modules for Phi-2 by @arnavgarg1 in #3911
- Add support for prompt lookup decoding during generation by @arnavgarg1 in #3917 (see the inference sketch after this list)
- Pin pyarrow to < 15.0.0 by @arnavgarg1 in #3918
- Add unet encoder-decoder and image output feature by @vijayi1 in #3913 (see the segmentation sketch after this list)
- fix: Add Nested quantization check by @jeffkinnison in #3916
- fix typo in save_dequantized_base_model log statement by @arnavgarg1 in #3923
- Add example for base model dequantization/upscaling by @arnavgarg1 in #3924
- fix: Always return a list of quantization bits values from `get_quantization` by @jeffkinnison in #3926
- fix: set `use_reentrant` to `True` to fix `Mixtral-7b` bug by @geoffreyangus in #3928
- Disabling AdaptionPrompt till PEFT is fixed. by @alexsherstinsky in #3935
- Add default LoRA target modules for Gemma by @arnavgarg1 in #3936
- Pinning transformers to 4.38.1 or above in order to ensure support for Gemma by @alexsherstinsky in #3940
- Ludwig release version change by @alexsherstinsky in #3941
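To illustrate the Phi-2 preset (#3912) and the default LoRA target modules registered for it (#3911), here is a minimal fine-tuning sketch using the Ludwig Python API. The dataset columns, the toy in-memory dataset, and the optional 4-bit quantization settings are illustrative assumptions, not part of the linked changes.

```python
# Minimal sketch: LoRA fine-tuning of Phi-2 with Ludwig's Python API.
# Column names and the toy dataset are assumptions for illustration.
import pandas as pd
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "microsoft/phi-2",  # covered by the new Phi-2 preset (#3912)
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    # No target_modules given: Ludwig falls back to the Phi-2 defaults added in #3911.
    "adapter": {"type": "lora"},
    # Optional 4-bit quantization (QLoRA-style); requires a CUDA GPU and bitsandbytes.
    "quantization": {"bits": 4},
    "trainer": {"type": "finetune", "epochs": 1, "batch_size": 1},
}

df = pd.DataFrame({
    "instruction": ["Summarize: Ludwig v0.10.0 adds Phi-2 and Gemma support."],
    "output": ["Ludwig v0.10.0 adds Phi-2 and Gemma support."],
})

model = LudwigModel(config)
model.train(dataset=df)
```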
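Prompt lookup decoding (#3917) speeds up generation by drafting candidate tokens from n-grams already present in the prompt rather than from a separate draft model. The sketch below shows how it might be enabled at inference time; the `prompt_lookup_num_tokens` key mirrors the transformers `generate()` parameter of the same name and is an assumption about how the option is surfaced in the `generation` section.

```python
# Sketch: zero-shot batch inference with prompt lookup decoding enabled.
# The prompt_lookup_num_tokens key is assumed to be forwarded to
# transformers' generate(); adjust to the released schema if it differs.
import pandas as pd
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "microsoft/phi-2",
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "generation": {
        "max_new_tokens": 64,
        "prompt_lookup_num_tokens": 10,  # assumed key name; see #3917
    },
}

model = LudwigModel(config)
predictions, _ = model.predict(dataset=pd.DataFrame({"instruction": ["What is LoRA?"]}))
print(predictions.head())
```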
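The new unet encoder-decoder and the image output feature (#3913) make image-to-image tasks such as semantic segmentation expressible in a config. A rough sketch follows; the feature names and the assumption that both the encoder and decoder types are called `unet` are inferred from the PR title rather than taken from documentation.

```python
# Rough sketch: image-to-image (segmentation-style) config using the new
# unet encoder, unet decoder, and image output feature. Feature names and
# trainer settings are illustrative assumptions.
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {"name": "image_path", "type": "image", "encoder": {"type": "unet"}},
    ],
    "output_features": [
        {"name": "mask_path", "type": "image", "decoder": {"type": "unet"}},
    ],
    "trainer": {"epochs": 10, "batch_size": 8},
}

model = LudwigModel(config)
# model.train(dataset="segmentation.csv")  # CSV with image_path / mask_path columns
```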
New Contributors
Full Changelog: v0.9.3...v0.10.0