fix(deps): update dependency bitsandbytes to ^0.44.0 #169

Open · wants to merge 1 commit into master from renovate/bitsandbytes-0.x
Conversation

renovate[bot] (Contributor) commented Oct 30, 2024

This PR contains the following updates:

| Package | Change |
| --- | --- |
| bitsandbytes | ^0.43.1 -> ^0.44.0 |

Release Notes

TimDettmers/bitsandbytes (bitsandbytes)

v0.44.1

Compare Source

What's Changed

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.44.0...0.44.1

v0.44.0: New AdEMAMix optimizer, Embeddings quantization, and more!

Compare Source

New optimizer: AdEMAMix

The AdEMAMix optimizer is a modification to AdamW which proposes tracking two EMAs to better leverage past gradients. This allows for faster convergence with less training data and improved resistance to forgetting.
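
For reference, here is a sketch of the AdEMAMix update rule as we understand it from the paper (Pagliardini et al., 2024); this is our paraphrase, not part of these release notes. The (β1, β2, β3) and α below correspond to the betas tuple and alpha argument in the example that follows:

\begin{aligned}
m^{(1)}_t &= \beta_1 m^{(1)}_{t-1} + (1-\beta_1)\, g_t && \text{(fast EMA, as in Adam)} \\
m^{(2)}_t &= \beta_3 m^{(2)}_{t-1} + (1-\beta_3)\, g_t && \text{(slow EMA of past gradients)} \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\theta_t &= \theta_{t-1} - \eta \left( \frac{\hat{m}^{(1)}_t + \alpha\, m^{(2)}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda\, \theta_{t-1} \right)
\end{aligned}

where hats denote the usual Adam-style bias corrections.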

We've implemented 8bit and paged variations: AdEMAMix, AdEMAMix8bit, PagedAdEMAMix, and PagedAdEMAMix8bit. These can be used with a similar API to existing optimizers.

import bitsandbytes as bnb

# Paged, 8-bit AdEMAMix; betas = (beta1, beta2, beta3), alpha scales the slow EMA
optimizer = bnb.optim.PagedAdEMAMix8bit(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999, 0.9999),
    alpha=5.0,
    eps=1e-8,
    weight_decay=1e-2,
)

8-bit Optimizers Update

The block size for all 8-bit optimizers has been reduced from 2048 to 256 in this release. This departs from the implementation proposed in the original paper and improves accuracy. Usage is unchanged, as in the sketch below.
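
The 8-bit optimizers remain drop-in replacements for their torch.optim counterparts, and the new block size applies automatically. A minimal sketch (the model and learning rate are placeholders):

import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(512, 512).cuda()

# Drop-in replacement for torch.optim.Adam; optimizer state is stored in
# 8-bit and quantized block-wise (block size 256 as of this release).
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)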

CUDA Graphs support

A fix to enable CUDA Graphs capture of kernel functions was made in #1330. This allows for performance improvements with inference frameworks like vLLM. Thanks @jeejeelee!
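
For illustration, graph capture follows PyTorch's standard warmup-then-capture recipe; a minimal sketch with a 4-bit linear layer (layer shapes, dtypes, and warmup count are our own illustrative choices, not from the release notes):

import torch
from bitsandbytes.nn import Linear4bit

layer = Linear4bit(64, 64, compute_dtype=torch.float16).cuda()
static_input = torch.randn(8, 64, device="cuda", dtype=torch.float16)

# Warm up on a side stream before capture, per the PyTorch CUDA Graphs recipe
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        layer(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture a single forward pass, then replay it with new input data
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_output = layer(static_input)

static_input.copy_(torch.randn_like(static_input))
g.replay()  # static_output now holds the result for the updated input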

Quantization for Embeddings

The trend of LLMs to use larger vocabularies continues. The embeddings can take up a significant portion of a quantized model's footprint. We now have an implementation of Embedding4bit and Embedding8bit thanks to @galqiwi!

Example usage:

import torch
import torch.nn as nn

from bitsandbytes.nn import Embedding4bit

fp16_module = nn.Embedding(128, 64)
quantized_module = Embedding4bit(128, 64)

# Copy the full-precision weights into the quantized module
quantized_module.load_state_dict(fp16_module.state_dict())

# Moving to a CUDA device quantizes the weights to 4-bit
quantized_module = quantized_module.to(0)
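
A quick usage check (our addition, not from the release notes): lookups behave like a regular nn.Embedding, returning dequantized vectors:

ids = torch.randint(0, 128, (2, 8), device=0)
out = quantized_module(ids)  # shape (2, 8, 64)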

Continuous Builds

We are now building binary wheels for each change on main. These builds can be used to preview upcoming changes.

🚤 Continuous Build

What's Changed

New Contributors

Full Changelog: bitsandbytes-foundation/bitsandbytes@0.43.3...v0.44.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot force-pushed the renovate/bitsandbytes-0.x branch from c5ef113 to 02ba822 on November 6, 2024 09:18
renovate bot force-pushed the renovate/bitsandbytes-0.x branch from 02ba822 to 6665494 on November 19, 2024 16:05