Change interface to causal_conv1d_update for continuous batching #29
Currently, the `conv_state` passed to `causal_conv1d_update` must be represented as a single tensor. This makes state management difficult for Jamba models in vLLM: with continuous batching, the items in the current batch arrive at different times, so their associated state is allocated at different times and therefore likely lives in different places. The initial Jamba support in vLLM (vllm-project/vllm#4115) dealt with this by allocating two buffers and copying the state for the current batch into a contiguous tensor. A subsequent PR (vllm-project/vllm#6739) added bookkeeping to remove that extra memory and to reduce the overhead of copying the state. (See the sketch below for the gather/scatter pattern a contiguous-only interface forces.)
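A minimal sketch of that workaround, assuming illustrative tensor names (`conv_state_cache`, `slot_of_seq` are not from the library): with a contiguous-only interface, the caller has to copy scattered per-sequence states into a staging buffer before the kernel call and copy them back afterwards.

```python
import torch

num_slots, dim, width = 8, 4, 3
conv_state_cache = torch.randn(num_slots, dim, width)  # per-sequence states, allocated over time
slot_of_seq = torch.tensor([6, 1, 4])                  # the current batch lives in scattered slots

# Before the kernel call: gather scattered states into a contiguous staging buffer.
staging = conv_state_cache[slot_of_seq].contiguous()   # (batch, dim, width)

# ... call causal_conv1d_update(x, staging, weight, ...) on the staging buffer ...

# After the call: scatter the updated states back to their original slots.
conv_state_cache[slot_of_seq] = staging
```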
This PR adds the ability to pass a list of indices to `causal_conv1d_update`, so that each element in the batch can come from a different location in a larger `conv_state` tensor.
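A pure-PyTorch reference sketch of the index-based behavior follows. It is not the CUDA kernel, and the parameter names `conv_state_cache` and `conv_state_indices` are illustrative assumptions about the interface rather than the exact signature.

```python
import torch

def causal_conv1d_update_ref(x, conv_state_cache, weight, bias=None,
                             conv_state_indices=None):
    """
    x:                  (batch, dim)             new token for each sequence
    conv_state_cache:   (num_slots, dim, width)  persistent cache, num_slots >= batch
    weight:             (dim, width)             depthwise conv weights
    conv_state_indices: (batch,) int tensor mapping batch position -> cache slot
    """
    batch, dim = x.shape
    if conv_state_indices is None:
        conv_state_indices = torch.arange(batch, device=x.device)

    # Gather each sequence's state from its (possibly non-contiguous) cache slot.
    state = conv_state_cache[conv_state_indices]        # (batch, dim, width)
    # Shift the window left by one position and append the new token.
    state = torch.roll(state, shifts=-1, dims=-1)
    state[:, :, -1] = x
    # Write the updated state back to the same slots, with no staging buffer.
    conv_state_cache[conv_state_indices] = state
    # Depthwise causal conv over the window: one output token per sequence.
    out = torch.einsum("bdw,dw->bd", state, weight)
    if bias is not None:
        out = out + bias
    return out
```

With this shape of interface, vLLM's block manager can hand the kernel the slot indices for the sequences currently scheduled, and the per-sequence states never need to be packed into a contiguous batch tensor.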