Commit
[shardformer] update transformers (hpcaitech#5583)
* flash_attention forward upgrade
* llama_model_forward
* remove useless comment
* update the requirements.txt
* add the transformers version requirements
* remove the LATEST VERSION try
* [shardformer] update bloom model (hpcaitech#5518)
  * update bloom model
  * remove the version restriction
* [shardformer] update_falcon (hpcaitech#5520)
* [shardformer] update mistral model (hpcaitech#5511)
* [shardformer] update gpt2 (hpcaitech#5502)
* [shardformer] update gptj model (hpcaitech#5503)
* [shardformer] update opt (hpcaitech#5522)
* [shardformer] update t5 model (hpcaitech#5524)
* [shardformer] update whisper model (hpcaitech#5529)
* [shardformer] update vit model (hpcaitech#5530)
  * update vit model
  * remove the output_hidden_states
* [shardformer] fix llama modeling
* [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
* [zero] support multiple (partial) backward passes (hpcaitech#5596)
  * [zero] support multiple (partial) backward passes
  * [misc] update requirements
* fix conflicts
* [doc] fix ColossalMoE readme (hpcaitech#5599)
  * fix readme
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* merge with main
* [hotfix] Fix examples no pad token & auto parallel codegen bug (hpcaitech#5606) (see the pad-token sketch after this list)
  * fix no pad token bug
  * fixed some auto parallel codegen bugs, but might not run on torch 2.1
  Co-authored-by: Edenzzzz <[email protected]>
* [shardformer] fix pipeline grad ckpt (hpcaitech#5620)
* [shardformer] fix whisper (hpcaitech#5628)
* [test] fix llama model test
* fix the opt upgrade (hpcaitech#5634)
* [shardformer] fix attn replacement (hpcaitech#5636)
* [shardformer] update flashattention replacement (hpcaitech#5637) (see the attention sketch after this list)
  * update transformers
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [test] fix llama test (hpcaitech#5638)
* [gemini] fix buffer cast (hpcaitech#5639)
* Fix shardformer upgrade (hpcaitech#5640)
  * fix llama model
  * fix the mistral
  * fix the shardformer model
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [shardformer] support pipeline parallelism for mistral (hpcaitech#5642)
  * [Feature] Support LLaMA-3 CPT and ST (hpcaitech#5619)
    * support LLaMA-3
    * Run pre-commit
  * [example] update llama example (hpcaitech#5626)
    * [plugin] support dp inside for hybrid parallel
    * [example] update llama benchmark
    * [example] update llama readme
  * [example] llama3 (hpcaitech#5631)
    * release llama3
  * support pp for mistral
  Co-authored-by: Hongxin Liu <[email protected]>
  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Tong Li <[email protected]>
  Co-authored-by: binmakeswell <[email protected]>

---------

Co-authored-by: Hongxin Liu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Camille Zhong <[email protected]>
Co-authored-by: Edenzzzz <[email protected]>
Co-authored-by: flybird11111 <[email protected]>
Co-authored-by: Tong Li <[email protected]>
Co-authored-by: binmakeswell <[email protected]>
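The "fix no pad token bug" entry in hpcaitech#5606 is only named above. As context, a common shape for this kind of fix in Hugging Face `transformers`-based example scripts is to fall back to the EOS token when the tokenizer ships without a padding token (LLaMA-style tokenizers do). The snippet below is an illustrative sketch of that pattern, not the actual patch; the model repository name is a placeholder assumption.

```python
from transformers import AutoTokenizer

# Illustrative sketch only -- not the actual hpcaitech#5606 change.
# The model name below is a placeholder; LLaMA-style tokenizers define
# no pad token, which breaks batched padding in example scripts.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

if tokenizer.pad_token is None:
    # Reuse the EOS token as the padding token so padding=True works.
    tokenizer.pad_token = tokenizer.eos_token

batch = tokenizer(
    ["hello world", "a longer example sentence"],
    padding=True,
    return_tensors="pt",
)
print(batch["input_ids"].shape)
```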
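The flash-attention entries (hpcaitech#5636, hpcaitech#5637) likewise only name the change. As a generic illustration of what replacing an attention forward with a fused kernel means, the sketch below swaps a naive softmax attention for PyTorch's `scaled_dot_product_attention` (available since PyTorch 2.0), which dispatches to FlashAttention or memory-efficient kernels when possible. Tensor shapes are assumptions; this is not ShardFormer's actual replacement code.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only -- not ShardFormer's attention replacement.
# Shapes follow the usual (batch, heads, seq_len, head_dim) convention.
def naive_attention(q, k, v, causal=True):
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if causal:
        # Mask out future positions (strictly upper triangle).
        mask = torch.triu(
            torch.ones(scores.shape[-2:], dtype=torch.bool, device=scores.device),
            diagonal=1,
        )
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

def fused_attention(q, k, v, causal=True):
    # Fused kernel path (FlashAttention / memory-efficient when available).
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)

q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 16, 64)
v = torch.randn(2, 8, 16, 64)
# Both paths should agree up to numerical tolerance.
print((naive_attention(q, k, v) - fused_attention(q, k, v)).abs().max())
```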