Releases: lucaslie/torchprune
v2.2.0: Sparse Flows (NeurIPS 2021) release and updates
The new release contains code for a new paper, including comparison methods, models, and datasets.
In addition to the previous papers that were covered by this codebase (ALDS, PFP, SiPP, Lost), we also extended the repository to include our latest paper on pruning neural ODEs, which was presented at NeurIPS 2021:
Sparse Flows: Pruning Continuous-depth Models
Check out the READMEs for more info.
v2.1.0: ALDS (NeurIPS 2021) release and updates
The new release contains code for a new paper, including comparison methods, models, and datasets.
In addition to the previous papers that were covered by this codebase (PFP, SiPP, Lost), we also extended the repository to include our latest paper on pruning, which will be presented at NeurIPS 2021:
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
Check out the READMEs for more info.
Detailed release update:
- `ALDS` algorithm in `torchprune`.
- Various tensor decomposition methods as comparisons for ALDS (a rough sketch of the underlying idea follows below).
- More network and dataset support, including the GLUE benchmark and huggingface transformers.
- Experiment code, visualization, and paper reproducibility for `ALDS`.
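For intuition only: the layer-wise decomposition that ALDS builds on replaces a layer's weight matrix with a low-rank factorization, for example via a truncated SVD. The snippet below is a minimal sketch in plain PyTorch under that assumption; it does not use the `torchprune` API, and `decompose_linear` and the chosen rank are purely illustrative (ALDS itself selects the per-layer ranks to meet a global compression budget).

```python
# Minimal, hypothetical sketch of low-rank layer-wise decomposition (not the torchprune API).
import torch
import torch.nn as nn


def decompose_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate a Linear layer by two smaller layers via a truncated SVD of its weight."""
    W = layer.weight.data  # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)

    # Keep only the top-`rank` singular values/vectors.
    U_r = U[:, :rank] * S[:rank]  # (out_features, rank), columns scaled by singular values
    Vh_r = Vh[:rank, :]           # (rank, in_features)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = Vh_r.clone()
    second.weight.data = U_r.clone()
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)


# Example: 512*256 = 131,072 weights become 512*32 + 32*256 = 24,576.
layer = nn.Linear(512, 256)
compressed = decompose_linear(layer, rank=32)
x = torch.randn(8, 512)
rel_err = (layer(x) - compressed(x)).norm() / layer(x).norm()
print(f"relative approximation error: {rel_err:.3f}")
```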
v2.0.0: Major updates and new papers
The new release contains major overhauls and improvements to the code base.
In addition to the previous two papers that were covered by this code base (PFP and SiPP), we also extended the code base to include our latest paper on pruning presented at MLSys 2021:
Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Check out the READMEs for more info.
Minor improvements and updates
Bug fixes, visualization updates, better logging, improved readability, and a simplified compression sub-module.
Improvements to distributed training
Fixed a bug in distributed training with more than one GPU that caused training to stall at the end of the last epoch.
Initial code publication
This is the version of the code as originally published for the ICLR'20 paper Provable Filter Pruning for Efficient Neural Networks.