
Releases: lucaslie/torchprune

v2.2.0: Sparse Flow (NeurIPS 2021) release and updates

16 Nov 07:18

This release adds the code for a new paper, together with the accompanying comparison methods, models, and datasets.

In addition to the papers previously covered by this codebase (ALDS, PFP, SiPP, Lost), the repository now also includes our latest paper on pruning neural ODEs, which was presented at NeurIPS 2021:

Sparse Flows: Pruning Continuous-depth Models

Check out the READMEs for more info.

v2.1.0: ALDS (NeurIPS 2021) release and updates

12 Oct 15:38

This release adds the code for a new paper, together with the accompanying comparison methods, models, and datasets.

In addition to the papers previously covered by this codebase (PFP, SiPP, Lost), the repository now also includes our latest paper on pruning, which will be presented at NeurIPS 2021:

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

Check out the READMEs for more info.

Detailed release update:

  • ALDS algorithm in torchprune (a usage sketch follows this list).
  • Various tensor decomposition methods as comparison baselines for ALDS.
  • More network and dataset support, including the GLUE benchmark and Hugging Face transformers.
  • Experiment code, visualization, and paper reproducibility for ALDS.
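As a rough illustration of how the new ALDS compression might be invoked through torchprune, here is a minimal sketch. The class names (tp.ALDSNet, tp.util.net.NetHandle), the loss/data-loader arguments, and the compress(keep_ratio=...) call are assumptions modeled on the package's generic pruning interface and may not match the actual API exactly; the READMEs remain the authoritative reference.

```python
# Hypothetical sketch, not the official example: the wrapper class, the
# ALDSNet constructor arguments, and compress(keep_ratio=...) are assumed
# here; consult the repository READMEs for the real API.
import torch
import torchvision
import torchprune as tp

# Start from a standard pretrained torchvision model.
net = torchvision.models.resnet18(pretrained=True)
net = tp.util.net.NetHandle(net, "resnet18")  # assumed wrapper class

# Tiny stand-in data loader used to estimate layer sensitivities.
dummy_data = [(torch.randn(3, 224, 224), torch.tensor(0)) for _ in range(8)]
loader_s = torch.utils.data.DataLoader(dummy_data, batch_size=4)

# Loss used during compression/retraining (assumed to be a plain criterion).
loss_handle = torch.nn.CrossEntropyLoss()

# ALDS low-rank compression to roughly 50% of the original parameters.
net_alds = tp.ALDSNet(net, loader_s, loss_handle)
net_alds.compress(keep_ratio=0.5)
```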

v2.0.0: Major updates and new papers

09 Apr 18:27

The new release contains major overhauls and improvements to the code base.

In addition to the two papers previously covered by this codebase (PFP and SiPP), the codebase now also includes our latest paper on pruning, presented at MLSys 2021:

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy

Check out the READMEs for more info.

Minor improvements and updates

20 Oct 13:38

Bug fixes, visualization updates, better logging, improved readability, and a simplified compression sub-module.

Improvements to distributed training

08 May 20:17

This release fixes a bug in distributed training with more than one GPU that caused training to stall at the end of the last epoch.

Initial code publication

08 May 15:44
283155a

This is the version of the code as originally published for the ICLR'20 paper Provable Filter Pruning for Efficient Neural Networks.