
AI-Model-Compression

AI model compression using weight pruning and 8-bit quantization

In this module the input model is pruned and retrained. Several target sparsity levels are defined for the input model, and for each level the corresponding pruning operations are applied to the model. After pruning, the model is retrained so that the remaining weights adapt to the removed connections. The model is saved at each sparsity level, and the best one is chosen based on the sparsity-vs-accuracy trade-off. The selected model is saved as the pruned model and handed to the next module for quantization, as sketched below.
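A minimal sketch of this pruning loop using the TensorFlow Model Optimization toolkit, assuming the input is a Keras model. The file names, sparsity levels, epoch count, and the training/evaluation arrays (`x_train`, `y_train`, `x_test`, `y_test`) are illustrative placeholders, not taken from this repository:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# x_train/y_train and x_test/y_test are assumed to be available here.
best_model, best_acc = None, 0.0

for sparsity in (0.5, 0.7, 0.9):  # candidate sparsity levels (illustrative)
    # Reload the original model so each run starts from the same weights.
    model = tf.keras.models.load_model("input_model.h5")

    # Wrap the layers with low-magnitude pruning at a constant target sparsity.
    pruned = tfmot.sparsity.keras.prune_low_magnitude(
        model,
        pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
            target_sparsity=sparsity, begin_step=0
        ),
    )
    pruned.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

    # Retrain so the surviving weights adapt to the pruned connections.
    pruned.fit(x_train, y_train, epochs=2,
               callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

    _, acc = pruned.evaluate(x_test, y_test)
    pruned.save(f"pruned_{int(sparsity * 100)}.h5")  # one model per sparsity level

    # Keep the model with the best accuracy for the chosen trade-off.
    if acc > best_acc:
        best_model, best_acc = pruned, acc

# Strip the pruning wrappers and save the winner for the quantization module.
final = tfmot.sparsity.keras.strip_pruning(best_model)
final.save("pruned_model.h5")
```

The repository does not specify a pruning schedule; `ConstantSparsity` is one simple choice, and `PolynomialDecay` would be a common alternative for ramping sparsity up gradually during retraining.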

In this module the pruned model is loaded and converted to a TFLite file. 8-bit quantization is applied during the conversion, and the result is delivered as the compressed model.
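A corresponding sketch of the conversion step, using TensorFlow Lite's default optimization, which quantizes weights to 8 bits. The file names are again assumed placeholders:

```python
import tensorflow as tf

# Load the pruned model produced by the previous module (assumed file name).
model = tf.keras.models.load_model("pruned_model.h5")

# Convert to TFLite with 8-bit weight quantization enabled.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write out the compressed model.
with open("compressed_model.tflite", "wb") as f:
    f.write(tflite_model)
```

`tf.lite.Optimize.DEFAULT` applies dynamic-range quantization of the weights; full integer quantization of activations as well would additionally require a representative dataset.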
