This Python package is the result of my bachelor's thesis (https://github.com/SirBubbls/pruning-ba). It contains pruning operation implementations for artificial neural network models.
- @SirBubbls
pip install condense
python -m pip install git+https://github.com/SirBubbls/condense.git
Please refer to the official documentation (built with docsify) for a detailed guide.
There is also an API documentation available.
import condense
import keras
# Load your model
model = keras.models.load_model('...')
# Apply the PruningWrapper automatically to all possible layers of the model
pruned = condense.keras.wrap_model(
    model,
    # constant 50% sparsity target
    condense.optimizer.sparsity_functions.Constant(0.5))
# You need to recompile your model after pruning it
pruned.compile('adam', 'mse')
# Either train your model from scratch or one-shot prune it.
# For both approaches you need to call the fit() operation.
# fit() triggers the PruningCallback, which calls the pruning operation of each wrapped layer.
pruned.fit(data_generator,
           epochs=30,  # use 1 for a 'kind of one-shot' approach
           steps_per_epoch=1,
           callbacks=[condense.keras.PruningCallback()])  # important
# weights are now pruned
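After fit() returns you can check how much sparsity was actually reached by counting zero entries in a kernel. A minimal sketch with plain NumPy (the weight matrix here is a synthetic stand-in for a pruned kernel, not output of the condense API):

```python
import numpy as np

def sparsity(weights):
    """Fraction of exactly-zero entries in a weight array."""
    return float(np.count_nonzero(weights == 0)) / weights.size

# Stand-in for a pruned kernel: zero out the 50% smallest-magnitude entries.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
threshold = np.quantile(np.abs(w), 0.5)  # 50% sparsity target
w[np.abs(w) < threshold] = 0.0

print(f"sparsity: {sparsity(w):.2f}")  # prints a value close to 0.50
```

The same check applied to `pruned.get_weights()` tells you whether the sparsity target was met.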
import condense
import keras
# Load your model
model = keras.models.load_model('...')
# Prune the model with a 30% sparsity target
pruned = condense.one_shot(model, 0.3)
# weights are now pruned
A more sophisticated approach to pruning is to first train and prune a model M. After the first training run, the model is reset to its initial parameter configuration and the sparsity mask from step one is applied. This smaller network P ⊂ M is then trained on the same training data and should yield better results than the original network.
This is an implementation of the lottery ticket hypothesis (arXiv.org).
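The procedure above can be sketched with plain NumPy for a single weight matrix; magnitude pruning is assumed, and the helper name `magnitude_mask` is illustrative, not part of the condense API:

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Binary mask keeping the (1 - sparsity) largest-magnitude weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return np.ones_like(w)
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return (np.abs(w) > threshold).astype(w.dtype)

rng = np.random.default_rng(42)
w_init = rng.normal(size=(128, 128))  # initial parameters of M

# Step 1: train M, then prune the trained weights to obtain the mask.
w_trained = w_init + 0.1 * rng.normal(size=w_init.shape)  # stand-in for training
mask = magnitude_mask(w_trained, sparsity=0.8)

# Step 2: rewind to the initial parameters and apply the mask -> ticket P.
w_ticket = w_init * mask

# P is then retrained on the same data with the mask held fixed.
```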
import keras
import condense
# Prints out information about the training process
import logging
condense.logger.setLevel(logging.INFO)
model = ...
train_generator = ... # Train data
test_generator = ... # Test data
# the target sparsity is 80% for training
trainer = condense.keras.Trainer(model, 0.8)
trainer.train(train_generator,
              epochs=50,
              steps_per_epoch=2,
              eval_data=test_generator)
pruned_model = trainer.training_model # Training Model
masks = trainer.mask
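Conceptually, the masks are applied elementwise after each weight update so that pruned weights stay at zero during retraining. A hedged sketch of that mechanism with NumPy (the training loop and gradients are stand-ins; the layout of `trainer.mask` is not assumed here):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))
mask = (rng.random((4, 4)) > 0.8).astype(w.dtype)  # illustrative 80% sparsity mask

for _ in range(10):                  # stand-in training loop
    grad = rng.normal(size=w.shape)  # stand-in gradient
    w -= 0.01 * grad                 # SGD-style update
    w *= mask                        # re-apply mask: pruned weights stay zero

assert np.all(w[mask == 0] == 0)
```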