Neural Network Optimization with Weight Evolution

Published in ICML 2023 Workshop on Neural Compression: From Information Theory to Applications, 2023

This paper introduces a method for neural network optimization that exploits the evolution of weights throughout training, rather than relying solely on magnitude pruning, which removes low-magnitude parameters only at the end of training. The proposed approach tracks each parameter's importance from the first epoch to the last and scores it with a weighted average of its magnitudes over training, in which epochs closer to the end of training receive higher weight.
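To make the scoring concrete, here is a minimal sketch of one plausible realization in PyTorch. The class name `WeightEvolutionTracker`, the linear epoch weighting `w_t = t`, and the per-layer thresholding are illustrative assumptions, not the paper's exact formulation; what the sketch implements is the stated idea of averaging each parameter's magnitude over training with higher weight on later epochs.

```python
import torch
import torch.nn as nn


class WeightEvolutionTracker:
    """Tracks per-parameter magnitudes across epochs and scores each
    parameter by a weighted average that favors later epochs.

    NOTE: illustrative sketch only; the weighting scheme and class name
    are assumptions, not taken from the paper.
    """

    def __init__(self, model: nn.Module):
        self.model = model
        # Running weighted sum of |w| per parameter, plus total weight mass.
        self.score = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        self.mass = 0.0
        self.epoch = 0

    def update(self):
        """Call once per epoch, after that epoch's optimizer steps."""
        self.epoch += 1
        w_t = float(self.epoch)  # linear ramp: later epochs count more
        for name, p in self.model.named_parameters():
            self.score[name] += w_t * p.detach().abs()
        self.mass += w_t

    def prune_masks(self, sparsity: float):
        """Return a boolean keep-mask per parameter, dropping the
        `sparsity` fraction with the lowest evolution-weighted scores."""
        masks = {}
        for name, s in self.score.items():
            avg = s / self.mass  # weighted average of |w| over training
            k = int(sparsity * avg.numel())
            if k == 0:
                masks[name] = torch.ones_like(avg, dtype=torch.bool)
                continue
            threshold = avg.flatten().kthvalue(k).values
            masks[name] = avg > threshold
        return masks
```

A resulting mask can then be applied by zeroing the pruned entries (e.g. `p.data *= masks[name]`) before fine-tuning, as in standard magnitude-pruning pipelines.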

Experiments on popular deep neural networks (AlexNet, VGGNet, ResNet, and DenseNet) using the CIFAR-10 and Tiny ImageNet benchmarks show that the method achieves higher compression ratios with smaller accuracy loss than traditional magnitude pruning, contributing to the ongoing development of more efficient neural network models.

Recommended citation: Belhaouari, S. B., & Islam, A. (2023). "Neural Network Optimization with Weight Evolution." In ICML 2023 Workshop on Neural Compression: From Information Theory to Applications.