Entry Date:
August 6, 2018

Dense-Sparse-Dense Training (DSD)

Principal Investigator: Song Han


A critical issue in training large neural networks is preventing overfitting while still providing enough model capacity. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks to achieve higher accuracy. DSD training improves the prediction accuracy of a wide range of neural networks (CNNs, RNNs, and LSTMs) on image classification, caption generation, and speech recognition. The DSD training flow produces the same model architecture and does not incur any inference-time overhead.
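
The flow proceeds in three phases: train the network dense, prune the smallest-magnitude weights and retrain under that sparsity constraint, then restore the pruned connections and retrain the full dense model. The following is a minimal sketch of that flow in PyTorch; the sparsity level, epoch counts, learning rates, and helper names (train, magnitude_masks, dsd) are illustrative assumptions rather than the settings or code used in the reported experiments.

import torch
import torch.nn as nn

def train(model, loader, epochs, masks=None, lr=1e-3):
    # One training phase; if masks are given, pruned weights are held at zero.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            if masks is not None:
                with torch.no_grad():
                    for name, p in model.named_parameters():
                        if name in masks:
                            p.mul_(masks[name])

def magnitude_masks(model, sparsity=0.5):
    # Binary masks that zero out the smallest-magnitude weights in each layer.
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:  # prune weight matrices, leave biases dense
                threshold = p.abs().flatten().quantile(sparsity)
                masks[name] = (p.abs() > threshold).float()
    return masks

def dsd(model, loader, dense_epochs=10, sparse_epochs=10, redense_epochs=10):
    # Phase 1 (dense): train the full model as usual.
    train(model, loader, dense_epochs)
    # Phase 2 (sparse): prune small weights, then retrain under the sparsity constraint.
    masks = magnitude_masks(model, sparsity=0.5)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
    train(model, loader, sparse_epochs, masks=masks)
    # Phase 3 (re-dense): drop the masks and retrain the full dense model,
    # typically with a reduced learning rate.
    train(model, loader, redense_epochs, masks=None, lr=1e-4)

Because the pruned connections are restored in the final phase, the resulting model has exactly the same architecture and parameter count as the original, which is why no inference-time overhead is introduced.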