2020 - Frontiers of AI:ML - Michael Carbin

Conference Video | Duration: 43:15
July 14, 2020
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving the computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start and, instead, training must first begin with large, overparameterized networks.

In this talk, I’ll present our work on The Lottery Ticket Hypothesis, showing that a standard pruning technique, iterative magnitude pruning, naturally uncovers subnetworks that are capable of training effectively from early in training. These subnetworks hold the promise of more efficient machine learning methods, including inference, fine-tuning of pre-trained networks, and sparse training.
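The abstract names iterative magnitude pruning as the technique that uncovers these subnetworks. As a rough illustration only, the following is a minimal Python/PyTorch sketch of one common variant (magnitude pruning with rewinding to the initial weights); the names model, train_fn, rounds, and prune_fraction are placeholders for this sketch, not details taken from the talk.

    import copy
    import torch

    def iterative_magnitude_pruning(model, train_fn, rounds=5, prune_fraction=0.2):
        """Sketch of iterative magnitude pruning with rewinding.

        model:    a torch.nn.Module with freshly initialized weights
        train_fn: caller-supplied function that trains `model` in place
        Each round trains the network, prunes the smallest-magnitude
        surviving weights, and rewinds the rest to their initial values.
        """
        # Save the initial weights so surviving weights can be rewound each round.
        initial_state = copy.deepcopy(model.state_dict())

        # Masks start as all ones: every weight participates at first.
        # Only weight tensors (dim > 1) are pruned here, not biases.
        masks = {name: torch.ones_like(p)
                 for name, p in model.named_parameters() if p.dim() > 1}

        for _ in range(rounds):
            # In practice, train_fn should re-apply the masks after each
            # optimizer step so that already-pruned weights stay at zero.
            train_fn(model)

            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                mask = masks[name]
                # Consider only weights that are still unpruned.
                surviving = p.data[mask.bool()].abs()
                if surviving.numel() == 0:
                    continue
                # Prune the smallest prune_fraction of surviving magnitudes.
                threshold = torch.quantile(surviving, prune_fraction)
                mask[(p.data.abs() < threshold) & mask.bool()] = 0.0

            # Rewind: reset weights to initialization, then re-apply the masks.
            model.load_state_dict(initial_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.data.mul_(masks[name])

        return model, masks

The rewinding step is what distinguishes this from one-shot pruning of a trained network: the surviving connections keep their original initialization, so the pruned subnetwork can be trained from (near) the start rather than only fine-tuned.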
