Tutorial on Hardware Accelerators for Deep Neural Networks


Email: eyeriss at mit dot edu


Welcome to the DNN tutorial website!

  • A summary of all DNN related papers from our group can be found here.
  • DNN related websites and resources can be found here.
  • To find out more about the Eyeriss project, please go here.
  • To find out more about other on-going research in the Energy-Efficient Multimedia Systems (EEMS) group at MIT, please go here.
  • Subscribe to our mailing list for updates on the Tutorial (e.g., notification of when slides will be posted or updated).

Recent News

  • 11/11/2019

    We will be giving a two day short course on Designing Efficient Deep Learning Systems at MIT in Cambridge, MA on July 20-21, 2020. To find out more, please visit MIT Professional Education.

  • 9/22/2019

    Slides for ICIP tutorial on Efficient Image Processing with Deep Neural Networks available here.

  • 9/20/2019

    Code released for NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications here.

  • All News

Overview


Deep neural networks (DNNs) are currently widely used for many AI applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling their wide deployment in AI systems.

This tutorial provides a brief recap of the basics of deep neural networks and is intended for those interested in understanding how those models are mapped to hardware architectures. We will provide frameworks for understanding the design space of deep neural network accelerators, including managing data movement, handling sparsity, and the importance of flexibility. This is an intermediate-level tutorial that goes beyond the material in previous incarnations of this tutorial.
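As a concrete illustration of the kind of kernel computation that gets mapped to these accelerators, the core of a convolutional layer is a deeply nested multiply-accumulate (MAC) loop. The sketch below is a naive reference implementation only; it is not the Eyeriss dataflow, and the loop order shown is just one of many possible choices:

```python
import numpy as np

def conv_layer(ifmap, weights):
    """Naive convolutional-layer loop nest (illustrative reference only).

    ifmap:   (C, H, W)     input feature map
    weights: (M, C, R, S)  M filters over C input channels
    returns: (M, H-R+1, W-S+1) output feature map (no padding, stride 1)
    """
    C, H, W = ifmap.shape
    M, _, R, S = weights.shape
    E, F = H - R + 1, W - S + 1
    ofmap = np.zeros((M, E, F))
    for m in range(M):                 # output channels
        for e in range(E):             # output rows
            for f in range(F):         # output cols
                for c in range(C):             # input channels
                    for r in range(R):         # filter rows
                        for s in range(S):     # filter cols
                            ofmap[m, e, f] += (ifmap[c, e + r, f + s]
                                               * weights[m, c, r, s])
    return ofmap
```

An accelerator dataflow is, at its core, a choice of how to order, tile, and parallelize these loops across processing elements and memory levels so as to maximize data reuse and minimize data movement.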

Register for the tutorial here.

An overview paper based on the tutorial "Efficient Processing of Deep Neural Networks: A Tutorial and Survey" is available here.


Participant Takeaways

  • Understand the key design considerations for DNNs
  • Be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics
  • Understand the tradeoffs between various architectures and platforms
  • Assess the utility of various optimization approaches
  • Understand recent implementation trends and opportunities
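To make the benchmarking takeaway concrete, two per-layer metrics commonly used when comparing DNN workloads and hardware are the number of weights (storage) and the number of MACs (computation). The helper below is a generic sketch of these standard formulas, not a metric tied to any specific accelerator:

```python
def conv_layer_stats(C, M, R, S, E, F):
    """Per-layer cost metrics for a convolutional layer.

    C: input channels, M: output channels (filters),
    R x S: filter size, E x F: output feature map size.
    """
    weights = M * C * R * S   # parameters to store
    macs = weights * E * F    # each weight is reused at every output position
    return weights, macs

# Example shape: C=3, M=96, 11x11 filters, 55x55 output
w, m = conv_layer_stats(3, 96, 11, 11, 55, 55)
print(w, m)  # 34848 weights, 105415200 MACs
```

Counts like these are a starting point; as the tutorial emphasizes, they do not by themselves capture data movement, which often dominates energy consumption.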


Slides from ISCA Tutorial (June 22, 2019)

  • Overview of Deep Neural Networks [ slides ]
  • Popular DNNs and Datasets [ slides ]
  • Benchmarking Metrics [ slides ]
  • DNN Kernel Computation [ slides ]
  • DNN Accelerators (part 1) [ slides ]
  • DNN Accelerators (part 2) [ slides ]
  • DNN Model and Hardware Co-Design (precision) [ slides ]
  • DNN Processing Near/In Memory [ slides ]
  • DNN Model and Hardware Co-Design (sparsity) [ slides ]
  • Sparse DNN Accelerators [ slides ]
  • Tutorial Summary [ slides ]

Slides from Previous Versions of Tutorial

ISCA 2017, CICS/MTL 2017, MICRO 2016


Video


BibTeX


@article{2017_dnn_piee,
  title={Efficient processing of deep neural networks: A tutorial and survey},
  author={Sze, Vivienne and Chen, Yu-Hsin and Yang, Tien-Ju and Emer, Joel},
  journal={Proceedings of the IEEE},
  volume={105},
  number={12},
  pages={2295--2329},
  year={2017}
}

Related Papers

  • Y.-H. Chen, T.-J Yang, J. Emer, V. Sze, "Eyeriss v2: A Flexible Accelerator for Emerging Deep Neural Networks on Mobile Devices," to appear in IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), June 2019. [ paper PDF | earlier version arXiv ]
  • T.-J. Yang, A. Howard, B. Chen, X. Zhang, A. Go, V. Sze, H. Adam, "NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications," European Conference on Computer Vision (ECCV), September 2018. [ paper arXiv ]
  • Y.-H. Chen*, T.-J. Yang*, J. Emer, V. Sze, "Understanding the Limitations of Existing Energy-Efficient Design Approaches for Deep Neural Networks," SysML Conference, February 2018. [ paper PDF | talk video ] Selected for Oral Presentation
  • V. Sze, T.-J. Yang, Y.-H. Chen, J. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," Proceedings of the IEEE, vol. 105, no. 12, pp. 2295-2329, December 2017. [ paper PDF ]
  • T.-J. Yang, Y.-H. Chen, J. Emer, V. Sze, "A Method to Estimate the Energy Consumption of Deep Neural Networks," Asilomar Conference on Signals, Systems and Computers, Invited Paper, October 2017. [ paper PDF | slides PDF ]
  • T.-J. Yang, Y.-H. Chen, V. Sze, "Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. [ paper arXiv | poster PDF | DNN energy estimation tool LINK | DNN models LINK ] Highlighted in MIT News
  • Y.-H. Chen, J. Emer, V. Sze, "Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators," IEEE Micro's Top Picks from the Computer Architecture Conferences, May/June 2017. [ PDF ]
  • A. Suleiman*, Y.-H. Chen*, J. Emer, V. Sze, "Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision," IEEE International Symposium of Circuits and Systems (ISCAS), Invited Paper, May 2017. [ paper PDF | slides PDF | talk video ]
  • V. Sze, Y.-H. Chen, J. Emer, A. Suleiman, Z. Zhang, "Hardware for Machine Learning: Challenges and Opportunities," IEEE Custom Integrated Circuits Conference (CICC), Invited Paper, May 2017. [ paper arXiv | slides PDF ] Received Outstanding Invited Paper Award
  • Y.-H. Chen, T. Krishna, J. Emer, V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid State Circuits (JSSC), ISSCC Special Issue, Vol. 52, No. 1, pp. 127-138, January 2017. [ PDF ]
  • Y.-H. Chen, J. Emer, V. Sze, "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks," International Symposium on Computer Architecture (ISCA), pp. 367-379, June 2016. [ paper PDF | slides PDF ] Selected for IEEE Micro’s Top Picks special issue on "most significant papers in computer architecture based on novelty and long-term impact" from 2016
  • Y.-H. Chen, T. Krishna, J. Emer, V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE International Conference on Solid-State Circuits (ISSCC), pp. 262-264, February 2016. [ paper PDF | slides PDF | poster PDF | demo video | project website ] Highlighted in EETimes and MIT News.
* Indicates authors contributed equally to the work


Related Websites and Resources

  • Eyeriss Project Website [ LINK ]
  • DNN Energy Estimation Website [ LINK ]
  • DNN Processor Benchmarking Website [ LINK ]