Tutorial on Hardware Architectures for Deep Neural Networks
ISCA 2017 (Full Day: June 24, 2017) and MICRO-50 (Full Day: October 15, 2017)


Email: eyeriss at mit dot edu


Welcome to the DNN tutorial website!

  • A summary of all DNN-related papers from our group can be found here. Other related websites and resources can be found here.
  • To find out more about the Eyeriss project, please go here.
  • To find out more about other ongoing research in the Energy-Efficient Multimedia Systems (EEMS) group at MIT, please go here.

Updates

Subscribe to our mailing list for updates on the tutorial (e.g., notification of when slides are posted or updated).

  • 05/22/2017

    We will be giving an updated version of our tutorial at MICRO-50.

  • 03/27/2017

    Updated slides posted here from the CICS/MTL tutorial.

  • 03/27/2017

    New paper on "Efficient Processing of Deep Neural Networks: A Tutorial and Survey" available on arXiv. [ LINK ]

  • 03/25/2017

    DNN Energy Estimation Website available online. [ LINK ]

  • 01/21/2017

    We will be giving an updated version of our tutorial at ISCA 2017.

  • 01/17/2017

    New paper on “Hardware for Machine Learning: Challenges and Opportunities” will be presented at IEEE CICC 2017. Available on arXiv. [ LINK ]

  • 11/15/2016

    New paper on “Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning” will be presented at CVPR 2017. Available on arXiv. [ LINK ]

  • 11/12/2016

    DNN Tutorial Slides now available online.

  • 10/16/2016

    Full-day tutorial held at MICRO-49.


Overview


    Deep neural networks (DNNs) are currently widely used for many AI applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this accuracy comes at the cost of high computational complexity. Accordingly, designing efficient hardware architectures for deep neural networks is an important step towards enabling the wide deployment of DNNs in AI systems.

    In this tutorial, we will provide an overview of DNNs, discuss the tradeoffs of the various architectures that support DNNs (including CPUs, GPUs, FPGAs, and ASICs), and highlight important benchmarking/comparison metrics and design considerations. We will then describe recent techniques that reduce the computation cost of DNNs from both the hardware-architecture and network-algorithm perspectives. Finally, we will discuss the different hardware requirements for inference and training.
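    As a rough illustration of the computational cost mentioned above, the minimal sketch below counts the multiply-accumulate (MAC) operations in a single convolutional layer. The layer dimensions are an assumption chosen to resemble the first layer of AlexNet and are not taken from the tutorial material.

    # Back-of-the-envelope MAC count for one convolutional layer.
    # The dimensions below are illustrative and roughly correspond to an
    # AlexNet-like first layer (assumption, not from the tutorial slides).

    def conv_layer_macs(out_h, out_w, out_channels, filt_h, filt_w, in_channels):
        """Number of multiply-accumulate (MAC) operations for one CONV layer."""
        return out_h * out_w * out_channels * filt_h * filt_w * in_channels

    # AlexNet-like CONV1: 96 filters of 11x11x3 applied with stride 4 to a
    # 227x227x3 input, producing a 55x55x96 output feature map.
    macs = conv_layer_macs(out_h=55, out_w=55, out_channels=96,
                           filt_h=11, filt_w=11, in_channels=3)
    print(f"{macs:,} MACs")  # 105,415,200 -> ~105 million MACs for a single layer

    Summing such counts over all layers gives a first-order proxy for compute cost; raw operation counts are only one of the benchmarking/comparison metrics highlighted in the tutorial.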

    Register for the ISCA 2017 tutorial (June 24, 2017) here.
    Register for the MICRO-50 tutorial (October 15, 2017) here.


    Slides from CICS/MTL Tutorial (March 27, 2017)

    • Background of Deep Neural Networks [ slides ]
    • Survey of DNN Development Resources [ slides ]
    • Survey of DNN Hardware [ slides ]
    • DNN Accelerator Architectures [ slides ]
    • Network and Hardware Co-Design [ slides ]

    Entire Tutorial [ slides ]


    Slides from MICRO-49 Tutorial (Oct 16, 2016)

    • Background of Deep Neural Networks [ slides ]
    • Survey of DNN Development Resources [ slides ]
    • Survey of DNN Hardware [ slides ]
    • DNN Accelerator Architectures [ slides ]
    • Advanced Technology Opportunities [ slides ]
    • Network and Hardware Co-Design [ slides ]
    • Benchmarking Metrics [ slides ]
    • Hardware Requirements for Training [ slides ]
    • References [ slides ]

    Entire Tutorial [ slides ]


    BibTeX

    
    @misc{micro_2016_dnn_tutorial,
        author      = {Emer, Joel and Sze, Vivienne and Chen, Yu-Hsin},
        title       = {{Tutorial on Hardware Architectures for Deep Neural Networks}},
        publisher   = {{IEEE/ACM International Symposium on Microarchitecture (MICRO-49)}},
        year        = {{2016}},
        howpublished= {\url{http://eyeriss.mit.edu/tutorial.html}},
    }
                    

    Participant Takeaways

    • Understand the key design considerations for DNNs
    • Be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics
    • Understand the tradeoffs between various architectures and platforms
    • Assess the utility of various optimization approaches
    • Understand recent implementation trends and opportunities