Eyeriss: An Energy-Efficient Reconfigurable Accelerator
for Deep Convolutional Neural Networks
IEEE ISSCC 2016


Email: eyeriss at mit dot edu


Welcome to the Eyeriss Project website!

  • A summary of all related papers can be found here. Other related websites and resources can be found here.
  • Subscribe to our mailing list for updates on the Eyeriss Project.
  • To find out more about other on-going research in the Energy-Efficient Multimedia Systems (EEMS) group at MIT, please go here.

News

  • 06/25/2017

    Updated slides posted here from ISCA 2017.

  • 03/27/2017

    Updated slides posted here from the CICS/MTL tutorial.

  • 03/27/2017

    New paper on "Efficient Processing of Deep Neural Networks: A Tutorial and Survey" available on arXiv. [ LINK ]

  • 03/25/2017

    DNN Energy Estimation Website available online. [ LINK ]

  • 01/21/2017

    We will be giving an updated version of our tutorial on Hardware Architectures for Deep Neural Networks at ISCA 2017.

  • 01/17/2017

    New paper on “Hardware for Machine Learning: Challenges and Opportunities” will be presented at IEEE CICC 2017. Available on arXiv. [ LINK ]

  • 11/15/2016

    New paper on “Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning” will be presented at CVPR 2017. Available on arXiv. [ LINK ]

  • 11/12/2016

    DNN Tutorial Slides now available online. [ LINK ]

  • 11/08/2016

    Our paper on “Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks” has been accepted for publication in the Journal of Solid-State Circuits (JSSC) Special Issue on the 2016 International Solid-State Circuits Conference (ISSCC). [ PDF ]

  • 11/07/2016

    DNN Processor Benchmarking Website available online. [ LINK ]

  • 07/10/2016

    We will be giving a tutorial on Hardware Architectures for Deep Neural Networks at MICRO-49. More info here.

  • 06/21/2016

    Slides from Eyeriss dataflow talk at ACM/IEEE ISCA 2016. [ PDF ]

  • 05/02/2016

    Yu-Hsin to present the work on "Building Energy-Efficient Accelerators for Deep Learning" at Deep Learning Summit Boston 2016.

  • 04/04/2016

    Yu-Hsin presents poster on Eyeriss at GTC 2016.

  • 03/08/2016

    Yu-Hsin to present paper on Eyeriss dataflow design at ACM/IEEE ISCA 2016. [ PDF ]


Abstract

Eyeriss is an energy-efficient deep convolutional neural network (CNN) accelerator that supports state-of-the-art CNNs, which have many layers, millions of filter weights, and varying shapes (filter sizes, number of filters and channels). The test chip features a spatial array of 168 processing elements (PEs) fed by a reconfigurable multicast on-chip network that handles many shapes and minimizes data movement by exploiting data reuse. Data gating and compression are used to reduce energy consumption. The chip has been fully integrated with the Caffe deep learning framework. The video below demonstrates a real-time 1000-class image classification task using pre-trained AlexNet running on our Eyeriss Caffe system. The chip can run the convolutions in AlexNet at 35 fps with 278 mW power consumption, which is 10 times more energy-efficient than mobile GPUs.
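As a rough illustration of the compression idea mentioned above, the sketch below run-length encodes runs of zeros in an activation stream, which is how sparse activations can be stored compactly off-chip. The function names, the pair-based format, and the run-length cap are illustrative assumptions and do not reproduce the chip's actual RLC encoding.

```python
def rlc_encode(values, max_run=31):
    """Run-length encode runs of zeros in a 1-D activation stream.

    Hypothetical simplified format (NOT the chip's actual RLC scheme):
    a list of (zero_run, nonzero_value) pairs, each run capped at
    `max_run` so it fits a fixed-width field, plus a count of zeros
    trailing at the end of the stream.
    """
    pairs, run = [], 0
    for v in values:
        if v == 0 and run < max_run:
            run += 1                 # extend the current run of zeros
        else:
            pairs.append((run, v))   # flush: `run` zeros followed by v
            run = 0
    return pairs, run                # leftover zeros have no following value


def rlc_decode(pairs, trailing_zeros):
    """Invert rlc_encode: expand (run, value) pairs back into the stream."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        out.append(v)
    out.extend([0] * trailing_zeros)
    return out
```

At the reported operating point, the energy per frame for the AlexNet convolutional layers works out to roughly 278 mW / 35 fps ≈ 7.9 mJ.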


Eyeriss Architecture


Chip Die Photo



Video


Press Coverage



BibTeX


@inproceedings{isscc_2016_chen_eyeriss,
    author      = {Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne},
    title       = {{Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks}},
    booktitle   = {{IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers}},
    year        = {2016},
    pages       = {262--263},
}
                

Related Papers

  • V. Sze, T.-J. Yang, Y.-H. Chen, J. Emer, "Efficient Processing of Deep Neural Networks: A Tutorial and Survey," arXiv, 2017. [ preprint arXiv ]
  • T.-J. Yang, Y.-H. Chen, V. Sze, "Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. [ paper arXiv | poster PDF | DNN energy estimation tool LINK | DNN models LINK ] Highlighted in MIT News
  • Y.-H. Chen, J. Emer, V. Sze, "Using Dataflow to Optimize Energy Efficiency of Deep Neural Network Accelerators," IEEE Micro's Top Picks from the Computer Architecture Conferences, May/June 2017. [ PDF ]
  • A. Suleiman*, Y.-H. Chen*, J. Emer, V. Sze, "Towards Closing the Energy Gap Between HOG and CNN Features for Embedded Vision," IEEE International Symposium of Circuits and Systems (ISCAS), Invited Paper, May 2017. [ paper PDF | slides PDF | talk video ]
  • V. Sze, Y.-H. Chen, J. Emer, A. Suleiman, Z. Zhang, "Hardware for Machine Learning: Challenges and Opportunities," IEEE Custom Integrated Circuits Conference (CICC), Invited Paper, May 2017. [ paper arXiv | slides PDF ]
  • Y.-H. Chen, T. Krishna, J. Emer, V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE Journal of Solid-State Circuits (JSSC), ISSCC Special Issue, Vol. 52, No. 1, pp. 127-138, January 2017. [ PDF ]
  • Y.-H. Chen, J. Emer, V. Sze, "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks," International Symposium on Computer Architecture (ISCA), pp. 367-379, June 2016. [ paper PDF | slides PDF ] Selected for IEEE Micro’s Top Picks special issue on "most significant papers in computer architecture based on novelty and long-term impact" from 2016
  • Y.-H. Chen, T. Krishna, J. Emer, V. Sze, "Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks," IEEE International Solid-State Circuits Conference (ISSCC), pp. 262-263, February 2016. [ paper PDF | slides PDF | poster PDF | demo video | project website ] Highlighted in EETimes and MIT News.
* Indicates authors who contributed equally to the work.


Related Websites and Resources

  • DNN Tutorial Slides [ LINK ]
  • DNN Processor Benchmarking Website [ LINK ]
  • DNN Energy Estimation Website [ LINK ]


Acknowledgement

This work is funded by the DARPA YFA grant N66001-14-1-4039, MIT Center for Integrated Circuits & Systems, and gifts from Intel and Nvidia.