BitBlade: Area and Energy-Efficient Precision-Scalable Neural Network Accelerator with Bitwise Summation.

Bibliographic Details
Title: BitBlade: Area and Energy-Efficient Precision-Scalable Neural Network Accelerator with Bitwise Summation.
Authors: Sungju Ryu, Hyungjun Kim, Wooseok Yi, Jae-Joon Kim
Source: DAC: Annual ACM/IEEE Design Automation Conference; 2019, Issue 56, p277-282, 6p
Subject Terms: ARTIFICIAL neural networks, MACHINE learning, TRANSLATING machines, ADAPTIVE computing systems, ENERGY consumption
Abstract: Deep Neural Networks (DNNs) have varying performance requirements and power constraints depending on the application. To maximize the energy efficiency of hardware accelerators across applications, the accelerators need to support various bit-width configurations. In bit-reconfigurable accelerator designs, each PE must contain variable shift-addition logic, which consumes a large amount of area and power. This paper introduces an area- and energy-efficient precision-scalable neural network accelerator (BitBlade), which reduces the control overhead of variable shift-addition using a bitwise summation method. Synthesized in a 28nm CMOS technology, the proposed BitBlade showed a 41% reduction in area and a 36-46% reduction in energy compared to the state-of-the-art precision-scalable architecture [14]. [ABSTRACT FROM AUTHOR]
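The core idea behind bitwise summation can be sketched in software. In a precision-scalable MAC array, wide operands are decomposed into 2-bit slices; a conventional design shift-adds every slice product inside each PE, whereas bitwise summation first accumulates slice products that share the same shift amount across the whole dot product and applies the shift once per group. The sketch below is illustrative Python under that assumption, not the paper's actual RTL; all names are hypothetical.

```python
# Illustrative sketch of bitwise summation (hypothetical names, not the
# paper's implementation). Unsigned 8-bit operands are split into 2-bit
# slices; slice products with the same shift amount are summed first,
# so the variable shift is applied once per group instead of once per
# slice product.

def split2(x, slices=4):
    """Split an unsigned integer into 2-bit slices, LSB first."""
    return [(x >> (2 * i)) & 0b11 for i in range(slices)]

def dot_bitwise_summation(a_vec, b_vec, slices=4):
    """Dot product using per-shift-group summation of slice products."""
    # group_sums[s] accumulates every slice product whose shift is 2*s
    group_sums = [0] * (2 * slices - 1)
    for a, b in zip(a_vec, b_vec):
        for i, ai in enumerate(split2(a, slices)):
            for j, bj in enumerate(split2(b, slices)):
                group_sums[i + j] += ai * bj   # no shift applied here
    # one shift-add per group, not one per slice product
    return sum(g << (2 * s) for s, g in enumerate(group_sums))
```

By distributivity the grouped result equals the plain dot product, e.g. `dot_bitwise_summation([23, 200, 7], [91, 14, 255])` matches `23*91 + 200*14 + 7*255`; in hardware this grouping is what lets the variable shifter be shared rather than replicated per PE.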
Copyright of DAC: Annual ACM/IEEE Design Automation Conference is the property of Association for Computing Machinery and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
Description
ISSN: 0738-100X
DOI: 10.1145/3316781.3317784