Toshiba Memory Corporation Develops High-Speed and High-Energy-Efficiency Algorithm and Hardware Architecture for Deep Learning Processor


https://www.businesswire.com/news/home/20181106005345/en/

TOKYO--(BUSINESS WIRE)--Nov 6, 2018--Toshiba Memory Corporation, the world leader in memory solutions, today announced the development of a high-speed, high-energy-efficiency algorithm and hardware architecture for deep learning processing with less degradation of recognition accuracy. The new deep learning processor, implemented on an FPGA [1], achieves four times the energy efficiency of conventional processors. The advance was announced at the IEEE Asian Solid-State Circuits Conference 2018 (A-SSCC 2018) in Taiwan on November 6.

Deep learning calculations generally require large numbers of multiply-accumulate (MAC) operations, which results in long calculation times and high energy consumption. Techniques that reduce the number of bits used to represent parameters (bit precision) have been proposed to cut the total amount of calculation; one such algorithm reduces the bit precision down to one or two bits, but such aggressive reduction degrades recognition accuracy. Toshiba Memory developed a new algorithm that reduces MAC operations by optimizing the bit precision of the MAC operations for individual filters [2] in each layer of a neural network. With this algorithm, the number of MAC operations can be reduced with less degradation of recognition accuracy.
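The press release does not give implementation details, but the general idea of assigning each filter its own bit precision can be sketched as follows. This is a minimal illustration, not Toshiba Memory's algorithm: the uniform quantizer, the relative-error metric, the tolerance value, and the 2-to-8-bit search range are all assumptions made for the example.

```python
import math
import random

def quantize(weights, bits):
    # Uniform symmetric quantization of a filter's weights onto a
    # signed `bits`-bit grid (illustrative; real schemes vary).
    max_abs = max(abs(w) for w in weights)
    if max_abs == 0:
        return list(weights)
    levels = 2 ** (bits - 1) - 1      # e.g. 127 levels for 8-bit signed
    scale = max_abs / levels
    return [round(w / scale) * scale for w in weights]

def rel_error(original, quantized):
    # Relative L2 error between the original and quantized weights.
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(original, quantized)))
    den = math.sqrt(sum(a * a for a in original)) or 1.0
    return num / den

def pick_bits(filt, tolerance=0.02, candidates=range(2, 9)):
    # Choose the smallest bit width whose quantization error stays
    # under the tolerance; fall back to the widest candidate.
    for bits in candidates:
        if rel_error(filt, quantize(filt, bits)) < tolerance:
            return bits
    return max(candidates)

# Toy layer: 4 filters of 3x3 weights, flattened.
random.seed(0)
layer = [[random.gauss(0, 1) for _ in range(9)] for _ in range(4)]
print([pick_bits(f) for f in layer])
```

Filters whose weight distributions tolerate coarse quantization get narrow MACs, while only the sensitive filters keep wide ones, which is how per-filter precision can cut total computation with little accuracy loss.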
