Paper
1 December 2021
A low-cost implementation method on deep neural network using stochastic computing
Ya Dong, Xingzhong Xiong, Tianyu Li, Lin Zhang, Jienan Chen
Proceedings Volume 12079, Second IYSF Academic Symposium on Artificial Intelligence and Computer Engineering; 120792B (2021) https://doi.org/10.1117/12.2622719
Event: 2nd IYSF Academic Symposium on Artificial Intelligence and Computer Engineering, 2021, Xi'an, China
Abstract
The multiplication operations at the computing core of a deep neural network (DNN) consume substantial hardware resources, which hinders the deployment of DNNs on hardware platforms with limited resources. To address this problem, this paper proposes a method for accelerating the multiplication operations of DNNs on resource-constrained hardware platforms. The method supports sparse computation to reduce computation latency. In addition, inspired by synaptic plasticity and stochastic computing (SC), an acceleration method that performs inference tasks using simple logic gates is proposed. Experimental results show that the hardware resource consumption of the proposed acceleration method is 1.4 times lower than that of the conventional DNN implementation.
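The core idea behind SC that the abstract alludes to — replacing a hardware multiplier with a simple logic gate — can be sketched as follows. This is an illustrative example of standard unipolar stochastic multiplication, not the paper's specific design; the encoding, bitstream length, and function names are assumptions for the sketch:

```python
import random

def to_stream(p, length, rng):
    """Encode a value p in [0, 1] as a unipolar stochastic bitstream:
    each bit is 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    """Multiply two unipolar streams with a bitwise AND gate —
    the probability of (1 AND 1) is the product of the encoded values."""
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    """Recover the encoded value as the fraction of ones."""
    return sum(stream) / len(stream)

rng = random.Random(0)
N = 10_000                      # longer streams trade latency for accuracy
sx = to_stream(0.5, N, rng)
sy = to_stream(0.6, N, rng)
prod = decode(sc_multiply(sx, sy))  # approximates 0.5 * 0.6 = 0.3
```

In hardware, each multiplication reduces to a single AND gate per bit, which is why SC-based DNN implementations can be far cheaper than fixed-point multipliers, at the cost of longer bitstreams (higher latency) for a given precision.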
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ya Dong, Xingzhong Xiong, Tianyu Li, Lin Zhang, and Jienan Chen "A low-cost implementation method on deep neural network using stochastic computing", Proc. SPIE 12079, Second IYSF Academic Symposium on Artificial Intelligence and Computer Engineering, 120792B (1 December 2021); https://doi.org/10.1117/12.2622719
KEYWORDS: Neural networks, Stochastic processes, Binary data, Neurons, Computer networks, Data conversion, Data storage
