Tsinghua Science and Technology  2020, Vol. 25 Issue (04): 479-486    doi: 10.26599/TST.2019.9010019
    
Hardware Implementation of Spiking Neural Networks on FPGA
Jianhui Han, Zhaolin Li*, Weimin Zheng, Youhui Zhang*
Jianhui Han is with the Institute of Microelectronics, Tsinghua University, Beijing 100084, China. E-mail: hanjh16@mails.tsinghua.edu.cn.
Zhaolin Li is with the Research Institute of Information Technology, Tsinghua University, Beijing 100084, China.
Weimin Zheng and Youhui Zhang are with the Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China. E-mail: zwm-dcs@mail.tsinghua.edu.cn.

Abstract  

Inspired by real biological neural models, Spiking Neural Networks (SNNs) process information with discrete spikes and show great potential for building low-power neural network systems. This paper proposes a hardware implementation of SNNs based on Field-Programmable Gate Arrays (FPGAs). It features a hybrid updating algorithm that combines the advantages of existing algorithms to simplify the hardware design and improve performance. The proposed design supports up to 16 384 neurons and 16.8 million synapses, yet requires minimal hardware resources and achieves a very low power consumption of 0.477 W. A test platform is built from the proposed design on a Xilinx FPGA evaluation board, on which we deploy a classification task on the MNIST dataset. The evaluation results show an accuracy of 97.06% and a frame rate of 161 frames per second.
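The abstract's "hybrid updating algorithm" suggests combining the two standard SNN update strategies: a time-stepped outer loop (simple, hardware-friendly control) with event-driven synaptic accumulation inside each step (work is only done for inputs that actually spiked). The paper's exact scheme is not reproduced here; the following is a minimal software sketch of that general idea using leaky integrate-and-fire neurons, with all sizes and constants (N_IN, N_OUT, threshold, leak) chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: 16 inputs fully connected to 8 LIF neurons.
N_IN, N_OUT, T_STEPS = 16, 8, 100
weights = rng.normal(0.0, 0.5, size=(N_IN, N_OUT))
threshold, leak = 1.0, 0.9

potential = np.zeros(N_OUT)            # membrane potentials
spike_counts = np.zeros(N_OUT, dtype=int)

for t in range(T_STEPS):
    # Time-stepped outer loop: random Bernoulli input spikes this step.
    in_spikes = rng.random(N_IN) < 0.1
    active = np.flatnonzero(in_spikes)
    if active.size:
        # Event-driven inner update: only inputs that fired contribute.
        potential += weights[active].sum(axis=0)
    potential *= leak                  # leaky integration
    fired = potential >= threshold
    spike_counts += fired              # rate-coded output
    potential[fired] = 0.0             # reset after a spike

print("output spike counts:", spike_counts)
```

In hardware, the event-driven inner loop is what keeps power low: synapse memory is read only for the (typically sparse) set of presynaptic spikes, while the fixed time step keeps the control logic simple.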



Key words: Spiking Neural Network (SNN); Field-Programmable Gate Arrays (FPGA); digital circuit; low-power; MNIST
Received: 26 March 2019      Published: 13 January 2020
Corresponding Authors: Zhaolin Li, Youhui Zhang
Cite this article:

Jianhui Han, Zhaolin Li, Weimin Zheng, Youhui Zhang. Hardware Implementation of Spiking Neural Networks on FPGA. Tsinghua Science and Technology, 2020, 25(04): 479-486.

URL:

http://tst.tsinghuajournals.com/10.26599/TST.2019.9010019     OR     http://tst.tsinghuajournals.com/Y2020/V25/I04/479

Fig. 1 Architecture of the proposed system.
Fig. 2 Topology of the benchmark SNN model.
Fig. 3 Structure of the test platform.
Component   Cells used   Utilization (%)
LUT         5381         2.46
FF          7309         1.67
BRAM        40.5         7.43
BUFG        1            3.13
Table 1 Hardware utilization for ZC706 board.
Fig. 4 Power consumption breakdown.
Fig. 5 Classification accuracy vs. bit-widths on MNIST.
Fig. 6 Classification accuracy vs. sparsity on MNIST.
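Figures 5 and 6 report how MNIST accuracy degrades as weight bit-width shrinks and synaptic sparsity grows. The experiments behind such plots can be sketched as two simple weight transformations; the quantization scheme (signed fixed-point with a fixed number of fractional bits) and magnitude-based pruning below are assumptions for illustration, as are the layer shape and parameter values, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.25, size=(784, 100))  # illustrative layer shape

def quantize(w, bits, frac_bits):
    """Round to signed fixed-point with `frac_bits` fractional bits, then clip
    to the representable range of a `bits`-bit integer."""
    scale = 2.0 ** frac_bits
    lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

def prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    thresh = np.sort(np.abs(w), axis=None)[k] if k else 0.0
    return np.where(np.abs(w) < thresh, 0.0, w)

w8 = quantize(weights, bits=8, frac_bits=6)
w_sparse = prune(weights, sparsity=0.5)
print("max quantization error:", np.max(np.abs(weights - w8)))
print("achieved sparsity:", np.mean(w_sparse == 0.0))
```

Sweeping `bits` and `sparsity`, re-running inference with the transformed weights, and recording accuracy yields curves of the kind shown in Figs. 5 and 6; narrow bit-widths shrink the BRAM footprint per synapse, while sparsity reduces the number of synaptic events processed per time step.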