Indexed by SCI and EI | Official journal of the Chemical Industry and Engineering Society of China

Chin.J.Chem.Eng. ›› 2012, Vol. 20 ›› Issue (6): 1219-1224.

• RESEARCH NOTES •

Fast Learning in Spiking Neural Networks by Learning Rate Adaptation*

FANG Huijuan (方慧娟), LUO Jiliang (罗继亮), WANG Fei (王飞)

  1. College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
  • Received: 2012-06-10  Revised: 2012-07-31  Online: 2012-12-28  Published: 2012-12-28
  • Supported by:
    the National Natural Science Foundation of China (60904018, 61203040); the Natural Science Foundation of Fujian Province of China (2009J05147, 2011J01352); the Foundation for Distinguished Young Scholars of Higher Education of Fujian Province of China (JA10004); and the Science Research Foundation of Huaqiao University (09BS617)

  • Corresponding author: FANG Huijuan, E-mail: huijuan.fang@163.com

Abstract: To accelerate supervised learning by the SpikeProp algorithm under the temporal coding paradigm in spiking neural networks (SNNs), three learning rate adaptation methods (the heuristic rule, the delta-delta rule, and the delta-bar-delta rule), originally used to speed up training in artificial neural networks, are applied to develop training algorithms for feedforward SNNs. The performance of these algorithms is investigated in four experiments: the classical XOR (exclusive or) problem, the Iris dataset, fault diagnosis in the Tennessee Eastman process, and Poisson trains of discrete spikes. The results demonstrate that all three learning rate adaptation methods speed up the convergence of SNNs compared with the original SpikeProp algorithm. Furthermore, when the adaptive learning rate is combined with a momentum term, the two modifications balance each other in a beneficial way, yielding rapid and steady convergence. Among the three learning rate adaptation methods, the delta-bar-delta rule performs best: with momentum, it achieves the fastest convergence, the most stable training process, and the highest learning accuracy. The proposed algorithms are simple and efficient, and therefore valuable for practical applications of SNNs.

Key words: spiking neural networks, learning algorithm, learning rate adaptation, Tennessee Eastman process

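For readers unfamiliar with the delta-bar-delta rule named in the abstract, the sketch below shows the generic per-weight update from the artificial-neural-network literature (Jacobs' rule) combined with a momentum term, demonstrated on a toy quadratic error surface. It is not the paper's SpikeProp-specific implementation, and all parameter values (kappa, phi, theta, alpha) are illustrative assumptions rather than values taken from the article.

```python
import numpy as np

def delta_bar_delta_step(w, grad, eta, delta_bar, dw_prev,
                         kappa=0.01, phi=0.2, theta=0.7, alpha=0.9):
    """One delta-bar-delta update with momentum.

    w         : weight vector
    grad      : current gradient dE/dw (the "delta")
    eta       : per-weight learning rates
    delta_bar : exponential average of past gradients (the "delta-bar")
    dw_prev   : previous weight change, used by the momentum term
    """
    same_sign = grad * delta_bar > 0
    opp_sign = grad * delta_bar < 0
    # Grow the rate additively while the gradient keeps its sign,
    # shrink it multiplicatively when the sign flips (oscillation).
    eta = np.where(same_sign, eta + kappa, eta)
    eta = np.where(opp_sign, eta * (1.0 - phi), eta)
    # Momentum-smoothed weight change.
    dw = -eta * grad + alpha * dw_prev
    w = w + dw
    # Update the exponential average of the gradient.
    delta_bar = (1.0 - theta) * grad + theta * delta_bar
    return w, eta, delta_bar, dw

# Toy usage: minimize E(w) = 0.5 * w**2 per component, so dE/dw = w.
w = np.array([2.0, -3.0])
eta = np.full_like(w, 0.1)
delta_bar = np.zeros_like(w)
dw_prev = np.zeros_like(w)
for _ in range(300):
    grad = w  # gradient of the toy quadratic error
    w, eta, delta_bar, dw_prev = delta_bar_delta_step(
        w, eta=eta, grad=grad, delta_bar=delta_bar, dw_prev=dw_prev)
print(np.abs(w).max())
```

The sign test against the averaged gradient, rather than the raw previous gradient, is what distinguishes delta-bar-delta from the simpler delta-delta rule: the averaging damps the rate adaptation against noisy single-step sign changes, which is consistent with the abstract's finding that delta-bar-delta with momentum converges most steadily.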