Discrete Dynamics in Nature and Society
Volume 2010 (2010), Article ID 829692, 27 pages
doi:10.1155/2010/829692
Research Article

Convergence of an Online Split-Complex Gradient Algorithm for Complex-Valued Neural Networks

1Department of Mathematics, Dalian Maritime University, Dalian 116026, China
2Department of Applied Mathematics, Harbin Engineering University, Harbin 150001, China

Received 1 September 2009; Accepted 19 January 2010

Academic Editor: Manuel De La Sen

Copyright © 2010 Huisheng Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The online gradient method has been widely used in training neural networks. In this paper, we consider an online split-complex gradient algorithm for complex-valued neural networks, with a learning rate that is chosen adaptively during the training procedure. Under certain conditions, we first establish the monotonicity of the error function and then prove that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point. A numerical example is given to support the theoretical findings.
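To make the setting concrete, the following is a minimal sketch of an online split-complex gradient update for a single complex-valued neuron. The split-complex approach applies a real activation separately to the real and imaginary parts of the net input, so all gradient computations stay real-valued. The network size, the tanh activation, and the halve-on-increase adaptive learning-rate rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tanh(u):
    """Split-complex activation: real tanh applied separately to Re(u) and Im(u)."""
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

def error(w, X, D):
    """Squared error E(w) = (1/2) * sum |d - y|^2 over the sample set."""
    Y = split_tanh(X @ w)
    return 0.5 * np.sum(np.abs(D - Y) ** 2)

def online_epoch(w, X, D, eta):
    """One pass of online (sample-by-sample) split-complex gradient descent."""
    for x, d in zip(X, D):
        u = x @ w
        y = split_tanh(u)
        e_r, e_i = (d - y).real, (d - y).imag
        s_r = 1.0 - np.tanh(u.real) ** 2   # derivative of tanh at Re(u)
        s_i = 1.0 - np.tanh(u.imag) ** 2   # derivative of tanh at Im(u)
        # Split-complex gradient of 0.5*|d - y|^2 w.r.t. (Re w, Im w),
        # packed back into complex form: grad = -(e_r*s_r + 1j*e_i*s_i)*conj(x).
        w = w + eta * (e_r * s_r + 1j * e_i * s_i) * np.conj(x)
    return w

# Toy data: targets generated by a known complex weight vector.
X = rng.standard_normal((50, 3)) + 1j * rng.standard_normal((50, 3))
w_true = np.array([0.5 - 0.2j, -0.3 + 0.4j, 0.1 + 0.1j])
D = split_tanh(X @ w_true)

w = np.zeros(3, dtype=complex)
eta = 0.05
prev = error(w, X, D)
for _ in range(200):
    w = online_epoch(w, X, D, eta)
    cur = error(w, X, D)
    if cur > prev:
        # Adaptive learning rate (assumed rule): shrink when the error rises.
        eta *= 0.5
    prev = cur

print(prev)  # final training error
```

The monotonicity result in the paper corresponds to the observation that, for a suitably chosen learning rate, `error(w)` is nonincreasing along the weight sequence produced by `online_epoch`.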