
A collective neurodynamic optimization approach to bound-constrained nonconvex optimization. (English) Zbl 1308.90137

Summary: This paper presents a novel collective neurodynamic optimization method for solving nonconvex optimization problems with bound constraints. First, it is proved that a one-layer projection neural network has the property that its equilibria are in one-to-one correspondence with the Karush-Kuhn-Tucker points of the constrained optimization problem. Next, a collective neurodynamic optimization approach is developed by employing a group of recurrent neural networks in the framework of particle swarm optimization, emulating the paradigm of brainstorming. Each recurrent neural network carries out precise constrained local search according to its own neurodynamic equations. By iteratively improving each network's solution using the locally and globally best-known solutions, the group can reach the global optimal solution of a nonconvex optimization problem. The advantages of the proposed collective neurodynamic optimization approach over evolutionary approaches lie in its constraint-handling ability and real-time computational efficiency. The effectiveness and characteristics of the proposed approach are illustrated on a number of multimodal benchmark functions.
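To make the interplay between the neurodynamic local search and the swarm-level coordination concrete, the following is a minimal Python sketch (not the authors' implementation). It assumes a bound-constrained problem min f(x) subject to l <= x <= u, Euler-discretizes one-layer projection neural network dynamics of the form eps dx/dt = -x + P_[l,u](x - grad f(x)), and uses a standard particle swarm velocity update to reset each network's initial state between runs; all parameter names and values (epsilon, dt, w, c1, c2, etc.) are illustrative assumptions, not taken from the paper.

```python
# Sketch of collective neurodynamic optimization for min f(x), l <= x <= u.
# Each "particle" runs a projection neural network to a (locally optimal) KKT point;
# a PSO-style update then resets the initial states for the next round of local search.
import numpy as np

def project(x, l, u):
    """Projection onto the box [l, u]."""
    return np.clip(x, l, u)

def neurodynamic_local_search(f, grad, x0, l, u, epsilon=0.1, dt=0.01, steps=2000):
    """Euler integration of eps * dx/dt = -x + P_[l,u](x - grad f(x)) from x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + (dt / epsilon) * (-x + project(x - grad(x), l, u))
    return x

def collective_neurodynamic_optimization(f, grad, l, u, n_particles=20,
                                          n_rounds=50, w=0.7, c1=1.5, c2=1.5):
    dim = l.size
    rng = np.random.default_rng(0)
    x = rng.uniform(l, u, size=(n_particles, dim))   # initial states of the networks
    v = np.zeros_like(x)                             # PSO velocities
    pbest = x.copy()                                 # locally best-known solutions
    pbest_val = np.full(n_particles, np.inf)
    gbest, gbest_val = None, np.inf                  # globally best-known solution
    for _ in range(n_rounds):
        for i in range(n_particles):
            xi = neurodynamic_local_search(f, grad, x[i], l, u)  # feasible KKT point
            fi = f(xi)
            if fi < pbest_val[i]:
                pbest[i], pbest_val[i] = xi, fi
            if fi < gbest_val:
                gbest, gbest_val = xi.copy(), fi
            x[i] = xi
        # PSO-style update of the initial states for the next round of local search
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = project(x + v, l, u)
    return gbest, gbest_val
```

The point this sketch tries to reflect is the division of labor described in the summary: after each neurodynamic phase, every particle's position is already a feasible stationary point of the bound-constrained problem, so the swarm update only has to steer the search toward better basins of attraction rather than handle constraint violations with penalties or repair mechanisms.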

MSC:

90C26 Nonconvex programming, global optimization
92B20 Neural networks for/in biological studies, artificial life and related topics
