
Random neural network methods and deep learning. (English) Zbl 1493.68335

This paper surveys random neural network (RNN) methods and their use in deep learning. The opening pages review the existing literature and compare the paper with recent articles on the RNN.
Section 2 presents the background of neural networks, deep learning (DL), and the RNN. Section 3 illustrates multi-layer non-negative RNN autoencoders for dimension reduction, which are robust and effective across various types of data. Section 4 summarizes work on the dense RNN, including its mathematical model and DL algorithms. Section 5 investigates the standard RNN and demonstrates its power for DL; the resulting DL tool is shown to be effective and is arguably the most efficient of the five DL approaches compared. Section 6 applies the RNN and its DL algorithms to the detection of anomalies and attacks on IoT devices and to object recognition in images. Finally, Section 7 gives conclusions and perspectives for future work.

MSC:

68T07 Artificial neural networks and deep learning
60K25 Queueing theory (aspects of probability theory)
62M45 Neural nets and related approaches to inference from stochastic processes
