Abstract—This paper describes a unified learning framework for kernel networks with one hidden layer, including models such as radial basis function networks and regularization networks. The learning procedure consists of meta-parameter tuning wrapping the standard parameter optimization step. Several variants of learning are described and tested on various classification and regression problems. It is shown that meta-learning can improve the performance of models at the price of higher time complexity.
Index Terms—Radial basis function networks, shallow neural networks, kernel methods, hyper-parameter tuning.
P. Vidnerová is with the Czech Academy of Sciences, Institute of Computer Science, Prague (e-mail: email@example.com).
R. Neruda is with the Czech Academy of Sciences, Institute of Computer Science and Charles University, Prague (e-mail: firstname.lastname@example.org).
Cite: Petra Vidnerová and Roman Neruda, "Kernel Function Tuning for Single-Layer Neural Networks," International Journal of Machine Learning and Computing vol. 8, no. 4, pp. 354-360, 2018.