Abstract—Two-hidden-layer feedforward neural networks (TLFNs) have been shown to outperform single-hidden-layer feedforward neural networks (SLFNs) for function approximation in many cases. However, their added complexity makes the optimal topology more difficult to find. Given a constant number of hidden nodes nh, this paper investigates how their allocation between the first and second hidden layers (nh = n1 + n2) affects the likelihood of finding the best generaliser. The experiments were carried out over a total of ten public-domain datasets with nh = 8 and 16. The findings were that the heuristic n1 = 0.5nh + 1 has an average probability of at least 0.85 of finding a network with a generalisation error within 0.18% of the best generaliser. Furthermore, the worst case over all datasets was within 0.23% for nh = 8, and within 0.15% for nh = 16. These findings could be used to reduce the complexity of the search for TLFNs from quadratic to linear, or alternatively for ‘topology mapping’ between TLFNs and SLFNs, given the same number of hidden nodes, to compare their performance.
Index Terms—ANN, optimal node ratio, topology mapping, two-hidden-layer feedforward, function approximation.
The authors are with the School of Computing, Engineering and Mathematics, University of Brighton, Brighton, BN2 4GJ, UK (e-mail: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org).
Cite: Alan J. Thomas, Simon D. Walters, Saeed Malekshahi Gheytassi, Robert E. Morgan, and Miltos Petridis, "On the Optimal Node Ratio between Hidden Layers: A Probabilistic Study," International Journal of Machine Learning and Computing vol. 6, no. 5, pp. 241-247, 2016.
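The node-allocation heuristic stated in the abstract can be made concrete with a minimal sketch. The function name below is ours, not the paper's, and assumes nh is even (as in the two experimental cases, nh = 8 and 16):

```python
def allocate_hidden_nodes(nh):
    """Split nh hidden nodes between the two hidden layers of a TLFN
    using the paper's heuristic n1 = 0.5*nh + 1, with n2 = nh - n1.
    Assumes nh is even, as in the reported experiments."""
    n1 = int(0.5 * nh + 1)
    n2 = nh - n1
    return n1, n2

# The two configurations studied in the paper:
for nh in (8, 16):
    n1, n2 = allocate_hidden_nodes(nh)
    print(f"nh={nh}: n1={n1}, n2={n2}")
# nh=8  -> n1=5, n2=3
# nh=16 -> n1=9, n2=7
```

Note that the heuristic always places slightly more than half of the nodes in the first hidden layer, so only one topology per nh needs to be evaluated rather than all nh - 1 possible splits.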