• Jun 14, 2017 News! Vol. 6, No. 3 has been indexed by EI (Inspec)!
  • May 03, 2016 News! Vol. 5, No. 5 has been indexed by EI (Inspec)!
  • May 03, 2016 News! Vol. 5, No. 4 has been indexed by EI (Inspec)!
General Information
    • ISSN: 2010-3700
    • Frequency: Bimonthly
    • DOI: 10.18178/IJMLC
    • Editor-in-Chief: Dr. Lin Huang
    • Executive Editor:  Ms. Cherry L. Chen
    • Abstracting/Indexing: Engineering & Technology Digital Library, Google Scholar, Crossref, ProQuest, Electronic Journals Library, DOAJ, and EI (INSPEC, IET).
    • E-mail: ijmlc@ejournal.net
Editor-in-chief
Dr. Lin Huang
Metropolitan State University of Denver, USA
It is my honor to take on the position of editor-in-chief of IJMLC. We encourage authors to submit papers concerning any branch of machine learning and computing.
IJMLC 2011 Vol.1(1): 20-29 ISSN: 2010-3700
DOI: 10.7763/IJMLC.2011.V1.4

Efficient Recursive Least Squares Methods for the CMAC Neural Network

C. Laufer, G. Coghill

Abstract—The Cerebellar Model Articulation Controller (CMAC) neural network is an associative memory that is biologically inspired by the cerebellum, which is found in the brains of animals. The standard CMAC uses the least mean squares (LMS) algorithm to train its weights. Recently, the recursive least squares (RLS) algorithm was proposed as a superior algorithm for training the CMAC online, as it can converge in just one epoch and does not require tuning of a learning rate. However, the RLS algorithm was found to be very computationally demanding, as its complexity grows with the square of the number of weights, which can be huge for the CMAC. Here, we show a more efficient RLS algorithm that uses inverse QR decomposition and additionally provides a regularized solution, improving generalization. However, while the inverse-QR-based RLS algorithm reduces computation time significantly, it is still not fast enough for use in CMACs of more than two dimensions. To further improve efficiency, we show that by using kernel methods the CMAC's computational complexity can be transformed to become dependent on the number of unique training data. Additionally, it is shown how modeling error can be improved through the use of higher-order basis functions.
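For context, the standard RLS recursion the abstract builds on can be sketched as follows. This is a generic textbook RLS update, not the authors' inverse-QR or kernel variants; the variable names, forgetting factor, and toy fitting loop are illustrative assumptions only. Note how the inverse correlation matrix P is the size of the weight vector squared, which is the O(n²) cost the paper sets out to reduce.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One recursive least squares step.
    w   : current weight vector
    P   : inverse input-correlation matrix (n x n -- the O(n^2) cost)
    x   : input/basis-activation vector
    d   : desired output
    lam : forgetting factor (1.0 = no forgetting)"""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori prediction error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P

# Toy demo: recover a known linear map y = 2*x0 - x1 in one pass.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
w = np.zeros(2)
P = np.eye(2) * 1e3                  # large initial P ~ weak prior on w
for _ in range(200):
    x = rng.standard_normal(2)
    w, P = rls_update(w, P, x, true_w @ x)
```

On noiseless data like this, a single pass drives w to the least-squares solution, consistent with the one-epoch convergence claim above; no learning rate is tuned, only the initial P.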

Index Terms—artificial neural networks, CMAC, kernel methods, recursive least squares

Carl Werner Laufer is a PhD student in the Electrical and Electronic Engineering Department of the University of Auckland, New Zealand (e-mail: clau070@aucklanduni.ac.nz).
George Coghill is a Senior Lecturer in the Electrical and Electronic Engineering Department of the University of Auckland, New Zealand (e-mail: g.coghill@auckland.ac.nz).


Cite: C. Laufer and G. Coghill, "Efficient Recursive Least Squares Methods for the CMAC Neural Network," International Journal of Machine Learning and Computing, vol. 1, no. 1, pp. 20-29, 2011.

Copyright © 2008-2015. International Journal of Machine Learning and Computing. All rights reserved.
E-mail: ijmlc@ejournal.net