Abstract—Recent work on recurrent neural networks and deep learning architectures has demonstrated the power of deep learning in modeling time-dependent input sequences. Specific learning structures such as the higher-order Boltzmann machine, gradient-based learning on manifolds, and recurrent "grammar cells" can learn feature transformations between related input maps and perform well on higher-order, time-related learning and prediction tasks. In this article, we extend the conventional convolutional restricted Boltzmann machine to learn highly abstract features among an arbitrary number of time-related input maps by constructing a layer of multiplicative units that capture the relations among the inputs. In some cases, we care only about how one map transforms into another, so a multiplicative unit takes features from just those two maps. In other cases, however, more than two maps are strongly related, so it is reasonable to let a multiplicative unit learn relations among more input maps; in other words, to find the optimal relational order (the number of related input maps from which the unit extracts features) for each unit. To enable our machine to learn the relational order, we develop a reinforcement-learning method, whose optimality we prove, to train the network.
Index Terms—Artificial neural network, convolutional restricted Boltzmann machine, reinforcement learning, deep learning, temporal relations, relational order.
Zizhuang Wang is with the Xiaogan Senior High School, China (e-mail: email@example.com).
Cite: Zizhuang Wang, "Temporal-Related Convolutional-Restricted-Boltzmann-Machine Capable of Learning Relational Order via Reinforcement Learning Procedure," International Journal of Machine Learning and Computing vol. 7, no. 1, pp. 1-8, 2017.