Abstract—This paper presents a Monte-Carlo based Reinforcement Learning approach called MCRL. MCRL is applied in different domains to construct context-aware models for mobile computing. For mobile devices, MCRL is a lightweight solution that works under tight resource constraints. It uses a multivariate regression method to construct a probability distribution, which in turn is used to build a value function. A machine learning algorithm maps the action schema to the contexts. MCRL is evaluated on several benchmark datasets, and the results are compared with its state-of-the-art rivals. The main rivals of MCRL are based on the latest variants of Naive Bayes, bootstrapping (Ada), and Decision Tree (J48). The results show that MCRL finds quality solutions in far less time than its rivals.
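The abstract describes estimating a value function from Monte-Carlo sampling. As a purely illustrative sketch (the toy chain environment, state space, and function names below are our own assumptions, not the paper's method), the core idea of Monte-Carlo value estimation is to average discounted returns over sampled episodes:

```python
# Illustrative Monte-Carlo value estimation on a toy deterministic chain.
# All environment details here are hypothetical, not from the paper.
# States 0..N; each step advances one state; reward 1 on reaching state N.

N = 5
GAMMA = 0.9

def rollout(state):
    """Run one episode from `state` and return its discounted return."""
    g, discount = 0.0, 1.0
    while state < N:
        state += 1                       # deterministic "advance" action
        reward = 1.0 if state == N else 0.0
        g += discount * reward
        discount *= GAMMA
    return g

def mc_value(state, episodes=200):
    """Monte-Carlo estimate: average the return over sampled episodes."""
    return sum(rollout(state) for _ in range(episodes)) / episodes

# On this deterministic chain the estimate equals gamma^(N - state - 1).
print(round(mc_value(2), 3))  # → 0.81 (= 0.9**2)
```

In MCRL these sampled returns are not stored per state; per the abstract, a multivariate regression fits a probability distribution over them, which generalizes the value function across contexts.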
Index Terms—Context-awareness, regression, Monte-Carlo simulations, reinforcement learning.
Muath Alrammal and Munir Naveed are with the Computer Information Sciences, Higher Colleges of Technology, UAE (e-mail: email@example.com, firstname.lastname@example.org).
Cite: Muath Alrammal and Munir Naveed, "Monte-Carlo Based Reinforcement Learning (MCRL)," International Journal of Machine Learning and Computing, vol. 10, no. 2, pp. 227-232, 2020. Copyright © 2020 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).