IJMLC 2015 Vol.5(5): 353-358 ISSN: 2010-3700
DOI: 10.7763/IJMLC.2015.V5.533

Hierarchical Reinforcement Learning with Context Detection (HRL-CD)

Yiğit E. Yücesoy and M. Borahan Tümer

Abstract—A reinforcement learning (RL) agent typically assumes a stationary environment, an assumption that does not hold in most real-world problems. Most RL approaches adapt to slow changes by forgetting the previous dynamics of the environment. Reinforcement learning with context detection (RL-CD) is a technique that detects changes in the environment's nature and provides the agent with the capability to learn the different dynamics of a non-stationary environment. In this study we propose an autonomous agent that learns a dynamic environment by taking advantage of hierarchical reinforcement learning (HRL), and we present how the hierarchical structure can be integrated into RL-CD to speed up the convergence of a policy.
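The context-detection idea the abstract refers to can be illustrated with a minimal sketch (a hypothetical simplification for intuition, not the authors' implementation or the exact RL-CD algorithm): the agent keeps several partial models of the environment's dynamics, scores each by how well it predicts recent transitions, and spawns a new model when no existing model explains recent experience well enough. The class names, the quality-trace update, and all numeric parameters below are illustrative assumptions.

```python
# Sketch of context detection over partial models (illustrative only).
class PartialModel:
    def __init__(self):
        self.counts = {}    # (s, a) -> {s': visit count}, empirical dynamics
        self.quality = 1.0  # running trace of recent prediction accuracy

    def predict_prob(self, s, a, s2):
        # Empirical probability of s2 given (s, a); uninformed prior if unseen.
        seen = self.counts.get((s, a), {})
        total = sum(seen.values())
        if total == 0:
            return 0.5
        return seen.get(s2, 0) / total


class ContextDetector:
    def __init__(self, min_quality=0.3):
        self.models = [PartialModel()]
        self.active = 0
        self.min_quality = min_quality

    def observe(self, s, a, s2):
        # Re-score every model on the observed transition.
        for m in self.models:
            m.quality = 0.9 * m.quality + 0.1 * m.predict_prob(s, a, s2)
        best = max(range(len(self.models)),
                   key=lambda i: self.models[i].quality)
        if self.models[best].quality < self.min_quality:
            # No model explains recent experience: assume a new context.
            self.models.append(PartialModel())
            best = len(self.models) - 1
        # Only the active (best) model's dynamics estimate is updated.
        self.active = best
        m = self.models[best]
        m.counts.setdefault((s, a), {}).setdefault(s2, 0)
        m.counts[(s, a)][s2] += 1
```

Run in a toy environment whose transition rule flips partway through, the detector keeps its single model while the dynamics are stable, then creates and switches to a second model after the change, which is the behavior the paper's hierarchical extension builds on.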

Index Terms—Reinforcement learning, autonomous agent, hierarchical reinforcement learning, non-stationary environment, betweenness centrality, prioritized sweeping.

Yiğit E. Yücesoy is with the Halic University, Istanbul, Turkey (e-mail: efe@ycsoy.com).
M. Borahan Tümer is with the Marmara University, Istanbul, Turkey (e-mail: borahan.tumer@marmara.edu.tr).


Cite: Yiğit E. Yücesoy and M. Borahan Tümer, "Hierarchical Reinforcement Learning with Context Detection (HRL-CD)," International Journal of Machine Learning and Computing vol.5, no. 5, pp. 353-358, 2015.

General Information

  • E-ISSN: 2972-368X
  • Abbreviated Title: Int. J. Mach. Learn.
  • Frequency: Quarterly
  • DOI: 10.18178/IJML
  • Editor-in-Chief: Dr. Lin Huang
  • Executive Editor: Ms. Cherry L. Chen
  • Abstracting/Indexing: Inspec (IET), Google Scholar, Crossref, ProQuest, Electronic Journals Library, CNKI.
  • E-mail: ijml@ejournal.net
