
Transfer in variable-reward hierarchical reinforcement learning. (English) Zbl 1470.68147

Summary: Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
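The vector-valued value-function idea sketched in the summary can be illustrated with a short, hypothetical code sketch (not the authors' implementation). Under the paper's assumption that rewards are linear in a shared set of reward features, each solved SMDP's optimal value function can be stored as a vector over those features; a new SMDP with reward weights w is initialized from the stored value function that scores best under w, and a newly learned value function is added to the library only if it improves on the stored ones by more than a threshold. All names (VRRLLibrary, gain_threshold, the tabular state representation) are assumptions made for illustration.

# Minimal sketch of the variable-reward transfer idea described above.
# Names and the tabular representation are illustrative, not from the paper.
import numpy as np

class VRRLLibrary:
    """Stores vector-valued value functions V(s) in R^k for solved SMDPs whose
    rewards are linear in k shared reward features: r_w(s, a) = w . f(s, a),
    so the scalar value of state s under weights w is w . V(s)."""

    def __init__(self, n_states: int, n_features: int, gain_threshold: float = 1e-3):
        self.n_states = n_states
        self.n_features = n_features
        self.gain_threshold = gain_threshold
        self.value_vectors = []  # list of (n_states, n_features) arrays

    def initialize(self, w: np.ndarray, start_state: int) -> np.ndarray:
        """Initialize the value function for a new SMDP with reward weights w:
        pick the stored vector-valued value function that is best for w at the
        start state (zeros if the library is empty)."""
        if not self.value_vectors:
            return np.zeros((self.n_states, self.n_features))
        scores = [w @ V[start_state] for V in self.value_vectors]
        return self.value_vectors[int(np.argmax(scores))].copy()

    def maybe_store(self, w: np.ndarray, V_new: np.ndarray, start_state: int) -> None:
        """After learning converges for weights w, keep V_new only if it beats
        the best stored value for w by more than a threshold, keeping the
        library compact."""
        best = max((w @ V[start_state] for V in self.value_vectors), default=-np.inf)
        if w @ V_new[start_state] > best + self.gain_threshold:
            self.value_vectors.append(V_new)

In this sketch, a learner would call initialize for each new weight vector drawn from the task distribution, run its RL algorithm from that initialization, and then call maybe_store; in the hierarchical setting of the paper the same bookkeeping would be applied per subtask value function rather than to a single flat one.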

MSC:

68T05 Learning and adaptive systems in artificial intelligence
90C40 Markov and semi-Markov decision processes
Full Text: DOI

References:

[1] Abbeel, P., & Ng, A. (2004). Apprenticeship learning via inverse reinforcement learning. In Proceedings of the ICML.
[2] Andre, D., & Russell, S. (2002). State abstraction for programmable reinforcement learning agents. In Eighteenth national conference on artificial intelligence (pp. 119–125).
[3] Dietterich, T. (2000). Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13, 227–303. · Zbl 0963.68085
[4] Feinberg, E., & Schwartz, A. (1995). Constrained Markov decision models with weighted discounted rewards. Mathematics of Operations Research, 20(2), 302–320. · Zbl 0837.90120 · doi:10.1287/moor.20.2.302
[5] Gabor, Z., Kalmar, Z., & Szepesvari, C. (1998). Multi-criteria reinforcement learning. In Proceedings of the ICML.
[6] Guestrin, C., Koller, D., & Parr, R. (2001). Multiagent planning with factored MDPs. In Proceedings NIPS-01.
[7] Guestrin, C., Koller, D., Gearhart, C., & Kanodia, N. (2003). Generalizing plans to new environments in relational MDPs. In International joint conference on artificial intelligence.
[8] Kaelbling, L., Littman, M., & Cassandra, A. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1–2), 99–134. · Zbl 0908.68165
[9] Liu, Y., & Stone, P. (2006). Value-function-based transfer for reinforcement learning using structure mapping. In Proceedings of the twenty-first national conference on artificial intelligence.
[10] Mausam, & Weld, D. S. (2003). Solving relational MDPs with first-order machine learning. In Proceedings of the ICAPS workshop on planning under uncertainty and incomplete information.
[11] Mehta, N., & Tadepalli, P. (2005). Multi-agent shared hierarchy reinforcement learning. In ICML workshop on rich representations in reinforcement learning.
[12] Natarajan, S., & Tadepalli, P. (2005). Dynamic preferences in multi-criteria reinforcement learning. In Proceedings of the ICML.
[13] Parr, R. (1998). Flexible decomposition algorithms for weakly coupled Markov decision problems. In UAI.
[14] Price, B., & Boutilier, C. (2003). Accelerating reinforcement learning through implicit imitation. Journal of Artificial Intelligence Research, 19, 569–629. · Zbl 1036.68083
[15] Puterman, M. L. (1994). Markov decision processes. New York: Wiley. · Zbl 0829.90134
[16] Russell, S., & Zimdars, A. (2003). Q-decomposition for reinforcement learning agents. In Proceedings of ICML-03.
[17] Schwartz, A. (1993). A reinforcement learning method for maximizing undiscounted rewards. In Proceedings of the 10th international conference on machine learning. San Mateo: Morgan Kaufmann.
[18] Seri, S., & Tadepalli, P. (2002). Model-based hierarchical average reward reinforcement learning. In Proceedings of the ICML (pp. 562–569).
[19] Sutton, R., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1–2), 181–211. · Zbl 0996.68151 · doi:10.1016/S0004-3702(99)00052-1
[20] Tadepalli, P., & Ok, D. (1998). Model-based average reward reinforcement learning. Artificial Intelligence, 100, 177–224. · Zbl 0906.68122 · doi:10.1016/S0004-3702(98)00002-2
[21] Taylor, M., Stone, P., & Liu, Y. (2005). Value functions for RL-based behavior transfer: a comparative study. In Proceedings of the twentieth national conference on artificial intelligence.
[22] Torrey, L., Shavlik, J., Walker, T., & Maclin, R. (2007). Relational macros for transfer in reinforcement learning. In Proceedings of the 17th conference on inductive logic programming. · Zbl 1136.68507
[23] Weeks, J. (1985). The shape of space: how to visualize surfaces and three-dimensional manifolds. · Zbl 0571.57001
[24] White, D. (1982). Multi-objective infinite-horizon discounted Markov decision processes. Journal of Mathematical Analysis and Applications, 89, 639–647. · Zbl 0496.90083 · doi:10.1016/0022-247X(82)90122-6