Graph kernels and Gaussian processes for relational reinforcement learning. (English) Zbl 1263.68131

Horváth, Tamás (ed.) et al., Inductive logic programming. 13th international conference, ILP 2003, Szeged, Hungary, September 29 – October 1, 2003. Proceedings. Berlin: Springer (ISBN 3-540-20144-0/pbk). Lecture Notes in Computer Science 2835, 146-163 (2003).
Summary: Relational reinforcement learning is a Q-learning technique for relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. In this case, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-values must not only be very reliable, but must also be able to handle the relational representation of state-action pairs.
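The Q-learning backbone referred to above can be sketched in its ordinary tabular form (the state and action names below are hypothetical placeholders; the summary's point is precisely that relational domains have no natural tabular encoding, which is why a generalising regressor over state-action pairs is needed instead):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the
    bootstrapped target r + gamma * max_a' Q(s', a')."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q[(s, a)]

# Hypothetical blocks-world-style transition: taking action "move(a,b)"
# in state "s0" yields reward 1.0 and successor state "s1".
Q = {}
q_update(Q, "s0", "move(a,b)", 1.0, "s1", ["move(a,b)"])
```

In a relational setting the dictionary lookup is replaced by a learned approximator over structured state-action descriptions, which is where the paper's Gaussian-process regressor enters.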
In this paper we investigate the use of Gaussian processes to approximate the quality of state-action pairs. In order to employ Gaussian processes in a relational setting, we use graph kernels as the covariance function between state-action pairs. Experiments conducted in the blocks world show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance-based regression as a generalisation algorithm for relational reinforcement learning.
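The combination described above can be illustrated with a minimal sketch: a random-walk graph kernel (one common family of graph kernels; the paper's exact kernel may differ) serves as the covariance between state-action pairs encoded as graphs, and the standard GP-regression posterior mean predicts Q-values. The adjacency matrices and observed Q-values below are toy assumptions, not data from the paper:

```python
import numpy as np

def walk_kernel(A1, A2, lam=0.1):
    """Random-walk graph kernel: sums geometrically discounted
    common walks via the direct-product graph kron(A1, A2)."""
    Ax = np.kron(A1, A2)                 # product-graph adjacency
    n = Ax.shape[0]
    # 1^T (I - lam * Ax)^{-1} 1 counts discounted walks of all lengths
    x = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
    return float(np.ones(n) @ x)

def gp_mean(K, k_star, q, noise=1e-3):
    """GP-regression posterior mean: k_*^T (K + noise * I)^{-1} q."""
    alpha = np.linalg.solve(K + noise * np.eye(len(q)), q)
    return float(k_star @ alpha)

# Toy state-action pairs represented as small graphs (chains of 2 and 3 nodes).
path2 = np.array([[0., 1.], [1., 0.]])
path3 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
graphs = [path2, path3]
q_values = np.array([1.0, 0.0])          # assumed observed Q-values

K = np.array([[walk_kernel(a, b) for b in graphs] for a in graphs])
k_star = np.array([walk_kernel(path2, g) for g in graphs])
pred = gp_mean(K, k_star, q_values)      # GP estimate of path2's Q-value
```

Because the kernel compares walk structure rather than a fixed attribute vector, the same machinery applies to relational state-action pairs of varying size, which is what lets the GP generalise across blocks-world configurations.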
For the entire collection see [Zbl 1024.00052].

MSC:

68T05 Learning and adaptive systems in artificial intelligence
Full Text: DOI