General Game Playing with Relational Markov Decision Processes

Abstract:

Markov decision processes (MDPs) are a framework for encoding domains in order to tackle problems of sequential decision making under uncertainty. Dynamic programming techniques make it possible to solve these MDPs and thus to synthesize optimal policies (i.e. agent behaviours) that achieve their goals.
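
The dynamic-programming approach mentioned above can be sketched as value iteration over a small tabular MDP. The three-state example below is purely illustrative: the states, actions, transition probabilities, and rewards are assumptions for demonstration, not taken from the thesis project.

```python
# Minimal value-iteration sketch for a tabular MDP (illustrative only;
# the concrete MDP below is a hypothetical example).

GAMMA = 0.9   # discount factor
THETA = 1e-8  # convergence threshold

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 0.0)], "go": [(1.0, 2, 5.0)]},
    2: {"stay": [(1.0, 2, 0.0)]},  # absorbing state
}

def value_iteration(transitions, gamma=GAMMA, theta=THETA):
    """Compute the optimal state values and a greedy policy."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Bellman optimality update: best expected return over actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:
            break
    # Extract a greedy policy from the converged value function.
    policy = {}
    for s, actions in transitions.items():
        policy[s] = max(
            actions,
            key=lambda a, acts=actions: sum(
                p * (r + gamma * V[s2]) for p, s2, r in acts[a]
            ),
        )
    return V, policy

V, policy = value_iteration(transitions)
```

A relational MDP solver such as the Maude-based one mentioned below operates on lifted (first-order) state descriptions rather than an enumerated table, but the underlying Bellman backup is the same.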

In this thesis, a tool is to be designed that extends an existing MDP solver for relational domain representations – written in the language Maude – so that it can participate in general game playing (http://logic.stanford.edu/ggp/chapters/cover.html, http://www.general-game-playing.de/).

Download:
N/A