Infinite Markov Decision Processes with Decision Independent Subset of States

Authors

  • András Mészáros
  • Miklós Telek

Keywords

Equivalent representation, Markov chain with regular structure, Markov decision process, state space reduction.

Abstract

In this work, we examine Markov Decision Processes (MDPs) that are composed of a finite subset of states with decisions and a (potentially infinite) subset of states without decisions. We show that, if certain parameters of these MDPs can be computed efficiently, a transformation to a condensed representation is possible, in which only the states with decisions are kept, and the rewards and transition rates are modified such that the optimal policy is the same in the original and the modified MDP. When the subset of states without decisions is infinite, the analysis relies on structural regularity of the process. Two practically important structures are considered: the M/G/1-type and the G/M/1-type.
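The sketch below is not the paper's construction (which handles infinite, M/G/1- and G/M/1-type structured blocks in continuous time); it is a minimal finite, discrete-time, discounted toy illustration of the condensation idea: the value equations of the decision-free states are solved linearly and substituted back, yielding a reduced Bellman equation on the decision states with modified rewards and an effective (discount-absorbing) transition kernel. All names and sizes (n_dec, n_free, n_act, gamma) are illustrative assumptions.

```python
# Toy illustration (assumed setup, not the paper's algorithm): condensing an MDP
# by eliminating decision-independent states in a finite discounted model.
import numpy as np

rng = np.random.default_rng(0)
n_dec, n_free, n_act, gamma = 3, 4, 2, 0.9   # arbitrary sizes and discount

def random_stochastic(rows, cols):
    """Random row-stochastic block."""
    m = rng.random((rows, cols))
    return m / m.sum(axis=1, keepdims=True)

# Full MDP: the first n_dec states have actions; the remaining n_free states are
# decision independent (identical rows and rewards for every action).
n = n_dec + n_free
P = np.empty((n_act, n, n))
r = np.empty((n_act, n))
free_block = random_stochastic(n_free, n)
free_reward = rng.random(n_free)
for a in range(n_act):
    P[a, :n_dec] = random_stochastic(n_dec, n)
    P[a, n_dec:] = free_block            # same dynamics for every action
    r[a, :n_dec] = rng.random(n_dec)
    r[a, n_dec:] = free_reward           # same reward for every action

def value_iteration(P, r, gamma, iters=5000):
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        V = np.max(r + gamma * P @ V, axis=0)
    return V

V_full = value_iteration(P, r, gamma)
policy_full = np.argmax(r[:, :n_dec] + gamma * P[:, :n_dec] @ V_full, axis=0)

# Condensed MDP on the decision states: solve V_N = r_N + gamma (P_NN V_N + P_ND V_D)
# and substitute, giving modified rewards and a sub-stochastic effective kernel.
P_NN, P_ND = free_block[:, n_dec:], free_block[:, :n_dec]
M = np.linalg.inv(np.eye(n_free) - gamma * P_NN)      # (I - gamma * P_NN)^{-1}
r_tilde = np.empty((n_act, n_dec))
K_tilde = np.empty((n_act, n_dec, n_dec))
for a in range(n_act):
    P_DD, P_DN = P[a, :n_dec, :n_dec], P[a, :n_dec, n_dec:]
    r_tilde[a] = r[a, :n_dec] + gamma * P_DN @ M @ free_reward
    K_tilde[a] = gamma * P_DD + gamma**2 * P_DN @ M @ P_ND

V_red = np.zeros(n_dec)
for _ in range(5000):                                  # reduced value iteration
    V_red = np.max(r_tilde + K_tilde @ V_red, axis=0)
policy_red = np.argmax(r_tilde + K_tilde @ V_red, axis=0)

print("full-MDP policy on decision states:", policy_full)
print("condensed-MDP policy:              ", policy_red)  # coincides with the above
```

In this toy setting the two printed policies agree because the condensed Bellman operator on the decision states is obtained by exact substitution of the decision-free values; the paper's contribution lies in computing the analogous modified parameters efficiently when the decision-free block is infinite but structured.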

Published

2019-03-01

Issue

Section

Articles