Publication details
On the Complexity of Value Iteration
| Authors | |
|---|---|
| Year of publication | 2019 |
| Type | Article in Proceedings |
| Conference | Proceedings of the 46th International Colloquium on Automata, Languages, and Programming (ICALP 2019) |
| MU Faculty or unit | |
| Citation | |
| DOI | http://dx.doi.org/10.4230/LIPIcs.ICALP.2019.102 |
| Keywords | Markov decision processes; probabilistic verification; value iteration |
| Description | Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating, n times, a recurrence equation naturally associated with the MDP (see the illustrative sketch below the table). At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXPTIME-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. To obtain this main result, we develop several stepping stones that yield results of independent interest. For instance, we show that it is EXPTIME-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. We also provide new complexity results for the bounded halting problem in linear-update counter machines. |
| Related projects | |
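
For intuition about the algorithm discussed in the Description, here is a minimal sketch of finite-horizon value iteration on a toy MDP. The data layout, the function name `value_iteration`, and the two-state example are illustrative assumptions, not taken from the paper. The sketch iterates the Bellman recurrence V_{k+1}(s) = max_a [ r(s,a) + Σ_{s'} P(s'|s,a) · V_k(s') ] starting from V_0 = 0 and records a maximizing action at each stage.

```python
# A minimal sketch of finite-horizon value iteration on a toy MDP.
# The MDP, data layout, and function name are illustrative assumptions,
# not taken from the paper.


def value_iteration(states, actions, P, r, n):
    """Return the maximal n-step payoff V_n and a horizon-n optimal policy.

    V[s] after k iterations is the maximal expected k-step payoff from s.
    policy[k] maps each state to an optimal action when k+1 steps remain.
    """
    V = {s: 0.0 for s in states}          # V_0 = 0
    policy = []
    for _ in range(n):                    # iterate the recurrence n times
        new_V, stage = {}, {}
        for s in states:
            best_val, best_act = float("-inf"), None
            for a in actions[s]:
                # Bellman backup: immediate reward + expected continuation value
                q = r[s][a] + sum(p * V[t] for t, p in P[s][a])
                if q > best_val:
                    best_val, best_act = q, a
            new_V[s], stage[s] = best_val, best_act
        V = new_V
        policy.append(stage)
    return V, policy


# Toy two-state MDP (hypothetical example).
states = ["s0", "s1"]
actions = {"s0": ["stay", "go"], "s1": ["loop"]}
P = {
    "s0": {"stay": [("s0", 1.0)], "go": [("s0", 0.5), ("s1", 0.5)]},
    "s1": {"loop": [("s1", 1.0)]},
}
r = {"s0": {"stay": 0.0, "go": 1.0}, "s1": {"loop": 2.0}}

V4, pi4 = value_iteration(states, actions, P, r, n=4)
print(V4)        # maximal 4-step payoff from each state
print(pi4[-1])   # optimal first action when all 4 steps remain
```

Note that this straightforward loop performs a number of backups linear in n, i.e. exponential in the length of n's binary encoding; the complexity of the underlying problem when n is given in binary is exactly what the paper settles.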