Faster algorithms for quantitative analysis of MCs and MDPs with small treewidth. (English) Zbl 1517.68263

Hung, Dang Van (ed.) et al., Automated technology for verification and analysis. 18th international symposium, ATVA 2020, Hanoi, Vietnam, October 19–23, 2020. Proceedings. Cham: Springer. Lect. Notes Comput. Sci. 12302, 253-270 (2020).
Summary: Discrete-time Markov Chains (MCs) and Markov Decision Processes (MDPs) are two standard formalisms in system analysis. Their main associated quantitative objectives are hitting probabilities, discounted sum, and mean payoff. Although there are many techniques for computing these objectives in general MCs/MDPs, they have not been thoroughly studied in terms of parameterized algorithms, particularly when treewidth is used as the parameter. This is in sharp contrast to qualitative objectives for MCs, MDPs, and graph games, for which treewidth-based algorithms yield significant complexity improvements. In this work, we show that treewidth can also be used to obtain faster algorithms for the quantitative problems. For an MC with \(n\) states and \(m\) transitions, we show that each of the classical quantitative objectives can be computed in \(O((n+m)\cdot t^2)\) time, given a tree decomposition of the MC with width \(t\). Our results also imply a bound of \(O(\kappa \cdot (n+m)\cdot t^2)\) for each objective on MDPs, where \(\kappa\) is the number of strategy-iteration refinements required for the given input and objective. Finally, we present an experimental evaluation of our new algorithms on low-treewidth MCs and MDPs obtained from the DaCapo benchmark suite. Our experiments show that on low-treewidth MCs and MDPs, our algorithms outperform existing well-established methods by one or more orders of magnitude.
For the entire collection see [Zbl 1502.68030].
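To illustrate the hitting-probabilities objective mentioned in the summary: for a finite MC it reduces to solving the linear system \((I-Q)x = b\), where \(Q\) restricts the transition matrix to transient non-target states and \(b\) collects one-step probabilities into the target set. The sketch below is the standard dense-solve baseline only, not the paper's treewidth-based algorithm (which performs Gaussian elimination along a tree decomposition to reach \(O((n+m)\cdot t^2)\) time); all function and variable names are our own.

```python
import numpy as np

def hitting_probabilities(P, target, never=frozenset()):
    """Baseline hitting-probability computation for a finite MC.

    P      : n x n row-stochastic transition matrix.
    target : set of target states (hit with probability 1 from themselves).
    never  : states known to have hitting probability 0 (e.g. absorbing
             non-target states); excluding them keeps I - Q nonsingular.
    Returns a length-n vector h with h[s] = Pr[reach target from s].
    """
    n = P.shape[0]
    others = [s for s in range(n) if s not in target and s not in never]
    # Q: transitions among the remaining (transient) states.
    Q = P[np.ix_(others, others)]
    # b: one-step probabilities from transient states into the target set.
    b = P[np.ix_(others, sorted(target))].sum(axis=1)
    x = np.linalg.solve(np.eye(len(others)) - Q, b)
    h = np.zeros(n)
    for t in target:
        h[t] = 1.0
    for i, s in enumerate(others):
        h[s] = x[i]
    return h

# Gambler's-ruin chain on {0, 1, 2, 3}: states 0 and 3 absorbing,
# fair coin flips in between; target is state 3.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])
h = hitting_probabilities(P, target={3}, never={0})
# h is approximately [0, 1/3, 2/3, 1]
```

The dense solve above costs \(O(n^3)\) in general; the paper's contribution is that, on a tree decomposition of width \(t\), the same system can be eliminated bag by bag in \(O((n+m)\cdot t^2)\) time.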

MSC:

68Q87 Probability in computer science (algorithm analysis, random structures, phase transitions, etc.)
60J20 Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.)
68W40 Analysis of algorithms
90C40 Markov and semi-Markov decision processes