Publication details
Mean-Payoff Optimization in Continuous-Time Markov Chains with Parametric Alarms
| Authors | |
| --- | --- |
| Year of publication | 2017 |
| Type | Article in Proceedings |
| Conference | Quantitative Evaluation of Systems |
| MU Faculty or unit | |
| Citation | |
| DOI | http://dx.doi.org/10.1007/978-3-319-66335-7_12 |
| Field | Informatics |
| Keywords | parameter synthesis; continuous-time Markov chains; non-Markovian distributions; Markov decision process; policy iteration; generalized semi-Markov process; Markov regenerative process |
| Description | Continuous-time Markov chains with alarms (ACTMCs) allow for alarm events that can be non-exponentially distributed. In parametric ACTMCs, the parameters of the alarm-event distributions are not given explicitly and can be the subject of parameter synthesis. We present an algorithm that solves the epsilon-optimal parameter synthesis problem for parametric ACTMCs with long-run average optimization objectives. Our approach is based on a reduction of the problem to finding long-run average optimal strategies in semi-Markov decision processes (semi-MDPs) and on a sufficiently fine discretization of the parameter (i.e., action) space. Since the set of actions in the discretized semi-MDP can be very large, a straightforward approach based on explicit construction of the action space fails to solve even simple instances of the problem. The presented algorithm instead uses an enhanced policy iteration operating on symbolic representations of the action space. The soundness of the algorithm is established for parametric ACTMCs whose alarm-event distributions satisfy four mild assumptions, shown to hold in particular for uniform, Dirac, exponential, and Weibull distributions, and satisfied by many other distributions as well. An experimental implementation shows that the symbolic technique substantially improves the efficiency of the synthesis algorithm and makes it possible to solve instances of realistic size. |
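
The description reduces epsilon-optimal parameter synthesis to long-run average (mean-payoff) optimization in a semi-MDP whose actions correspond to discretized parameter values. Purely as an illustration of that underlying optimization step, and not of the paper's symbolic algorithm, the following is a minimal sketch of standard average-reward policy iteration on an explicitly discretized, unichain semi-MDP. The function name `policy_iteration_smdp` and the input arrays `P`, `r`, and `tau` (transition probabilities, expected rewards, and expected sojourn times per state-action pair) are assumptions introduced here for the example.

```python
import numpy as np

def policy_iteration_smdp(P, r, tau, max_iter=100):
    """Average-reward policy iteration for a finite unichain semi-MDP (illustrative sketch).

    P[s, a, s2] -- probability of moving from state s to s2 under action a
    r[s, a]     -- expected reward accumulated while action a runs in state s
    tau[s, a]   -- expected sojourn time of action a in state s (must be positive)
    Returns (gain, policy): the long-run average reward and a policy attaining it.
    """
    n_states, n_actions, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    idx = np.arange(n_states)

    for _ in range(max_iter):
        # Policy evaluation: solve  h[s] + gain * tau[s, pi(s)] = r[s, pi(s)] + sum_s2 P[s, pi(s), s2] * h[s2]
        # for the gain and the bias vector h, fixing h[0] = 0 as the reference state.
        A = np.eye(n_states) - P[idx, policy, :]   # coefficients of the bias h
        b = r[idx, policy]
        A[:, 0] = tau[idx, policy]                 # reuse h[0]'s column for the gain (h[0] = 0)
        x = np.linalg.solve(A, b)
        gain, h = x[0], np.concatenate(([0.0], x[1:]))

        # Policy improvement: in every state, pick the action maximizing the
        # relative value  r[s, a] - gain * tau[s, a] + sum_s2 P[s, a, s2] * h[s2].
        q = r - gain * tau + P @ h                 # shape (n_states, n_actions)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break                                  # stable policy -> optimal
        policy = new_policy

    return gain, policy
```

The explicit `argmax` over all discretized actions in the improvement step is precisely what becomes infeasible when the discretization is fine, which is why, per the description above, the paper performs the improvement step on symbolic representations of the action space rather than by enumerating it.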