"Logical Methods in Computer Science" . "Ku\u010Dera, Anton\u00EDn" . "14330" . "29"^^ . "http://www.lmcs-online.org/ojs/viewarticle.php?id=1109&layout=abstract" . "Markov Decision Processes with Multiple Long-Run Average Objectives" . . "4"^^ . . "RIV/00216224:14330/14:00074494" . "P(GPP202/12/P612)" . "Markov Decision Processes with Multiple Long-Run Average Objectives" . "[805E257BDA96]" . "Bro\u017Eek, V\u00E1clav" . . "Br\u00E1zdil, Tom\u00E1\u0161" . "Markov Decision Processes with Multiple Long-Run Average Objectives"@en . "RIV/00216224:14330/14:00074494!RIV15-GA0-14330___" . . . "Forejt, Vojt\u011Bch" . . "Markov Decision Processes with Multiple Long-Run Average Objectives"@en . . "10" . "5"^^ . "27459" . . . "000333744700001" . . "10.2168/LMCS-10(1:13)2014" . "Markov decision processes; mean-payoff reward; multi-objective optimisation; formal verification"@en . "We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e." . "Chatterjee, Krishnendu" . "1860-5974" . "1" . . . . . . "DE - Spolkov\u00E1 republika N\u011Bmecko" . . . "We study Markov decision processes (MDPs) with multiple limit-average (or mean-payoff) functions. We consider two different objectives, namely, expectation and satisfaction objectives. Given an MDP with k limit-average functions, in the expectation objective the goal is to maximize the expected limit-average value, and in the satisfaction objective the goal is to maximize the probability of runs such that the limit-average value stays above a given vector. We show that under the expectation objective, in contrast to the case of one limit-average function, both randomization and memory are necessary for strategies even for epsilon-approximation, and that finite-memory randomized strategies are sufficient for achieving Pareto optimal values. Under the satisfaction objective, in contrast to the case of one limit-average function, infinite memory is necessary for strategies achieving a specific value (i.e."@en . . .