"Heidelberg Dordrecht London New York" . . . . "0302-9743" . . "8"^^ . "Kwiatkowska, Marta" . . "Verification of Markov Decision Processes using Learning Algorithms"@en . "9783319119359" . "stochastic systems; verification; machine learning; statistical model checking; reinforcement learning"@en . "S" . "RIV/00216224:14330/14:00075875" . . "RIV/00216224:14330/14:00075875!RIV15-MSM-14330___" . "Chmel\u00EDk, Martin" . "Verification of Markov Decision Processes using Learning Algorithms" . "Chatterjee, Krishnendu" . . . "10.1007/978-3-319-11936-6_8" . "[DE49D9716FC1]" . . . "Springer-Verlag" . "Automated Technology for Verification and Analysis - 12th International Symposium, ATVA 2014" . "Forejt, Vojt\u011Bch" . "14330" . . . . "K\u0159et\u00EDnsk\u00FD, Jan" . . "53238" . "Br\u00E1zdil, Tom\u00E1\u0161" . "We present a general framework for applying machine-learning algorithms to the verification of Markov decision processes (MDPs). The primary goal of these techniques is to improve performance by avoiding an exhaustive exploration of the state space. Our framework focuses on probabilistic reachability, which is a core property for verification, and is illustrated through two distinct instantiations. The first assumes that full knowledge of the MDP is available, and performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP, and yields probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. The latter is the first extension of statistical model checking for unbounded properties in MDPs."@en . . "Verification of Markov Decision Processes using Learning Algorithms"@en . "2"^^ . "Ujma, Mateusz" . "Parker, David" . "We present a general framework for applying machine-learning algorithms to the verification of Markov decision processes (MDPs). The primary goal of these techniques is to improve performance by avoiding an exhaustive exploration of the state space. Our framework focuses on probabilistic reachability, which is a core property for verification, and is illustrated through two distinct instantiations. The first assumes that full knowledge of the MDP is available, and performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP, and yields probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. The latter is the first extension of statistical model checking for unbounded properties in MDPs." . "17"^^ . . . "Verification of Markov Decision Processes using Learning Algorithms" . "Heidelberg Dordrecht London New York" . "2014-01-01+01:00"^^ . .