Kybernetika 53 no. 6, 1086-1099, 2017

Second order optimality in Markov decision chains

Karel Sladký

DOI: 10.14736/kyb-2017-6-1086

Abstract:

The article is devoted to Markov reward chains in a discrete-time setting with finite state spaces. Unfortunately, the usual optimization criteria examined in the literature on Markov decision chains, such as total discounted reward, total reward up to reaching some specific state (the so-called first passage models), or mean (average) reward optimality, may be quite insufficient to characterize the problem from the point of view of a decision maker. It may therefore be preferable, if not necessary, to select more sophisticated criteria that also reflect the variability-risk features of the problem. Perhaps the best known approaches stem from the classical work of Markowitz on mean-variance selection rules, i.e., we optimize the weighted sum of the average or total reward and its variance. The article presents explicit formulae for calculating the variances for transient and discounted models (where the value of the discount factor depends on the current state and the action taken) over finite and infinite time horizons. Analogous results are presented for long run average nondiscounted models, where finding stationary policies that minimize the average variance within the class of policies attaining a given long run average reward is discussed.
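
For a fixed stationary policy the mean and the variance of the total discounted reward can be obtained from two systems of linear equations: the mean v satisfies v = r + beta*P*v, and the second moment U satisfies U(i) = r(i)^2 + 2*beta*r(i)*(Pv)(i) + beta^2*(PU)(i), so the variance is U - v^2. The Python sketch below illustrates this standard computation (in the spirit of Sobel's formula, reference [12]); it is not taken from the article and, for simplicity, assumes a constant discount factor beta rather than the state- and action-dependent factors treated there. The function names and the risk-aversion weight kappa are illustrative.

    import numpy as np

    # Illustrative sketch (not from the article): constant discount factor beta
    def discounted_mean_variance(P, r, beta):
        # Mean and variance of the total discounted reward under a fixed policy.
        #   P    : (n, n) transition matrix of the Markov reward chain
        #   r    : (n,)   one-stage rewards
        #   beta : constant discount factor, 0 < beta < 1
        n = len(r)
        I = np.eye(n)
        # First moment: v = (I - beta P)^{-1} r
        v = np.linalg.solve(I - beta * P, r)
        # Second moment: U = (I - beta^2 P)^{-1} (r^2 + 2 beta r (P v))
        U = np.linalg.solve(I - beta**2 * P, r**2 + 2.0 * beta * r * (P @ v))
        return v, U - v**2          # variance = second moment - squared mean

    # Markowitz-type mean-variance score: expected reward minus a variance penalty
    def mean_variance_score(P, r, beta, kappa):
        v, var = discounted_mean_variance(P, r, beta)
        return v - kappa * var

    # Tiny two-state example (hypothetical data)
    P = np.array([[0.9, 0.1], [0.2, 0.8]])
    r = np.array([1.0, 5.0])
    print(mean_variance_score(P, r, beta=0.95, kappa=0.5))

A policy maximizing such a score trades expected discounted reward against its variability, which is the kind of second order optimality notion the article discusses.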

Keywords:

Markov decision chains, second order optimality, optimality conditions for transient, discounted and average models, policy iterations, value iterations

Classification:

90C40, 93E20

References:

  1. E. A. Feinberg and J. Fei: Inequalities for variances of total discounted costs. J. Appl. Probab. 46 (2009), 1209-1212.   DOI:10.1239/jap/1261670699
  2. F. R. Gantmakher: The Theory of Matrices. Chelsea, New York 1959.
  3. S. C. Jaquette: Markov decision processes with a new optimality criterion: Discrete time. Ann. Statist. 1 (1973), 496-505.   DOI:10.1214/aos/1176342415
  4. P. Mandl: On the variance in controlled Markov chains. Kybernetika 7 (1971), 1-12.
  5. H. Markowitz: Portfolio Selection - Efficient Diversification of Investments. Wiley, New York 1959.
  6. M. L. Puterman: Markov Decision Processes - Discrete Stochastic Dynamic Programming. Wiley, New York 1994.
  7. N. Bäuerle and U. Rieder: Markov Decision Processes with Applications to Finance. Springer-Verlag, Berlin 2011.
  8. R. Righter: Stochastic comparison of discounted rewards. J. Appl. Probab. 48 (2011), 293-294.   DOI:10.1017/S0021900200007786
  9. K. Sladký: On mean reward variance in semi-Markov processes. Math. Meth. Oper. Res. 62 (2005), 387-397.   DOI:10.1007/s00186-005-0039-z
  10. K. Sladký: Risk-sensitive and mean variance optimality in Markov decision processes. Acta Oeconomica Pragensia 7 (2013), 146-161.
  11. K. Sladký: Second order optimality in transient and discounted Markov decision chains. In: Proc. 33rd Internat. Conf. Math. Methods in Economics MME 2015 (D. Martinčík, ed.), University of West Bohemia, Plzeň 2015, pp. 731-736.
  12. M. Sobel: The variance of discounted Markov decision processes. J. Appl. Probab. 19 (1982), 794-802.   DOI:10.2307/3213832
  13. N. M. Van Dijk and K. Sladký: On the total reward variance for continuous-time Markov reward chains. J. Appl. Probab. 43 (2006), 1044-1052.   DOI:10.1017/s0021900200002412
  14. A. F. Veinott, Jr.: Discrete dynamic programming with sensitive discount optimality criteria. Ann. Math. Statist. 40 (1969), 1635-1660.   DOI:10.1214/aoms/1177697379
  15. D. J. White: Mean, variance and probability criteria in finite Markov decision processes: A review. J. Optimizat. Th. Appl. 56 (1988), 1-29.   DOI:10.1007/bf00938524
  16. X. Wu and X. Guo: First passage optimality and variance minimisation of Markov decision processes with varying discount factors. J. Appl. Probab. 52 (2015), 441-456.   DOI:10.1017/s0021900200012560