Kybernetika 53 no. 1, 82-98, 2017

Markov decision processes with time-varying discount factors and random horizon

Rocio Ilhuicatzi-Roldán, Hugo Cruz-Suárez and Selene Chávez-Rodríguez

DOI: 10.14736/kyb-2017-1-0082

Abstract:

This paper deals with Markov decision processes in which the optimal control problem is to minimize the expected total discounted cost with a non-constant discount factor. The discount factor is time-varying and may depend on the state and the action. Moreover, the horizon of the optimization problem is given by a discrete random variable, that is, a random horizon is assumed. Under general conditions on the Markov control model and using the dynamic programming approach, an optimality equation is obtained for both cases, namely, finite support and infinite support of the random horizon. The results are illustrated by two examples, one of them related to optimal replacement.
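For concreteness, the type of criterion considered can be sketched as follows (a minimal sketch under assumed notation; the paper's exact symbols and hypotheses may differ). For a policy $\pi$, an initial state $x$, a one-stage cost $c$, time-varying discount factors $\alpha_n$ that may depend on the current state and action, and a random horizon $\tau$, the expected total discounted cost is
$$
V(\pi,x)=E_x^{\pi}\left[\sum_{n=0}^{\tau-1}\left(\prod_{k=0}^{n-1}\alpha_k(x_k,a_k)\right)c(x_n,a_n)\right],
$$
where the empty product (for $n=0$) equals $1$. If $\tau$ has finite support, say $P(\tau\le N)=1$, the associated optimality equation reduces to a finite-horizon backward recursion; the infinite-support case requires an additional limiting argument, and both situations are treated in the paper.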

Keywords:

Markov decision process, dynamic programming, varying discount factor, random horizon

Classification:

93E20, 90C40, 90C39

References:

  1. Y. Carmon and A. Shwartz: Markov decision processes with exponentially representable discounting. Oper. Res. Lett. 37 (2009), 51-55.   DOI:10.1016/j.orl.2008.10.005
  2. X. Chen and X. Yang: Optimal consumption and investment problem with random horizon in a BMAP model. Insurance Math. Econom. 61 (2015), 197-205.   DOI:10.1016/j.insmatheco.2015.01.004
  3. H. Cruz-Suárez, R. Ilhuicatzi-Roldán and R. Montes-de-Oca: Markov decision processes on Borel spaces with total cost and random horizon. J. Optim. Theory Appl. 162 (2014), 329-346.   DOI:10.1007/s10957-012-0262-8
  4. E. Della Vecchia, S. Di Marco and F. Vidal: Dynamic programming for variable discounted Markov decision problems. In: Jornadas Argentinas de Informática e Investigación Operativa (43JAIIO) - XII Simposio Argentino de Investigación Operativa (SIO), Buenos Aires 2014, pp. 50-62.   CrossRef
  5. E. Feinberg and A. Shwartz: Constrained dynamic programming with two discount factors: applications and an algorithm. IEEE Trans. Automat. Control 44 (1999), 628-631.   DOI:10.1109/9.751365
  6. E. Feinberg and A. Shwartz: Markov decision models with weighted discounted criteria. Math. Oper. Res. 19 (1994), 152-168.   DOI:10.1287/moor.19.1.152
  7. Y. H. García and J. González-Hernández: Discrete-time Markov control process with recursive discounted rates. Kybernetika 52 (2016), 403-426.   DOI:10.14736/kyb-2016-3-0403
  8. J. González-Hernández, R. R. López-Martínez and J. A. Minjarez-Sosa: Adaptive policies for stochastic systems under a randomized discounted criterion. Bol. Soc. Mat. Mex. 14 (2008), 149-163.   CrossRef
  9. J. González-Hernández, R. R. López-Martínez and J. A. Minjarez-Sosa: Approximation, estimation and control of stochastic systems under a randomized discounted cost criterion. Kybernetika 45 (2009), 737-754.   CrossRef
  10. J. González-Hernández, R. R. López-Martínez, J. A. Minjarez-Sosa and J. A. Gabriel-Arguelles: Constrained Markov control processes with randomized discounted cost criteria: occupation measures and extremal points. Risk and Decision Analysis 4 (2013), 163-176.   CrossRef
  11. J. González-Hernández, R. R. López-Martínez, J. A. Minjarez-Sosa and J. A. Gabriel-Arguelles: Constrained Markov control processes with randomized discounted rate: infinite linear programming approach. Optimal Control Appl. Methods 35 (2014), 575-591.   DOI:10.1002/oca.2089
  12. J. González-Hernández, R. R. López-Martínez and J. R. Pérez-Hernández: Markov control processes with randomized discounted cost. Math. Methods Oper. Res. 65 (2007), 27-44.   DOI:10.1007/s00186-006-0092-2
  13. X. Guo, A. Hernández-del-Valle and O. Hernández-Lerma: First passage problems for nonstationary discrete-time stochastic control systems. Eur. J. Control 18 (2012), 528-538.   DOI:10.3166/ejc.18.528-538
  14. O. Hernández-Lerma and J. B. Lasserre: Discrete-time Markov Control Processes: Basic Optimality Criteria. Springer-Verlag, New York 1996.   DOI:10.1007/978-1-4612-0729-0
  15. K. Hinderer: Foundations of non-stationary dynamic programming with discrete time parameter. In: Lecture Notes in Operations Research (M. Beckmann and H. Künzi, eds.), Springer-Verlag 33, Zürich 1970.   DOI:10.1007/978-3-642-46229-0
  16. R. Ilhuicatzi-Roldán and H. Cruz-Suárez: Optimal replacement in a system of $n$-machines with random horizon. Proyecciones 31 (2012), 219-233.   DOI:10.4067/s0716-09172012000300003
  17. J. A. Minjarez-Sosa: Markov control models with unknown random state-action-dependent discounted factors. TOP 23 (2015), 743-772.   DOI:10.1007/s11750-015-0360-5
  18. M. L. Puterman: Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, New York 1994.   CrossRef
  19. M. Schäl: Conditions for optimality in dynamic programming and for the limit of $n$-stage optimal policies to be optimal. Probab. Theory Related Fields 32 (1975), 179-196.   DOI:10.1007/bf00532612
  20. Q. Wei and X. Guo: Markov decision processes with state-dependent discount factors and unbounded rewards/costs. Oper. Res. Lett. 39 (2011), 369-374.   DOI:10.1016/j.orl.2011.06.014
  21. Q. Wei and X. Guo: Semi-Markov decision processes with variance minimization criterion. 4OR, 13 (2015), 59-79.   DOI:10.1007/s10288-014-0267-2
  22. X. Wu and X. Guo: First passage optimality and variance minimisation of Markov decision processes with varying discount factors. J. Appl. Probab. 52 (2015), 441-456.   DOI:10.1017/s0021900200012560
  23. X. Wu, X. Zou and X. Guo: First passage Markov decision processes with constraints and varying discount factors. Front. Math. China 10 (2015), 1005-1023.   DOI:10.1007/s11464-015-0479-6
  24. X. Wu and J. Zhang: An application to the finite approximation of the first passage models for discrete-time Markov decision processes with varying discount factors. In: Proc. 11th World Congress on Intelligent Control and Automation 2014, pp. 1745-1748.   DOI:10.1109/wcica.2014.7052984
  25. X. Wu and J. Zhang: Finite approximation of the first passage models for discrete-time Markov decision processes with varying discount factors. Discrete Event Dyn. Syst. 26 (2016), 669-683.   DOI:10.1007/s10626-014-0209-3
  26. L. Ye and X. Guo: Continuous-time Markov decision processes with state-dependent discount factors. Acta Appl. Math. 121 (2012), 5-27.   DOI:10.1007/s10440-012-9669-3
  27. Y. Zhang: Convex analytic approach to constrained discounted Markov decision processes with non-constant discount factors. TOP 21 (2013), 378-408.   DOI:10.1007/s11750-011-0186-8