Kybernetika 55 no. 3, 495-517, 2019

Solutions of semi-Markov control models with recursive discount rates and approximation by $\epsilon-$optimal policies

Yofre H. García and Juan González-Hernández

DOI: 10.14736/kyb-2019-3-0495

Abstract:

This paper studies a class of discrete-time discounted semi-Markov control models on Borel spaces. We assume possibly unbounded costs and a non-stationary exponential form of the discount factor that depends on a rate, called the discount rate. Given an initial discount rate, its evolution in subsequent steps depends on both the previous discount rate and the sojourn time of the system in the current state. The new results provided here are the existence and the approximation of optimal policies for this class of discounted semi-Markov control models with non-stationary rates, for both finite and infinite horizons. Under regularity conditions on the sojourn time distributions and measurable selector conditions, we show the validity of the dynamic programming algorithm for the finite horizon case. By the convergence in finitely many steps to the value function, we guarantee the existence of non-stationary optimal policies for the infinite horizon case and approximate them by non-stationary $\epsilon-$optimal policies. We illustrate our results with a discounted semi-Markov linear-quadratic model in which the evolution of the discount rate follows an appropriate type of stochastic differential equation.
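For orientation, the criterion described in the abstract can be sketched as follows; the symbols used here ($\delta_k$ for the sojourn times, $G$ for the rate recursion) are illustrative notation inferred from the abstract, not necessarily the paper's own. Under a policy $\pi$, initial state $x$ and initial discount rate $r_0=r$, a recursively discounted cost of this type takes the schematic form

$$V(\pi,x,r)=\mathbb{E}_x^{\pi}\left[\sum_{n=0}^{\infty}\exp\Big(-\sum_{k=0}^{n-1} r_k\,\delta_{k+1}\Big)\,c(x_n,a_n)\right],\qquad r_{k+1}=G(r_k,\delta_{k+1}),$$

where $c$ is the one-stage cost, $\delta_{k+1}$ is the sojourn time of the system in the state $x_k$, and $G$ describes how the next discount rate is obtained from the previous rate and the sojourn time (for $n=0$ the inner sum is empty, so the first cost is undiscounted).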

Keywords:

dynamic programming method, optimal stochastic control, semi-Markov processes

Classification:

93E20, 49L20

References:

  1. L. Arnold: Stochastic Differential Equations. John Wiley and Sons, New York 1973.   CrossRef
  2. R. Ash and C. Doléans-Dade: Probability and Measure Theory. Academic Press, San Diego, 2000.   CrossRef
  3. R. Bhattacharya and M. Majumdar: Controlled semi-Markov models - the discounted case. J. Statist. Plann. Inference 21 (1989), 3, 365-381.   DOI:10.1016/0378-3758(89)90053-0
  4. D. Bertsekas and S. Shreve: Stochastic Optimal Control: The Discrete Time Case. Athena Scientific, Belmont, Massachusetts 1996.   CrossRef
  5. D. Blackwell: Discounted dynamic programming. Ann. Math. Statist. 36 (1965), 226-235.   DOI:10.1214/aoms/1177700285
  6. J. De Cani: A dynamic programming algorithm for embedded Markov chains when the planning horizon is at infinity. Management Sci. 10 (1963), 716-733.   DOI:10.1287/mnsc.10.4.716
  7. R. Drenyovszki, L. Kovács, K. Tornai, A. Oláh and I. Pintér: Bottom-up modeling of domestic appliances with Markov chains and semi-Markov processes. Kybernetika 53 (2017), 6, 1100-1117.   DOI:10.14736/kyb-2017-6-1100
  8. R. Dekker and A. Hordijk: Denumerable semi-Markov decision chains with small interest rates. Ann. Oper. Res. 28 (1991), 185-212.   DOI:10.1007/bf02055581
  9. Y. García and J. González-Hernández: Discrete-time Markov control process with recursive discounted rates. Kybernetika 52 (2016), 403-426.   DOI:10.14736/kyb-2016-3-0403
  10. J. González-Hernández, R. López-Martínez and J. Pérez-Hernández: Markov control processes with randomized discounted cost. Math. Meth. Oper. Res. 65 (2006), 27-44.   DOI:10.1007/s00186-006-0092-2
  11. J. González-Hernández and C. Villarreal-Rodríguez: Optimal solutions of constrained discounted semi-Markov control problems.   CrossRef
  12. O. Hernández-Lerma and J. Lasserre: Discrete-Time Markov Control Processes. Basic Optimality Criteria. Springer-Verlag, New York 1996.   DOI:10.1007/978-1-4612-0729-0_1
  13. Q. Hu and W. Yue: Markov Decision Processes with Their Applications. Advances in Mechanics and Mathematics 14, Springer-Verlag, New York 2008.   CrossRef
  14. X. Huang and Y. Huang: Mean-variance optimality for semi-Markov decision processes under first passage criteria. Kybernetika 53 (2017), 1, 59-81.   DOI:10.14736/kyb-2017-1-0059
  15. R. Howard: Semi-Markovian decision processes. Bull. Int. Statist. Inst. 40 (1963), 2, 625-652.   CrossRef
  16. W. Jewell: Markov-renewal programming I: Formulation, finite return models; II: Infinite return models, example. Oper. Res. 11 (1963), 938-971.   DOI:10.1287/opre.11.6.938
  17. F. Luque-Vázquez and O. Hernández-Lerma: Semi-Markov control models with average costs. Appl. Math. 26 (1999), 315-331.   DOI:10.4064/am-26-3-315-331
  18. F. Luque-Vásquez and J. A. Minjárez-Sosa: Semi-Markov control processes with unknown holding times distribution under a discounted criterion. Math. Methods Oper. Res. 61 (2005), 455-468.   DOI:10.1007/s001860400406
  19. F. Luque-Vásquez, J. Minjárez-Sosa and L. Rosas: Semi-Markov control processes with unknown holding times distribution under an average cost criterion. Appl. Math. Optim. 61 (2010), 317-336.   DOI:10.1007/s00245-009-9086-9
  20. P. Schweitzer: Perturbation Theory and Markovian Decision Processes. Ph.D. Dissertation, Massachusetts Institute of Technology, 1965.   CrossRef
  21. O. Vasicek: An equilibrium characterization of the term structure. J. Financ. Econom. 5 (1977), 177-188.   DOI:10.1016/0304-405x(77)90016-2
  22. O. Vega-Amaya: Average optimality in semi-Markov control models on Borel spaces: unbounded costs and controls. Bol. Soc. Mat. Mexicana 38 (1997), 2, 47-60.   CrossRef
  23. R. Zagst: The effect of information in separable Bayesian semi-Markov control models and its application to investment planning. ZOR - Math. Methods Oper. Res. 41 (1995), 277-288.   CrossRef