Kybernetika 57 no. 1, 1-14, 2021

Bi-personal stochastic transient Markov games with stopping times and total reward criterion

Martínez-Cortés, Victor Manuel
DOI: 10.14736/kyb-2021-1-0001

Abstract:

The article is devoted to a class of bi-personal (two-player), zero-sum Markov games evolving in discrete time on transient Markov reward chains. At each decision time the second player can stop the system by paying a terminal reward to the first player. If the system is not stopped, the first player selects a decision and two things happen: the Markov chain moves to the next state according to the known transition law, and the second player pays a reward to the first player. The first player (resp. the second player) tries to maximize (resp. minimize) his total expected reward (resp. cost). Observe that if the second player is a dummy, the problem reduces to finding an optimal policy of a transient Markov reward chain. Contraction properties of the transient model make it possible to apply the Banach fixed point theorem and establish a Nash equilibrium. The obtained results are illustrated by two numerical examples.
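The fixed-point argument mentioned above can be turned into a simple value-iteration scheme: the second player compares the terminal reward with the cost of continuing, the first player maximizes the continuation payoff, and transience makes the resulting operator a contraction, so the iteration converges geometrically. The following Python sketch illustrates one natural way to compute the game value under these assumptions; the function value_iteration and the arrays P, r, s are illustrative names, not taken from the paper.

import numpy as np

def value_iteration(P, r, s, tol=1e-10, max_iter=10_000):
    """
    Minimal sketch of value iteration for the stopping game described above
    (illustrative data layout, not the paper's notation).

    P[a, i, j] : substochastic transition probabilities under player 1's action a
                 (transience: probability mass may leave the system, rows can sum to < 1)
    r[i, a]    : reward paid by player 2 to player 1 when the chain continues
    s[i]       : terminal reward paid by player 2 if he stops the system in state i
    Returns an approximation of the value vector of the zero-sum game.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # player 1 maximizes the continuation payoff over his actions ...
        continue_value = np.max(
            [r[:, a] + P[a] @ V for a in range(n_actions)], axis=0
        )
        # ... player 2 chooses the cheaper of stopping and continuing
        V_new = np.minimum(s, continue_value)
        if np.max(np.abs(V_new - V)) < tol:  # contraction => geometric convergence
            return V_new
        V = V_new
    return V

# tiny hypothetical instance: 2 states, 1 action, each row leaks 0.1 probability per step
P = np.array([[[0.5, 0.4], [0.3, 0.6]]])   # shape (1, 2, 2), substochastic rows
r = np.array([[1.0], [2.0]])               # continuation rewards per state and action
s = np.array([8.0, 15.0])                  # terminal rewards if player 2 stops
print(value_iteration(P, r, s))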

Keywords:

two-person Markov games, stopping times, stopping times in transient Markov decision chains, transient and communicating Markov chains

Classification:

91A50, 91A05

References:

  1. R. B. Ash: Real Analysis and Probability. Academic Press, 1972.
  2. R. Cavazos-Cadena and D. Hernández-Hernández: Nash equilibria in a class of Markov stopping games. Kybernetika 48 (2012), 1027-1044.
  3. R. Cavazos-Cadena and R. Montes-de-Oca: Nearly optimal policies in risk-sensitive positive dynamic programming on discrete spaces. Math. Methods Oper. Res. 27 (2000), 137-167.   DOI:10.4064/am-27-2-167-185
  4. J. A. Filar and O. J. Vrieze: Competitive Markov Decision Processes. Springer Verlag, Berlin 1996.   DOI:10.1007/978-1-4612-4054-9
  5. A. Granas and J. Dugundji: Fixed Point Theory. Springer-Verlag, New York 2003.
  6. K. Hinderer: Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. Springer-Verlag, Berlin 1970.   DOI:10.1007/978-3-642-46229-0
  7. R. A. Howard and J. E. Matheson: Risk-sensitive Markov decision processes. Management Sci. 18 (1972), 356-369.   DOI:10.1287/mnsc.18.7.356
  8. V. N. Kolokoltsov and O. A. Malafayev: Understanding Game Theory. World Scientific, Singapore 2010.   DOI:10.1142/7564
  9. J. Nash: Equilibrium points in n-person games. Proc. National Acad. Sci. United States of America 36 (1950), 48-49.   DOI:10.1073/pnas.36.1.48
  10. M. L. Puterman: Markov Decision Processes - Discrete Stochastic Dynamic Programming. Wiley, New York 1994.   DOI:10.1002/9780470316887
  11. T. E. S. Raghavan, S. H. Tijs and O. J. Vrieze: On stochastic games with additive reward and transition structure. J. Optim. Theory Appl. 47 (1985), 451-464.   DOI:10.1007/BF00942191
  12. S. Ross: Introduction to Probability Models. Ninth edition. Elsevier 2007.
  13. L. S. Shapley: Stochastic games. Proc. National Acad. Sci. United States of America 39 (1953), 1095-1100.   DOI:10.1073/pnas.39.10.1095
  14. A. Shiryaev: Optimal Stopping Rules. Springer, New York 1978.
  15. K. Sladký and V. M. Martínez-Cortés: Risk-sensitive optimality in Markov games. In: Proc. 35th International Conference Mathematical Methods in Economics 2017 (P. Pražák, ed.). Univ. Hradec Králové 2017, pp. 684-689.
  16. L. C. Thomas: Connectedness conditions used in finite state Markov decision processes. J. Math. Anal. Appl. 68 (1979), 548-556.   DOI:10.1016/0022-247X(79)90135-5
  17. L. C. Thomas: Connectedness conditions for denumerable state Markov decision processes. In: Recent Developments in Markov Decision Processes (R. Hartley, L. C. Thomas and D. J. White, eds.), Academic Press, New York 1980, pp. 181-204.
  18. F. Thuijsman: Optimality and Equilibria in Stochastic Games. Mathematical Centre Tracts, Amsterdam 1992.
  19. J. Van der Wal: Discounted Markov games: successive approximations and stopping times. Int. J. Game Theory 6 (1977), 11-22.   DOI:10.1007/BF01770870
  20. J. Van der Wal: Stochastic Dynamic Programming. Mathematical Centre Tracts, Amsterdam 1981.
  21. O. J. Vrieze: Stochastic Games with Finite State and Action Spaces. Mathematical Centre Tracts, Amsterdam 1987.
  22. L. Zachrisson: Markov games. In: Advances in Game Theory (M. Dresher, L. S. Shapley and A. W. Tucker, eds.), Princeton University Press 1964.   DOI:10.1515/9781400882014-014
  23. W. H. M. Zijm: Nonnegative Matrices in Dynamic Programming. Mathematisch Centrum, Amsterdam 1983.