Kybernetika 48 no. 5, 1027-1044, 2012

Nash equilibria in a class of Markov stopping games

Rolando Cavazos-Cadena and Daniel Hernández-Hernández

Abstract:

This work concerns a class of discrete-time, zero-sum games with two players and Markov transitions on a denumerable state space. At each decision time player II can stop the system, paying a terminal reward to player I; if the system is not halted, player I selects an action to drive the system and receives a running reward from player II. The performance of a pair of decision strategies is measured by the total expected discounted reward. Under standard continuity-compactness conditions it is shown that this stopping game has a value function, which is characterized by an equilibrium equation, and this result is used to establish the existence of a Nash equilibrium. In addition, the method of successive approximations is used to construct approximate Nash equilibria for the game.
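To illustrate the kind of equilibrium equation and successive-approximation scheme referred to above, the following is a minimal sketch for a finite-state, finite-action instance of such a stopping game. The arrays `P`, `r`, `g` and the discount factor `beta` are hypothetical placeholders rather than data from the paper, and the fixed-point operator used here, V ↦ min{ g, max_a [ r(·, a) + β P(a) V ] }, is one standard formulation of a discounted stopping game in which the stopper (player II) pays and the mover (player I) maximizes; it is not necessarily the authors' exact model.

```python
import numpy as np

def stopping_game_value(P, r, g, beta=0.9, tol=1e-8, max_iter=10_000):
    """Successive approximations for a discounted Markov stopping game (illustrative sketch).

    P[a, x, y] : transition probability from state x to y when player I uses action a
    r[x, a]    : running reward paid by player II to player I at state x under action a
    g[x]       : terminal reward paid by player II if the system is stopped at state x
    The operator T(V)(x) = min( g[x], max_a [ r[x, a] + beta * sum_y P[a, x, y] V[y] ] )
    is a beta-contraction, so the iterates converge geometrically to its fixed point.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    cont = np.zeros((n_states, n_actions))
    for _ in range(max_iter):
        # continuation value for every (state, action) pair
        cont = r + beta * np.einsum("axy,y->xa", P, V)
        V_new = np.minimum(g, cont.max(axis=1))   # player II stops or lets player I maximize
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # approximate stationary strategies read off from the last iterate
    best_action = cont.argmax(axis=1)              # player I's (near-)optimal action per state
    stop_region = g <= cont.max(axis=1)            # states where stopping is best for player II
    return V, best_action, stop_region

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    P = rng.random((n_actions, n_states, n_states))
    P /= P.sum(axis=2, keepdims=True)              # normalize rows into probability vectors
    r = rng.random((n_states, n_actions))
    g = 5.0 * rng.random(n_states)
    V, a_star, stop = stopping_game_value(P, r, g)
    print("value:", np.round(V, 3))
    print("player I actions:", a_star, "stop region:", stop)
```

Because the operator is contractive in the supremum norm, stopping the iteration at a small tolerance yields a value estimate with an explicit error bound, which is what allows approximate Nash equilibria to be read off from a finite number of iterations.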

Keywords:

contractive operator, zero-sum stopping game, equality of the upper and lower value functions, hitting time, stationary strategy

Classification:

91A10, 91A15

References:

  1. E. Altman and A. Shwartz: Constrained Markov Games: Nash Equilibria. In: Annals of Dynamic Games (V. Gaitsgory, J. Filar and K. Mizukami, eds.) 6 (2000), pp. 213-221, Birkhäuser, Boston.
  2. R. Atar and A. Budhiraja: A stochastic differential game for the inhomogeneous infinity-Laplace equation. Ann. Probab. 38 (2010), 498-531.
  3. T. Bielecki, D. Hernández-Hernández and S. R. Pliska: Risk sensitive control of finite state Markov chains in discrete time, with applications to portfolio management. Math. Methods Oper. Res. 50 (1999), 167-188.
  4. E. B. Dynkin: The optimum choice of the instant for stopping a Markov process. Soviet Math. Dokl. 4 (1963), 627-629.
  5. V. N. Kolokoltsov and O. A. Malafeyev: Understanding Game Theory. World Scientific, Singapore 2010.
  6. G. Peskir: On the American option problem. Math. Finance 15 (2005), 169-181.
  7. G. Peskir and A. Shiryaev: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Boston 2010.
  8. M. Puterman: Markov Decision Processes. Wiley, New York 1994.
  9. A. Shiryaev: Optimal Stopping Rules. Springer, New York 1978.
  10. K. Sladký: Ramsey Growth model under uncertainty. In: Proc. 27th International Conference Mathematical Methods in Economics (H. Brozová, ed.), Kostelec nad Černými lesy 2009, pp. 296-300.
  11. K. Sladký: Risk-sensitive Ramsey Growth model. In: Proc. 28th International Conference on Mathematical Methods in Economics (M. Houda and J. Friebelová, eds.), České Budějovice 2010.
  12. L. S. Shapley: Stochastic games. Proc. Nat. Acad. Sci. U.S.A. 39 (1953), 1095-1100.
  13. J. van der Wal: Discounted Markov games: Successive approximation and stopping times. Internat. J. Game Theory 6 (1977), 11-22.
  14. J. van der Wal: Discounted Markov games: Generalized policy iteration method. J. Optim. Theory Appl. 25 (1978), 125-138.
  15. D. J. White: Real applications of Markov decision processes. Interfaces 15 (1985), 73-83.
  16. D. J. White: Further real applications of Markov decision processes. Interfaces 18 (1988), 55-61.
  17. L. E. Zachrisson: Markov games. In: Advances in Game Theory (M. Dresher, L. S. Shapley and A. W. Tucker, eds.), Princeton Univ. Press, Princeton 1964, pp. 211-253.