In this paper, we study the problem of finding deterministic (also known as feedback or closed-loop) Markov Nash equilibria for a class of discrete-time stochastic games. To establish our results, we develop a potential game approach based on the dynamic programming technique. The potential stochastic games we identify have Borel state and action spaces and possibly unbounded, nondifferentiable cost-per-stage functions. In particular, our framework covers team (or coordination) stochastic games and stochastic games with an action-independent transition law.
Keywords: optimal control, dynamic programming, stochastic games, potential approach
MSC codes: 91A50, 91A25, 93E20, 91A14, 91A10