In this paper we give a {\it new} set of verifiable conditions for the existence of average optimal stationary policies in discrete-time Markov decision processes with Borel spaces and {\it unbounded} reward/cost functions. More precisely, our conditions consist only of a Lyapunov-type condition and the common continuity-compactness conditions. These conditions are imposed on the {\it primitive data} of the Markov decision process model and are thus easy to verify. We also give two examples in which all of our conditions are satisfied, while some of the conditions in the related literature fail to hold.
discrete-time Markov decision processes, average reward criterion, optimal stationary policy, Lyapunov-type condition, unbounded reward/cost function
90C40, 93E20