In this note attention is focused on finding policies optimizing risk-sensitive optimality criteria in Markov decision chains. To this end we assume that the total reward generated by the Markov process is evaluated by an exponential utility function with a given risk-sensitive coefficient. The ratio of the first two moments of the reward depends on the value of the risk-sensitive coefficient; if the risk-sensitive coefficient is equal to zero we speak of risk-neutral models. Observe that the first moment of the generated reward corresponds to the expectation of the total reward and the second central moment to the variance of the reward. For communicating Markov processes, and for some specific classes of unichain processes, the long-run risk-sensitive average reward is independent of the starting state. We present necessary and sufficient conditions for the existence of optimal policies independent of the starting state in unichain models and characterize the class of average risk-sensitive optimal policies.
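To illustrate the moment statement above (the notation here is ours and need not coincide with that of the note), recall that for a random total reward $\xi$ and risk-sensitive coefficient $\gamma \neq 0$, the exponential utility $u^{\gamma}(\xi) = \operatorname{sign}(\gamma)\, e^{\gamma \xi}$ has certainty equivalent
\[
  Z^{\gamma}(\xi) \;=\; \frac{1}{\gamma}\,\ln \mathbb{E}\bigl[e^{\gamma \xi}\bigr]
  \;=\; \mathbb{E}[\xi] \;+\; \frac{\gamma}{2}\,\operatorname{var}(\xi) \;+\; O(\gamma^{2}),
\]
so the expectation and the variance of the reward enter with weights governed by $\gamma$, and letting $\gamma \to 0$ recovers the risk-neutral criterion $\mathbb{E}[\xi]$.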
controlled Markov processes, finite state space, asymptotic behavior, risk-sensitive average optimality
90C40, 93E20