Point estimators based on the minimization of information-theoretic divergences between the empirical and a hypothetical distribution run into a problem when working with continuous families, which are measure-theoretically orthogonal to the family of empirical distributions. In this case the $\phi$-divergence is always equal to its upper bound, and the minimum $\phi$-divergence estimates are trivial. Broniatowski and Vajda \cite{IV09} proposed several modifications of the minimum divergence rule to resolve this problem. We examine these new estimation methods with respect to consistency, robustness and efficiency through an extended simulation study. We focus on the well-known family of power divergences parametrized by $\alpha \in \mathbb{R}$ in the Gaussian model, and we perform a comparative computer simulation on several randomly selected contaminated and uncontaminated data sets, for different sample sizes and different $\phi$-divergence parameters.
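For concreteness, one common convention for this family (assumed here for illustration; it may differ by reparametrization from the one used in \cite{IV09}) defines the power divergence of distributions $P$ and $Q$ with densities $p$ and $q$ with respect to a dominating measure $\mu$ through the convex function
\[
\phi_\alpha(t) = \frac{t^\alpha - \alpha t + \alpha - 1}{\alpha(\alpha - 1)}, \qquad \alpha \notin \{0,1\},
\]
via $D_{\phi_\alpha}(P,Q) = \int \phi_\alpha\!\left(p/q\right) q \,\mathrm{d}\mu$, where the limiting cases $\alpha \to 1$ and $\alpha \to 0$ recover the Kullback--Leibler divergence and its reversal, respectively.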
robustness, minimum $\phi$-divergence estimation, subdivergence, superdivergence, PC simulation, relative efficiency
62B05, 62H30