Kybernetika 57 no. 1, 38-45, 2021

A note on the convergence rate in regularized stochastic programming

Evgueni Gordienko and Yury Gryazin

DOI: 10.14736/kyb-2021-1-0038

Abstract:

We deal with a stochastic programming problem that can be inconsistent. To overcome the inconsistency we apply Tikhonov's regularization technique and, using recent results on the convergence rate of empirical measures in the Wasserstein metric, we treat the following two related problems:

1. The choice of regularization parameters that guarantees the convergence of the minimization procedure.
2. Estimation of the rate of convergence in probability.

Considering both light- and heavy-tailed distributions and Lipschitz objective functions (which can be unbounded), we obtain power bounds for the convergence rate.
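To fix ideas, the display below is a minimal sketch of how Tikhonov's regularization is typically combined with the empirical (sample-average) approximation of a possibly inconsistent stochastic program; the notation (objective f, feasible set X, i.i.d. samples ξ_1, ..., ξ_n, parameter sequence δ_n and the squared-norm penalty) is assumed for illustration and need not coincide with the paper's exact formulation.

% Hedged sketch, not the authors' exact formulation: the original problem
% min_{x in X} E f(x, ξ) is replaced by its regularized empirical counterpart.
\[
  \min_{x \in X} \; F(x) := \mathbb{E}\, f(x,\xi)
  \quad\longrightarrow\quad
  \min_{x \in X} \; F_n^{\delta_n}(x) := \frac{1}{n}\sum_{i=1}^{n} f(x,\xi_i) \;+\; \delta_n \|x\|^2 ,
  \qquad \delta_n \downarrow 0 \ \text{as } n \to \infty .
\]

In such a scheme, the rate at which δ_n is driven to zero has to be balanced against the Wasserstein-distance convergence of the empirical measure to the true distribution of ξ; this balance is what yields power-type bounds on the convergence rate in probability.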

Keywords:

convergence rate, stochastic programming problem, Tikhonov's regularization, Lipschitz conditions, Kantorovich metric

Classification:

90C15

References:

  1. E. Boissard and T. Le Gouic: On the mean speed of convergence of empirical and occupation measures in Wasserstein distance. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 50 (2014), 539-563.   DOI:10.1214/12-AIHP517
  2. L. Devroye, L. Györfi and G. Lugosi: A Probabilistic Theory of Pattern Recognition. Springer, New York 1996.   DOI:10.1007/978-1-4612-0711-5
  3. T. Evgeniou, T. Poggio, M. Pontil and A. Verri: Regularization and statistical learning theory for data analysis. Comput. Statist. Data Anal. 38 (2002), 421-432.   DOI:10.1016/S0167-9473(01)00069-X
  4. N. Fournier and A. Guillin: On the rate of convergence in Wasserstein distance of the empirical measure. Probab. Theory Related Fields 162 (2015), 707-738.   DOI:10.1007/s00440-014-0583-7
  5. E. Gordienko: A remark on stability in prediction and filtering problems. Izv. Akad. Nauk SSSR Tekhn. Kibernet. 3 (1978), 202-205.   CrossRef
  6. Y. A. Gryazin, M. V. Klibanov and T. R. Lucas: Numerical solution of a subsurface imaging inverse problem. SIAM J. Appl. Math. 62 (2001), 664-683.   DOI:10.1137/S0036139900377366
  7. V. Kaňková: An approximative solution of stochastic optimization problem. In: Trans. 8th Prague Conf. Academia, Prague 1978, pp. 349-353.   DOI:10.1007/978-94-009-9857-5_33
  8. V. Kaňková: Empirical estimates in stochastic programming via distribution tails. Kybernetika 46 (2010), 459-471.   CrossRef
  9. V. Kaňková and M. Houda: Thin and heavy tails in stochastic programming. Kybernetika 51 (2015), 433-456.   DOI:10.14736/kyb-2015-3-0433
  10. S. T. Rachev and W. Römisch: Quantitative stability and stochastic programming: the method of probabilistic metrics. Math. Oper. Res. 27 (2002), 792-818.   DOI:10.1287/moor.27.4.792.304
  11. S. Shafieezadeh-Abadeh and P. M. Esfahani: Regularization via mass transportation. J. Machine Learning Res. 20 (2019), 1-68.   CrossRef
  12. A. Shapiro and H. Xu: Stochastic mathematical programs with equilibrium constraints, modeling and sample average approximation. Optimization 57 (2008), 395-418.   DOI:10.1080/02331930801954177
  13. A. N. Tikhonov and V. Y. Arsenin: Solutions of Ill-posed Problems. Winston and Sons, Washington DC 1977.   CrossRef
  14. V. N. Vapnik: Statistical Learning Theory. Wiley and Sons, New York 1998.   CrossRef
  15. V. Vapnik and R. Izmailov: Synergy of monotonic rules. J. Machine Learning Res. 17 (2016), 988-999.   CrossRef