Kybernetika 61 no. 6, 762-788, 2025

ET-DMGing: Event-triggered distributed momentum-gradient tracking optimization algorithm for multi-agent systems

Aijuan Wang, Xingmeng Tan and Hai Nan

DOI: 10.14736/kyb-2025-6-0762

Abstract:

This paper proposes an event-triggered distributed momentum-gradient tracking optimization algorithm (ET-DMGing) for the collaborative optimization problem of minimizing the sum of all agents' local objective functions in a multi-agent system. First, gradient tracking is employed to precisely track the average momentum gradient used to update the agent states, which effectively reduces the time agents spend in flat and oscillatory regions. By leveraging the momentum accumulation effect, the proposed ET-DMGing exhibits enhanced directional consistency and dynamic stability during optimization and achieves a linear convergence rate. Second, a new event-triggered condition is proposed that considers the dual metrics of state error and momentum-gradient error. This allows a more comprehensive assessment of each agent's triggering needs, avoids the instability caused by triggering on a single metric, and refines the triggering threshold, thereby reducing the communication frequency among agents. Third, we rigorously prove, using the small-gain theorem, that ET-DMGing converges to the global optimum at a linear rate, and we derive explicit convergence conditions for parameter selection, including the step size and the event-triggered weighting coefficients. Finally, numerical simulations verify the effectiveness and accuracy of the theoretical results.
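
The abstract does not spell out the exact ET-DMGing recursion, so the Python sketch below only illustrates, under stated assumptions, the ingredients it describes: each agent keeps an exponential moving average of its local gradient (a momentum gradient), a gradient-tracking variable estimates the network-average momentum gradient, and an event-triggered rule rebroadcasts an agent's state and tracker only when the combined state error and momentum-gradient error exceeds a decaying threshold. The least-squares costs, ring topology, parameter values (alpha, beta, sigma, rho), and the particular trigger form are illustrative assumptions, not the update or condition analyzed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Local least-squares costs f_i(x) = 0.5 * ||A_i x - b_i||^2 for n agents (illustrative).
n, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [rng.standard_normal(10) for _ in range(n)]

def grad(i, z):
    return A[i].T @ (A[i] @ z - b[i])

# Doubly stochastic mixing matrix for a ring graph (lazy uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

alpha, beta = 0.01, 0.9      # step size and momentum coefficient (assumed values)
sigma, rho = 0.05, 0.97      # trigger threshold weight and decay rate (assumed values)

x = np.zeros((n, d))                                  # agent states
m = np.array([grad(i, x[i]) for i in range(n)])       # momentum (EMA) of local gradients
y = m.copy()                                          # trackers of the average momentum gradient
x_hat, y_hat = x.copy(), y.copy()                     # most recently broadcast values
triggers, iters = 0, 3000

for k in range(iters):
    # Event trigger: rebroadcast only when the combined state error and
    # momentum-gradient tracker error exceeds a geometrically decaying threshold.
    for i in range(n):
        err = np.linalg.norm(x[i] - x_hat[i]) + np.linalg.norm(y[i] - y_hat[i])
        if err > sigma * rho ** k:
            x_hat[i], y_hat[i] = x[i], y[i]
            triggers += 1
    m_new = beta * m + (1 - beta) * np.array([grad(i, x[i]) for i in range(n)])
    x = W @ x_hat - alpha * y        # consensus on broadcast states + step along tracked direction
    y = W @ y_hat + m_new - m        # gradient tracking of the network-average momentum gradient
    m = m_new

# Centralized solution of the summed cost, for reference.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus error :", np.linalg.norm(x - x.mean(axis=0)))
print("optimality gap  :", np.linalg.norm(x.mean(axis=0) - x_star))
print("broadcast ratio :", triggers / (iters * n))

The printed metrics report the agents' consensus error, the gap between their average state and the centralized least-squares solution, and the fraction of agent-iterations at which a broadcast actually occurred.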

Keywords:

multi-agent systems, distributed optimization, event-triggered mechanism, gradient tracking

Classification:

68W15, 93D05, 93D21

References:

  1. G. Carnevale, F. Farina, I. Notarnicola and G. Notarstefano: GTAdam: Gradient tracking with adaptive momentum for distributed online optimization. IEEE Trans. Control Network Systems 10 (2022), 3, 1436-1448.   DOI:10.1109/TCNS.2022.3232519
  2. W. Chen and W. Ren: Event-triggered zero-gradient-sum distributed consensus optimization over directed networks. Automatica 65 (2016), 90-97.   DOI:10.1016/j.automatica.2015.11.015
  3. C. Chen, L. Shen, W. Liu and Z.-Q. Luo: Efficient-Adam: Communication-Efficient Distributed Adam. IEEE Trans. Signal Process. (2023).   DOI:10.1109/tsp.2023.3309461
  4. A. Defazio, F. Bach and S. Lacoste-Julien: SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inform. Process. Systems 27 (2014).   CrossRef
  5. L. Gao, S. Deng, H. Li and Ch. Li: An event-triggered approach for gradient tracking in consensus-based distributed optimization. IEEE Trans. Network Sci. Engrg. 9 (2021), 2, 510-523.   DOI:10.1109/TNSE.2021.3122927
  6. H.-Ch. Huang and J. Lee: A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. 60 (2011), 4, 2055-2060.   DOI:10.1109/TSP.2011.2181505
  7. K. Huang, S. Pu and A. Nedić: An accelerated distributed stochastic gradient method with momentum. arXiv preprint arXiv:2402.09714 (2024).   CrossRef
  8. X. Jiang, X. Zeng, J. Sun and J. Chen: Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization. IEEE Trans. Neural Networks Learning Systems 34 (2022), 9, 5310-5321.   DOI:10.1109/TNNLS.2022.3170944
  9. H. Lee, S. H. Lee and T. Q. S. Quek: Deep learning for distributed optimization: Applications to wireless resource management. IEEE J. Select. Areas Commun. 37 (2019), 10, 2251-2266.   DOI:10.1109/JSAC.2019.2933890
  10. H.-S. Lee, S.-E. Kim, J.-W. Lee and W.-J. Song: A variable step-size diffusion LMS algorithm for distributed estimation. IEEE Trans. Signal Process. 63 (2015), 7, 1808-1820.   DOI:10.1109/TSP.2015.2401533
  11. A. Lederer, Z. Yang, J. Jiao and S. Hirche: Cooperative control of uncertain multiagent systems via distributed Gaussian processes. IEEE Trans. Automat. Control 68 (2022), 5, 3091-3098.   DOI:10.1109/TAC.2022.3205424
  12. Q. Li, Y. Liao, K. Wu, L. Zhang, J. Lin, M. Chen, J. M. Guerrero and D. Abbott: Parallel and distributed optimization method with constraint decomposition for energy management of microgrids. IEEE Trans. Smart Grid 12 (2021), 6, 4627-4640.   DOI:10.1109/TSG.2021.3097047
  13. H. Li, X. Liao, G. Chen, D. J. Hill, Z. Dong and T. Huang: Event-triggered asynchronous intermittent communication strategy for synchronization in complex dynamical networks. Neural Networks 66 (2015), 1-10.   DOI:10.1016/j.neunet.2015.01.006
  14. H. Li, S. Liu, Y. Ch. Soh, L. Xie and D. Xia: Achieving linear convergence for distributed optimization with Zeno-like-free event-triggered communication scheme. In: Proc. 29th Chinese Control and Decision Conference 2017, pp. 6224-6229.   CrossRef
  15. H. Li, L. Zheng, Z. Wang, Y. Yan, L. Feng and J. Guo: S-DIGing: A stochastic gradient tracking algorithm for distributed optimization. IEEE Trans. Emerging Topics Comput. Intell. 6 (2020), 1, 53-65.   DOI:10.1109/tetci.2020.3017242
  16. J. Li and H. Su: Gradient tracking: A unified approach to smooth distributed optimization. arXiv preprint arXiv:2202.09804 (2022).   CrossRef
  17. X. Liu, Ch. Miao, G. Fiumara and P. De Meo: Information propagation prediction based on spatial-temporal attention and heterogeneous graph convolutional networks. IEEE Trans. Comput. Social Systems 11 (2024), 1, 945-958.   DOI:10.1109/TCSS.2023.3244573
  18. Ch. Liu, X. Dou, Y. Fan and S. Cheng: A penalty ADMM with quantized communication for distributed optimization over multi-agent systems. Kybernetika 59 (2023), 3, 392-417.   DOI:10.14736/kyb-2023-3-0392
  19. S. Liu, L. Xie and D. E. Quevedo: Event-triggered quantized communication-based distributed convex optimization. IEEE Trans. Control Network Systems 5 (2016), 1, 167-178.   DOI:10.1109/tcns.2016.2585305
  20. K. Lu and Q. Zhu: Distributed algorithms involving fixed step size for mixed equilibrium problems with multiple set constraints. IEEE Trans. Neural Networks Learn. Systems 32 (2020), 11, 5254-5260.   DOI:10.1109/TNNLS.2020.3027288
  21. G. Morral, P. Bianchi and G. Fort: Success and failure of adaptation-diffusion algorithms with decaying step size in multiagent networks. IEEE Trans. Signal Process. 65 (2017), 11, 2798-2813.   DOI:10.1109/TSP.2017.2666771
  22. A. Nedić, A. Olshevsky and W. Shi: Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 27 (2017), 4, 2597-2633.   DOI:10.1137/16M1084316
  23. N. Qian: On the momentum term in gradient descent learning algorithms. Neural Networks 12 (1999), 1, 145-151.   DOI:10.1016/S0893-6080(98)00116-6
  24. G. Qu and N. Li: Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Network Systems 5 (2017), 3, 1245-1260.   DOI:10.1109/TCNS.2017.2698261
  25. M. Rabbat and R. Nowak: Distributed optimization in sensor networks. In: Proc. 3rd International Symposium on Information Processing in Sensor Networks 2004, pp. 20-27.   CrossRef
  26. Z. Shen and H. Yin: A distributed routing-aware deployment algorithm for underwater sensor networks. IEEE Sensors J. (2024).   DOI:10.1109/jsen.2024.3396145
  27. W. Shi, Q. Ling, G. Wu and W. Yin: Extra: An exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25 (2015), 2, 944-966.   DOI:10.1137/14096668X
  28. G. Tychogiorgos, A. Gkelias and K. K. Leung: A non-convex distributed optimization framework and its application to wireless ad-hoc networks. IEEE Trans. Wireless Commun. 12 (2013), 9, 4286-4296.   DOI:10.1109/TWC.2013.072313.120739
  29. R. Tron, J. Thomas, G. Loianno, K. Daniilidis and V. Kumar: A distributed optimization framework for localization and formation control: Applications to vision-based measurements. IEEE Control Systems Magazine 36 (2016), 4, 22-44.   DOI:10.1109/MCS.2016.2558401
  30. Z. Tu and S. Liang: Distributed dual averaging algorithm for multi-agent optimization with coupled constraints. Kybernetika 60 (2024), 4, 427-445.   DOI:10.14736/kyb-2024-4-0427
  31. T. Yang, X. Yi, J. Wu, Y. Yuan, D. Wu, Z. Meng, Y. Hong, H. Wang, Z. Lin and K. H. Johansson: A survey of distributed optimization. Ann. Rev. Control 47 (2019), 278-305.   DOI:10.1016/j.arcontrol.2019.05.006
  32. Q. Yang, W.-N. Chen, T. Gu, H. Zhang, H. Yuan, S. Kwong and J. Zhang: A distributed swarm optimizer with adaptive communication for large-scale optimization. IEEE Trans. Cybernetics 50 (2019), 7, 3393-3408.   DOI:10.1109/TCYB.2019.2904543
  33. Y. Yuan, W. He, W. Du, Y.-Ch. Tian, Q.-L. Han and F. Qian: Distributed gradient tracking for differentially private multi-agent optimization with a dynamic event-triggered mechanism. IEEE Trans. Systems Man Cybernet.: Systems (2024).   DOI:10.1109/tsmc.2024.3357253
  34. Y. Wang and S. Cheng: A stochastic mirror-descent algorithm for solving $AXB=C$ over a multi-agent system. Kybernetika 57 (2021), 2, 256-271.   DOI:10.14736/kyb-2021-2-0256