Research on Q-Learning and Motion Control of Multi-Robot Systems Based on Community-Aware Networks

Authors

  • Kexuan Shen, Texas A&M University, College Station, Texas, USA

DOI:

https://doi.org/10.71222/k0vycf75

Keywords:

multi-robot systems, community-aware networks, reinforcement learning, Q-learning, motion control

Abstract

Collaborative perception and autonomous motion control in complex environments are key challenges for multi-robot systems. As system scale increases, traditional centralized control and conventional distributed learning methods suffer from high communication overhead, low learning efficiency, and reduced stability. To address these limitations, this paper introduces a community-aware network framework that partitions the communication topology of a multi-robot system into communities with strong internal interactions. Based on this structure, a community-aware Q-learning and motion control method is proposed. The communication relationships among robots are modeled from the perspective of complex networks, and a community-based state representation is designed to capture local cooperation while reducing state dimensionality. Community-level information is further incorporated into the Q-value update through an improved reward mechanism, enhancing learning efficiency and convergence stability. In addition, learning decisions are mapped to continuous control inputs using robot kinematic models, enabling coordinated motion within communities and effective obstacle avoidance between communities. Simulation results demonstrate that the proposed method outperforms traditional multi-robot Q-learning approaches in convergence speed, path efficiency, and overall cooperative performance.
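The two core ideas in the abstract — partitioning the communication graph into communities, and folding a community-level reward into the Q-value update — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the paper implies a modularity-style community detection, which is stood in for here by connected components of the communication graph, and the weighting parameter `beta` on the community reward is an assumption.

```python
from collections import defaultdict

def communities_from_adjacency(adj):
    """Partition robots into communities via connected components of the
    communication graph (an illustrative stand-in for the community
    detection method implied by the paper)."""
    n = len(adj)
    seen, comms = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            # Follow communication links to unvisited neighbors.
            stack.extend(v for v in range(n) if adj[u][v] and v not in comp)
        seen |= comp
        comms.append(sorted(comp))
    return comms

def community_q_update(Q, state, action, next_state, r_own, r_comm,
                       alpha=0.1, gamma=0.9, beta=0.5):
    """One Q-learning update blending the robot's own reward with a
    community-level reward term; beta (assumed) weights the community term."""
    target = (r_own + beta * r_comm) + gamma * max(Q[next_state].values(),
                                                  default=0.0)
    Q[state][action] += alpha * (target - Q[state][action])
    return Q[state][action]
```

With `Q = defaultdict(lambda: defaultdict(float))`, each robot can run `community_q_update` using rewards aggregated over its own community, which is what lets learning remain local while still reflecting group-level cooperation.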


Published

15 February 2026

Section

Article

How to Cite

Shen, K. (2026). Research on Q-Learning and Motion Control of Multi-Robot Systems Based on Community-Aware Networks. Journal of Computer, Signal, and System Research, 3(2), 1-12. https://doi.org/10.71222/k0vycf75