3D Vision Algorithms and Motion Control Compensation Suitable for Dynamic Modeling
DOI: https://doi.org/10.71222/eeb1ck26

Keywords: 3D dynamic modeling, motion control compensation, multi-sensor fusion, time series perception algorithm, predictive control, robot vision

Abstract
The accuracy of 3D modeling in dynamic scenes is constrained by motion blur and system latency. Traditional vision algorithms often produce geometric distortions when reconstructing high-speed moving objects, and response delays in the control loop further exacerbate these modeling errors. This paper introduces a collaborative framework that integrates time-series perception 3D vision with predictive compensation control. First, a motion state estimation module based on tightly coupled multi-sensor fusion is designed; it fuses RGB-D data and IMU measurements through adaptive Kalman filtering to achieve real-time decoupling of motion trajectories. Second, a hierarchical control compensation mechanism is developed that combines feedforward motion prediction from an LSTM network with online tuning of PID parameters driven by visual-inertial feedback, significantly reducing the modeling distortions caused by actuator delays. Validation on a robotic-arm dynamic grasping platform shows that, compared with the ORB-SLAM3 system, the point cloud registration error of the reconstructed model is reduced by 62.3% and the root mean square error (RMSE) of trajectory tracking is reduced by 58.1%. The framework effectively addresses the industry challenge of 'modeling-control' cross-interference in dynamic scenes, providing robust technical support for scenarios such as intelligent manufacturing and unmanned systems.
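To make the fusion step described above concrete, the following is a minimal sketch of an adaptive Kalman filter that fuses an IMU acceleration input with an RGB-D position measurement, assuming a 1-D constant-velocity model for clarity. The class name AdaptiveKalmanFusion, all noise parameters, and the residual-based adaptation of the measurement-noise matrix R are illustrative assumptions, not the paper's actual formulation or implementation.

```python
# Minimal adaptive Kalman fusion sketch (illustrative, not the paper's method).
# State x = [position, velocity]; IMU acceleration is the control input;
# an RGB-D position fix is the measurement; R is adapted from post-fit residuals.
import numpy as np


class AdaptiveKalmanFusion:
    def __init__(self, dt, q=1e-3, r=1e-2):
        self.dt = dt
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
        self.B = np.array([[0.5 * dt ** 2], [dt]])   # control (IMU accel) input matrix
        self.H = np.array([[1.0, 0.0]])              # RGB-D observes position only
        self.Q = q * np.eye(2)                       # process noise covariance
        self.R = np.array([[r]])                     # measurement noise (adapted online)
        self.x = np.zeros((2, 1))                    # [position, velocity]
        self.P = np.eye(2)                           # state covariance

    def predict(self, imu_accel):
        """Propagate the state using the IMU acceleration as a control input."""
        u = np.array([[imu_accel]])
        self.x = self.F @ self.x + self.B @ u
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x.copy()

    def update(self, rgbd_position, alpha=0.95):
        """Correct with an RGB-D position fix and adapt R from the residual."""
        z = np.array([[rgbd_position]])
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        # Adaptive step: nudge R toward the observed post-fit residual power.
        eps = z - self.H @ self.x
        self.R = alpha * self.R + (1 - alpha) * (eps @ eps.T + self.H @ self.P @ self.H.T)
        return self.x.copy()


# Hypothetical usage on synthetic data: a target accelerating at 0.5 m/s^2,
# with noisy IMU readings and 30 Hz RGB-D position fixes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fusion = AdaptiveKalmanFusion(dt=1 / 30)
    true_pos, true_vel = 0.0, 0.0
    for _ in range(300):
        accel = 0.5
        true_vel += accel * fusion.dt
        true_pos += true_vel * fusion.dt
        fusion.predict(imu_accel=accel + rng.normal(0, 0.05))
        fusion.update(rgbd_position=true_pos + rng.normal(0, 0.02))
    print("estimated state:", fusion.x.ravel())
```

In the full framework, the fused state estimate of this kind would feed both the 3D reconstruction front end and the feedforward (LSTM) plus PID compensation loop described in the abstract; the 1-D example only illustrates the prediction-correction structure and the online noise adaptation.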
20. Y. H. Khalil and H. T. Mouftah, “LiCaNet: Further enhancement of joint perception and motion prediction based on mul-ti-modal fusion,” IEEE Open J. Intell. Transp. Syst., vol. 3, pp. 222–235, 2022, doi: 10.1109/OJITS.2022.3160888.