Abstract
Active object tracking has emerged as a prominent research topic, but most existing methods are unsuitable for tracking ground objects from high altitude. This paper therefore proposes an air-to-ground active object tracking method for high-altitude environments based on reinforcement learning, consisting of a state recognition model and a reinforcement learning module. The state recognition model leverages the correlation between observed states and image quality (measured by object recognition probability) as prior knowledge to guide the training of the reinforcement learning module, which then actively controls a PTZ camera to achieve stable tracking and to recover tracking after the object is lost. The study also introduces a UE-free simulator that increases training efficiency by more than nine times. High-altitude experiments show that the proposed method is significantly more stable and robust than the PID baseline, and that it markedly improves the image quality of the observations.
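As a rough illustration of the architecture the abstract describes, the sketch below wires a state recognition model's output (the object-recognition probability) into the reward of a policy that steers a PTZ camera. All class and function names here are hypothetical, and the hand-coded rules are stand-ins: the paper's actual components are learned networks trained with reinforcement learning.

```python
class StateRecognitionModel:
    """Stand-in for the paper's state recognition model: maps an
    observation to an estimated object-recognition probability."""

    def predict_probability(self, observation):
        # A trained model would score a camera frame; here we fake a
        # score from how centred the object is (normalised coordinates).
        cx, cy = observation
        return max(0.0, 1.0 - (abs(cx - 0.5) + abs(cy - 0.5)))


class PTZPolicy:
    """Stand-in for the RL policy that outputs pan/tilt adjustments."""

    def act(self, observation):
        cx, cy = observation
        pan = 0.1 if cx < 0.5 else -0.1   # steer the object towards centre
        tilt = 0.1 if cy < 0.5 else -0.1
        return pan, tilt


def training_step(model, policy, observation):
    """One interaction step: the recognition probability serves as the
    reward signal guiding the tracking policy, mirroring how the state
    recognition model supplies prior knowledge to RL training."""
    action = policy.act(observation)
    reward = model.predict_probability(observation)
    return action, reward
```

In this sketch a well-centred observation yields a reward near 1, so maximising return pushes the policy to keep the object centred, which is the intuition behind using recognition probability as an image-quality proxy.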
Notes
- 1.
\(Loss=\frac{\sum _{i=1}^N|p-\bar{p}|}{N}\), where \(N\) is the training batch size, \(p\) the predicted object-recognition probability, and \(\bar{p}\) its ground-truth value.
- 2.
The learning rate is updated as \(lr_{epoch} = \frac{1}{1 + 0.02 \times epoch}\), where \(epoch\) is the iteration count.
- 3.
The magnitude is assessed using the thresholds of Romano et al. [8]: \(|\delta| > 0.474\) is considered a large effect.
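The three notes above can be expressed compactly in code. The following sketch is illustrative only: the function names are my own, and the base learning rate to which the decay factor applies is an assumption, since the note gives only the factor itself.

```python
def recognition_loss(p, p_bar):
    """Note 1: mean absolute error between predicted recognition
    probabilities p and their ground-truth values p_bar, averaged
    over a batch of size N."""
    n = len(p)
    return sum(abs(a - b) for a, b in zip(p, p_bar)) / n


def learning_rate(epoch, base_lr=1.0):
    """Note 2: inverse-time decay lr = 1 / (1 + 0.02 * epoch).
    base_lr is an assumed initial rate scaled by the decay factor."""
    return base_lr / (1.0 + 0.02 * epoch)


def cliffs_delta(xs, ys):
    """Note 3 (effect size): Cliff's delta, the normalised difference
    between the number of pairs with x > y and pairs with x < y."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))


def magnitude(delta):
    """Romano et al. [8] cut-offs: |delta| >= 0.474 counts as large;
    the conventional smaller thresholds are 0.147 (negligible/small)
    and 0.33 (small/medium)."""
    d = abs(delta)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"
```

For example, two fully separated samples give \(\delta = 1\), a "large" effect, while identical samples give \(\delta = 0\).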
References
Cliff, N.: Ordinal Methods for Behavioral Data Analysis. Psychology Press, London (2014)
Cui, Y., Hou, B., Wu, Q., Ren, B., Wang, S., Jiao, L.: Remote sensing object tracking with deep reinforcement learning under occlusion. IEEE Trans. Geosci. Remote Sens. 60, 1–13 (2021)
Devo, A., Dionigi, A., Costante, G.: Enhancing continuous control of mobile robots for end-to-end visual active tracking. Robot. Auton. Syst. 142, 103799 (2021)
Jeong, H., Hassani, H., Morari, M., Lee, D.D., Pappas, G.J.: Deep reinforcement learning for active target tracking. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 1825–1831. IEEE (2021)
Kyrkou, C.: C\(^3\)Net: end-to-end deep learning for efficient real-time visual active camera control. J. Real-Time Image Proc. 18, 1421–1433 (2021)
Luo, Y., et al.: Calibration-free monocular vision-based robot manipulations with occlusion awareness. IEEE Access 9, 85265–85276 (2021)
Ma, X., Wang, Y., Yang, S., Niu, W., Ma, W.: Trajectory tracking of an underwater glider in current based on deep reinforcement learning. In: OCEANS 2021, San Diego-Porto, pp. 1–7. IEEE (2021)
Romano, J., Kromrey, J.D., Coraggio, J., Skowronek, J.: Appropriate statistics for ordinal level data: should we really be using t-test and Cohen’s d for evaluating group differences on the NSSE and other surveys. In: Annual Meeting of the Florida Association of Institutional Research, vol. 177, p. 34 (2006)
Rosner, B., Glynn, R.J., Lee, M.L.T.: The Wilcoxon signed rank test for paired comparisons of clustered data. Biometrics 62(1), 185–192 (2006)
Ross, D.A., Lim, J., Lin, R.-S., Yang, M.-H.: Incremental learning for robust visual tracking. Int. J. Comput. Vis. 77(1–3), 125–141 (2008)
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017)
Xi, M., Zhou, Y., Chen, Z., Zhou, W., Li, H.: Anti-distractor active object tracking in 3D environments. IEEE Trans. Circuits Syst. Video Technol. 32(6), 3697–3707 (2021)
Yang, J., Tang, Z., Pei, Z., Song, X.: A novel motion-intelligence-based control algorithm for object tracking by controlling pan-tilt automatically. Math. Probl. Eng. 2019 (2019)
Yao, B.: GARAT: generative adversarial learning for robust and accurate tracking. Neural Netw. 148, 206–218 (2022)
Yun, S., Choi, J., Yoo, Y., Yun, K., Choi, J.Y.: Action-driven visual object tracking with deep reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. 29(6), 2239–2252 (2018)
Zhang, H., He, P., Zhang, M., Chen, D., Neretin, E., Li, B.: UAV target tracking method based on deep reinforcement learning. In: 2022 International Conference on Cyber-Physical Social Intelligence (ICCSI), pp. 274–277. IEEE (2022)
Zhao, W., Meng, Z., Wang, K., Zhang, J., Lu, S.: Hierarchical active tracking control for UAVs via deep reinforcement learning. Appl. Sci. 11(22), 10595 (2021)
Zhong, F., Sun, P., Luo, W., Yan, T., Wang, Y.: Towards distraction-robust active visual tracking. In: International Conference on Machine Learning, pp. 12782–12792. PMLR (2021)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Liu, X., Ren, W., Tan, J., Zhang, X., Ren, X., Dai, H. (2023). Air-to-Ground Active Object Tracking via Reinforcement Learning. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14259. Springer, Cham. https://doi.org/10.1007/978-3-031-44223-0_2
DOI: https://doi.org/10.1007/978-3-031-44223-0_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44222-3
Online ISBN: 978-3-031-44223-0
eBook Packages: Computer Science, Computer Science (R0)