Visual Object Tracking in Virtual Environmental Simulation Platform Using Deep Reinforcement Learning Technique

Authors

Khurshedjon Farkhodov
Department of Artificial Intelligence Convergence, Pukyong National University, Busan, Korea
Suk-Hwan Lee
Department of Computer Engineering, Dong-A University, Busan, Korea
Jin-Hyeok Park
Department of IT Convergence and Application Engineering, Pukyong National University, Busan, Korea
Ki-Ryong Kwon
Department of IT Convergence and Application Engineering, Pukyong National University, Busan, Korea

Synopsis

Background: Deploying object tracking models on hardware has become an increasingly demanding task, requiring versatile algorithms that remain robust across unpredictable tracking environments. Experimenting with real-time applications also introduces additional dependencies and requirements that only surface during real-time processing. Visual object tracking faces several well-known fundamental difficulties, such as occlusion, motion blur, background clutter, and changes in ambient lighting. To address these issues, the most common tracking approaches [1][2] track specific object classes using a variety of available feature learning algorithms.

Objective: The main objective of this work is to present an object tracking framework that differs from state-of-the-art tracking models by combining a virtual environment simulation platform (Aerial Informatics and Robotics Simulation, AirSim [3], with the City Environ scene) with a Deep Q-Learning algorithm, a deep reinforcement learning method. A minimal sketch of capturing frames from AirSim follows.
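As an illustration only, the sketch below shows one way to capture sequential images from AirSim through its Python client; the vehicle type, camera name ("0"), and scene setup are our assumptions and are not specified in this work.

```python
import airsim
import numpy as np

# Connect to a running AirSim simulation (e.g., the City Environ scene).
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)
client.takeoffAsync().join()

# Request an uncompressed RGB frame from the front camera ("0").
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
response = responses[0]

# Decode the flat byte buffer into an H x W x 3 image array
# (recent AirSim versions return 3-channel images).
frame = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
frame = frame.reshape(response.height, response.width, 3)
```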

Methodology: Using sequential images from the virtual simulation environment as inputs, our proposed network evaluates the scene with a deep reinforcement learning model and controls actions within the simulation. The deep reinforcement learning model was pre-trained on several sequential training image sets and fine-tuned during runtime tracking for flexibility in estimating the target and background variables.
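For illustration, a minimal Deep Q-Network sketch in PyTorch is shown below; the network shape, frame stacking, discrete action count, and 84x84 input size are our assumptions for a generic DQN, not details taken from this work.

```python
import torch
import torch.nn as nn

class DQN(nn.Module):
    """Maps a stack of frames to Q-values over discrete tracking actions."""
    def __init__(self, n_frames=4, n_actions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # assumes 84x84 inputs
            nn.Linear(512, n_actions),
        )

    def forward(self, x):
        return self.head(self.features(x))

def select_action(q_net, state, epsilon, n_actions=6):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(n_actions, (1,)).item()  # explore
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()  # exploit
```

In this sketch, each selected action would be translated into a control command sent back to the simulation, closing the perception-action loop described above.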

Result and discussion: We examined and assessed the proposed tracking model in the virtual simulation environment, where it achieved excellent real-time speed and accuracy compared with state-of-the-art deep network-based trackers.

Future Work: We plan to improve performance by refining the fine-tuning methodology and running further experiments under different weather conditions. We also intend to evaluate our model on other open-source video sequences and compare it against conventional RL-based DQN trackers.

MISS2021
Published
January 28, 2022