Collision Avoidance in Crowded Zones Using Adversarial Reinforcement Learning
Synopsis
Abstract: Reinforcement learning has advanced considerably in robotics, enabling agents that can travel through crowded zones without colliding with obstacles. Reinforcement learning lets an agent learn from its previous actions in an environment, with each action earning a reward. However, when an RL agent cannot learn the policy thoroughly, it may behave poorly in the real world, for example entering unknown territory because of poor actions or a reward function that shows no improvement. To address this, we propose an algorithm that applies an adversarial network to reinforcement learning (GA-RL), letting the agent learn the policy thoroughly by providing an observable reward derived from expert states; this strengthens reinforcement learning and produces better results. In our approach, the agent acts in a simulated environment while an adversarial network supplies the reward function: the discriminator optimizes the reward into a learnable, disentangled reward function that respects the ground truth, giving the RL agent a robust reward signal, and the policy in turn acts as the generator. To evaluate the method, we created our own robot model as a URDF, visualized it in RViz, and tested it on difficult tasks and scenarios to check whether the approach works in complex situations. Our experiments show that the robot model performs well at collision avoidance in these complex scenarios.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
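The adversarial reward described in the abstract follows the familiar GAIL/AIRL pattern: a discriminator is trained to separate expert (state, action) pairs from the policy's, and its output serves as a learned reward for the RL agent, so the policy effectively plays the role of the generator. The following is a minimal PyTorch sketch of one such discriminator update; all names here (RewardDiscriminator, discriminator_step, the dimensions, and the random batches) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of adversarial reward learning (GAIL/AIRL-style).
# Hypothetical names and shapes; the paper's real code is not shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardDiscriminator(nn.Module):
    """Scores (state, action) pairs: high for expert-like, low for policy-like."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),  # logit of D(s, a)
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

    def reward(self, obs, act):
        # Learned reward signal: logit(D) = log D - log(1 - D), as in AIRL.
        with torch.no_grad():
            return self.forward(obs, act)

def discriminator_step(disc, optim, expert_obs, expert_act, pol_obs, pol_act):
    """One adversarial update: expert pairs labeled 1, policy pairs labeled 0."""
    logits_exp = disc(expert_obs, expert_act)
    logits_pol = disc(pol_obs, pol_act)
    loss = (F.binary_cross_entropy_with_logits(logits_exp, torch.ones_like(logits_exp))
            + F.binary_cross_entropy_with_logits(logits_pol, torch.zeros_like(logits_pol)))
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

if __name__ == "__main__":
    # Toy dimensions for a mobile robot: 8-D observation, 2-D velocity command.
    obs_dim, act_dim = 8, 2
    disc = RewardDiscriminator(obs_dim, act_dim)
    optim = torch.optim.Adam(disc.parameters(), lr=3e-4)
    # Random stand-ins for expert demonstrations and policy rollouts.
    expert_obs, expert_act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
    pol_obs, pol_act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
    print("disc loss:", discriminator_step(disc, optim, expert_obs, expert_act, pol_obs, pol_act))
    print("learned reward:", disc.reward(pol_obs, pol_act).mean().item())
```

In a full training loop, this discriminator step would alternate with policy updates from any standard RL algorithm (e.g., PPO) that maximizes disc.reward on freshly collected rollouts, which is what casts the policy as the generator in the adversarial game.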