Autonomous Exploration With Deep Reinforcement Learning

    Author Name(s)

    Sara Centeno
    Mishell Beylik
    Hugo Quintero
    Andrew Herdering

    Faculty Advisor(s)

    Jason Isaacs

    Abstract

    We propose a technique for autonomous robotic exploration of unknown environments, intended for use in urban search and rescue. Performing search and rescue operations in an environment where natural or other disasters have occurred is dangerous and taxing for even the most skilled human teams. Existing solutions to the autonomous search problem focus on mapping these environments efficiently with LiDAR sensors. However, these solutions disregard camera viewing distance and the obstacles that limit the camera’s view. A deep reinforcement learning approach that maximizes the area viewed through platform cameras while avoiding obstacles has the potential to provide valuable feedback to rescue teams. In contrast to traditional map-based and camera-based frontier exploration, our deep reinforcement learning approach seeks to maximize the area searched, measured in square meters over time. The results of our simulation study indicate that this approach improves upon conventional frontier exploration by searching significantly more area in the same time.
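    The objective described above, maximizing newly viewed area while penalizing collisions, could be expressed as a per-step reward over a grid-based coverage map. The following is a minimal sketch, not the authors' implementation; the function name, the boolean-mask representation, and the `cell_area` and `collision_penalty` values are all illustrative assumptions.

    ```python
    import numpy as np

    def exploration_reward(view_mask, visited_mask, collided,
                           cell_area=0.25, collision_penalty=10.0):
        """Per-step reward: square meters of newly viewed area, minus a
        penalty if the platform collided with an obstacle this step.

        view_mask    -- boolean grid of cells visible to the camera now
        visited_mask -- boolean grid of cells viewed on any prior step
        cell_area    -- area of one grid cell in m^2 (assumed value)
        """
        # Cells seen this step that were never seen before.
        newly_viewed = np.logical_and(view_mask, np.logical_not(visited_mask))
        reward = newly_viewed.sum() * cell_area
        if collided:
            reward -= collision_penalty
        # Fold this step's view into the cumulative coverage map.
        visited_mask = np.logical_or(visited_mask, view_mask)
        return reward, visited_mask
    ```

    Under this shaping, an agent trained with any standard deep RL algorithm is driven toward viewpoints that reveal unseen cells, which is the square-meters-searched-over-time metric the abstract names; revisiting already viewed area yields zero reward.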

    Presentation

    Poster
