Vision-Based Autonomous Navigation in Unstructured Static Environments for Mobile Ground Robots
Abstract
This paper presents an algorithm for real-time vision-based autonomous navigation of mobile ground robots in unstructured static environments. Obstacle detection is based on Canny edge detection followed by a suite of algorithms that extract the locations of all obstacles in the robot's current view. To avoid obstacles, we design a reasoning process that incrementally builds an environment representation from the locations of the detected obstacles. This representation is then used to make optimal obstacle-avoidance decisions.
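
To illustrate the detection front end summarized above, the following sketch runs Canny edge detection on a camera frame and reports obstacle locations as bounding boxes of the resulting edge contours. It assumes OpenCV (cv2); the thresholds and the contour-based extraction step are illustrative placeholders, not the exact parameters or extraction algorithms of our pipeline, which are described in the body of the paper.

    import cv2

    def detect_obstacles(frame, low_thresh=50, high_thresh=150, min_box_area=100):
        """Sketch of a Canny-based obstacle detector.

        All numeric thresholds here are hypothetical defaults, not the
        values used by the pipeline described in this paper.
        """
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise before edge detection
        edges = cv2.Canny(blurred, low_thresh, high_thresh)

        # Group edge pixels into contours; each sufficiently large contour
        # is reported as one obstacle bounding box (OpenCV 4 API).
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > min_box_area:  # discard tiny edge fragments
                boxes.append((x, y, w, h))
        return boxes  # list of (x, y, w, h) in image coordinates

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)  # the robot's forward-facing camera
        ok, frame = cap.read()
        if ok:
            print(detect_obstacles(frame))
        cap.release()

In a navigation loop, the returned boxes would feed the reasoning process that builds the environment representation and selects avoidance maneuvers.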