This paper presents an image processing method for an autonomous mobile robot indoor navigation system using fluorescent lights. The vehicle self-localizes by detecting the position and orientation of the fluorescent tubes located above its desired path. A map of the lights, based on odometry data, is built in advance by the robot under the guidance of an operator.
During this teaching phase, the robot autonomously detects the lights and adds the appropriate information for each landmark to the lights' map. A graphical user interface is then used to define the trajectory the robot must follow with respect to the lights. While the robot is moving, an image processing algorithm similar to the one used during the teaching step compares the position and orientation of the detected lights to the map values, which enables the vehicle to cancel odometry errors. When a wheel-type mobile robot navigates on a two-dimensional plane, it can estimate its relative localization by summing the elementary displacements provided by incremental encoders mounted on its wheels.
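The odometry step described above can be sketched as a simple dead-reckoning update. The snippet below is a minimal illustration, not the paper's implementation: it assumes a differential-drive robot, and the function name `odometry_update` and the `wheel_base` parameter are introduced here for illustration only.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left, d_right: elementary displacements (m) of each wheel,
    derived from incremental encoder ticks. wheel_base is the
    distance between the two wheels (an assumed parameter).
    """
    d_center = (d_left + d_right) / 2.0          # forward displacement
    d_theta = (d_right - d_left) / wheel_base    # heading change
    # Integrate using the mid-point heading for better accuracy
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Summing elementary displacements step by step accumulates the pose;
# encoder noise makes the error grow, which the light landmarks correct.
pose = (0.0, 0.0, 0.0)
for dl, dr in [(0.010, 0.010), (0.009, 0.011), (0.010, 0.010)]:
    pose = odometry_update(*pose, dl, dr, wheel_base=0.35)
```

Because each update adds a small measurement error, the integrated pose drifts over time; this is exactly the drift that the fluorescent-light landmarks allow the robot to cancel.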