Greenhouse Robot Navigation Using KLT Feature Tracking for Visual Odometry

Authors

  • P. J. Younse
  • T. F. Burks

Abstract

A visual odometer was developed for an autonomous greenhouse sprayer to estimate vehicle translation and rotation relative to the world coordinate system during navigation. Digital images were taken with a CCD camera mounted on the robot. Features of 7 x 7 pixels were selected in the image using the KLT algorithm (Csetverikov, 2004) and tracked from image to image by finding the best 7 x 7 pixel match of each feature within a 25 x 25 pixel search box. Five features were tracked by the odometer, and vehicle rotation and translation were estimated by analyzing their movement. Tests were run to verify the visual odometer’s accuracy during translation and rotation and on various surfaces. The visual odometer ran at an average of 10 Hz during experimentation. Translation tests in a lab environment gave an average error of 4.85 cm for a 30.5 cm forward translation and an average error of 12.4 cm for a 305 cm translation. Rotation tests in a lab environment gave an average error of 1° for a 45° rotation and an 8° error for a 180° rotation about the vehicle z-axis. Tests completed on concrete, sand, and gravel demonstrated the odometer’s adaptability to different ground surfaces common in greenhouses. The visual odometer was successfully integrated into a visual navigation system for intersection navigation of an autonomous greenhouse sprayer.
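The abstract outlines a three-step pipeline: select a small number of KLT features, re-locate each 7 x 7 patch in the next frame by searching a 25 x 25 window, and estimate rotation and translation from the feature displacements. The sketch below is a minimal illustration of that scheme under stated assumptions, not the authors’ implementation: it assumes grayscale images, uses OpenCV’s Shi-Tomasi corner selector as a stand-in for the KLT feature-selection step, matches patches by exhaustive sum-of-squared-differences search, and recovers only an image-plane rigid motion (mapping that motion to vehicle translation and rotation would require camera calibration, which is omitted). All function and parameter names are illustrative.

    # Sketch of KLT-style feature tracking with a fixed 7x7 patch and 25x25 search box.
    import numpy as np
    import cv2

    PATCH = 7        # feature patch size (7 x 7 pixels)
    SEARCH = 25      # search window size (25 x 25 pixels)
    N_FEATURES = 5   # number of features tracked by the odometer

    def select_features(gray):
        """Pick corner features (Shi-Tomasi criterion, a common KLT-style selector)."""
        pts = cv2.goodFeaturesToTrack(gray, maxCorners=N_FEATURES,
                                      qualityLevel=0.01, minDistance=20)
        return [] if pts is None else [tuple(p.ravel().astype(int)) for p in pts]

    def track_feature(prev, curr, x, y):
        """Find the best 7x7 match of the patch at (x, y) inside a 25x25 search box."""
        r, s = PATCH // 2, SEARCH // 2
        template = prev[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
        best, best_xy = np.inf, (x, y)
        # Candidate centers keep the 7x7 patch inside the 25x25 window around (x, y).
        for dy in range(-(s - r), s - r + 1):
            for dx in range(-(s - r), s - r + 1):
                cand = curr[y + dy - r:y + dy + r + 1,
                            x + dx - r:x + dx + r + 1].astype(np.float32)
                if cand.shape != template.shape:   # skip patches clipped by the border
                    continue
                ssd = np.sum((cand - template) ** 2)
                if ssd < best:
                    best, best_xy = ssd, (x + dx, y + dy)
        return best_xy

    def estimate_motion(old_pts, new_pts):
        """Least-squares 2-D rigid transform (rotation angle, translation) between frames."""
        P, Q = np.asarray(old_pts, float), np.asarray(new_pts, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
        return theta, t

The exhaustive SSD search above simply mirrors the fixed 25 x 25 search box described in the abstract; in practice, OpenCV’s cv2.calcOpticalFlowPyrLK is the standard KLT tracker and would replace track_feature in a production system.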

Published

2007-07-01

Section

Automation Technology for Off-Road Equipment-2006