Entry Date:
February 20, 2008

Learning Applied to Ground Robots (LAGR)

Principal Investigator Steven Dubowsky

Co-investigator Karl Iagnemma


The goal of the LAGR program is to develop a new generation of learned perception and control algorithms for autonomous ground vehicles, and to integrate these learned algorithms with a highly capable robotic ground vehicle.

The LAGR Program, sponsored by the Defense Advanced Research Projects Agency (DARPA), takes the form of a series of competitions between the eight participating organizations (Stanford/MIT, Applied Perception, Georgia Tech, JPL, Net-Scale, NIST, Penn, and SRI). The competitions are run approximately once a month. Each team uses the same robot hardware (described below) and develops software control algorithms for that robot. The emphasis of the competition is on developing learning control algorithms that allow the robot to reach the target more quickly and more reliably than the "baseline" controller.

The MIT Field and Space Robotics Laboratory is partnered with the Stanford AI Lab for the LAGR program. The FSRL is developing models and algorithms to analyze robot mobility in rough, slippery, and deformable terrain.

The Robot -- The LAGR robot was developed by Carnegie Mellon University. Each team has two identical copies of the robot for testing. During competitions, all teams load their control software onto the government team’s robot.

The LAGR robot is front-wheel driven by two direct-drive DC motors. Steering is accomplished via differential wheel velocity.
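Differential steering of this kind can be sketched with the standard differential-drive kinematic relations. The function below is a minimal illustration, not LAGR controller code; the wheel speeds and track width used in the example are assumed values for demonstration only.

```python
# Sketch of differential-drive kinematics: mapping left/right wheel
# speeds to the robot's forward velocity and yaw rate.
# (Illustrative only -- not the actual LAGR control software.)

def body_velocities(v_left, v_right, track_width):
    """Return (forward velocity m/s, yaw rate rad/s) for given wheel speeds.

    v_left, v_right : ground speeds of the left and right wheels (m/s)
    track_width     : lateral distance between the wheels (m), assumed value
    """
    v = (v_left + v_right) / 2.0                 # forward velocity
    omega = (v_right - v_left) / track_width     # yaw rate, CCW positive
    return v, omega

if __name__ == "__main__":
    # Equal wheel speeds: robot drives straight.
    print(body_velocities(0.5, 0.5, 0.6))   # (0.5, 0.0)
    # Right wheel faster: robot turns left (positive yaw rate).
    print(body_velocities(0.3, 0.5, 0.6))
```

Driving the wheels at equal speeds yields pure translation; any speed difference produces a turn toward the slower wheel, which is the steering mechanism described above.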

The onboard sensors include:

(*) Stereo vision: dual Point Grey Bumblebee stereo cameras
(*) GPS: Garmin
(*) IMU: XSENS 3 axis gyro/compass/accelerometer
(*) Bumper-activated switches and IR rangefinders