Field Day


Under a blue sky, Andy practiced navigating on the Moon.

One of the most crucial and difficult parts of navigation is knowing where you are right now. On Andy’s mission, that’s even more true than usual: to win the Google Lunar XPrize, teams have to prove that they've driven 500 meters. Near-real-time localization is mission critical.

But there’s no magic bullet. The big techniques are wheel odometry, inertial measurement units (IMUs), and visual odometry; all are powerful, but each has its downsides. In principle you could just count wheel turns or watch the ground move to infer the rover's movement. In practice, each has strengths and weaknesses, but the team thinks they can cover for each other: wheel ticks are good at measuring distance but bad at measuring turn angles, IMUs can measure angles but not traverses, and visual odometry is decent at both and lets us refine our estimates. Because no one sensor can carry the mission, Andy will fly with all three. Of the three, wheel odometry and IMUs are the most pre-packaged: both can be purchased as part of a mature, hardened driving system, while it's been on the team to develop its own visual odometry tools.
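To make that division of labor concrete, here is a minimal dead-reckoning sketch in Python: the wheels supply distance, the IMU supplies heading, and a visual-odometry correction would be layered on top. This is purely illustrative; the function and variable names are hypothetical and don't come from the team's actual navigation software.

```python
import math

def fuse_step(pose, wheel_distance, imu_heading_rad):
    """One timestep of complementary dead reckoning (hypothetical sketch).

    Wheel ticks are trusted for how far the rover moved; the IMU is
    trusted for which way it was facing. Returns the new (x, y, heading).
    """
    x, y, _ = pose
    x += wheel_distance * math.cos(imu_heading_rad)
    y += wheel_distance * math.sin(imu_heading_rad)
    return (x, y, imu_heading_rad)

# Example: drive 0.5 m per step while turning gently.
pose = (0.0, 0.0, 0.0)
for step in range(4):
    pose = fuse_step(pose, wheel_distance=0.5, imu_heading_rad=step * 0.1)
print(pose)
```

In a real system, visual odometry would periodically correct the drift that accumulates in a chain of steps like this.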

This isn’t the team's first pass at this crucial tool, but that previous work wasn’t the end of the story. The earlier odometry took a long time to compute and required stereo vision. As part of the move toward a top-down fisheye camera and an emphasis on fast odometry, that old work was revisited and improved. That leap meant the team's old navigation data might no longer apply, so the plan was to recalibrate the model with new field data.

Along with high definition front- and back-facing cameras, a top-down fisheye camera like this might fly with Andy.

It’s processes like this where Andy 2, the engineering prototype, comes into its own. For rapid development (this camera assembly didn’t exist before the spring), a rugged platform is an absolute must. The team was able to test the new capabilities to their limits with minimal risk to the one-of-a-kind prototype.

But a good rover is nothing without a good test site. Where can you go that looks like the Moon? Visual odometry, unlike human vision, is very sensitive to the kinds of objects it can see. Artificial environments tend to have a lot of sharp edges, recognizable objects, and looping paths; the Moon, we think, doesn’t. Unfortunately for the team, those features are exactly what computer vision thrives on, and not having them makes the task that much harder. While Carnegie Mellon is a world leader in simulating regolith at the lab scale, that doesn't extend to the large, featureless spaces you need for a proper visual Moon simulation.
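One way to see the problem is simply to count how many trackable features a detector finds in a frame. The sketch below uses OpenCV's ORB detector as a stand-in for whatever feature tracker the team actually uses, and the image paths are placeholders: a cluttered lab scene typically yields hundreds of keypoints, while a smooth, featureless basin may yield only a handful.

```python
import cv2

def count_features(image_path, max_features=1000):
    """Count ORB keypoints in an image (illustrative only)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints = orb.detect(img, None)
    return len(keypoints)

# Hypothetical example images: a busy indoor scene vs. bare terrain.
print(count_features("lab_scene.png"))
print(count_features("slag_basin.png"))
```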

Pittsburgh, fortunately, has our backs. It turns out that an ideal location for these tests is a slag heap. Slag, the rock left over after metal is extracted from ore, is only a decent simulant of lunar conditions; the real reason slag heaps work so well is that they’re absolutely massive and featureless. Though Pittsburgh’s days as the steel capital of the world are behind it, it still takes a lot of rock to yield a comparatively small amount of metal, and that makes those sites perfect playgrounds for a lunar rover.

A small slice of this LaFarge site contains both driving sites. LaFarge is a great example of the healthy cooperation you see in the resurgent Pittsburgh business sector.

So the team assembled a mobile command center and trucked it out to the LaFarge site. One of our desirements for the test was to keep practicing the team's mission control, ironing out bugs in an environment as realistic as we could make it. To that end, the drivers weren't allowed to see Andy or its location directly: all navigation had to be done with pre-made maps and Andy’s fisheye camera.

The plan ended up simple, as good plans often are. Andy would drive several loops to test how well its navigation computer performs “loop closure”. Loop closure is a way of retroactively correcting localization errors: once the beginning and end of a looping trek are identified as the same location, the accumulated error can be smoothed away by working backwards. The team drove three such loops: a simple one in test site A (the right basin in the image above) and two more elaborate ones in test site B (the left basin). Each successive loop was more complicated than the one before, and each answered a battery of questions, including loop closure itself.
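The "working backwards" step can be illustrated with a toy correction: once the end of a loop is recognized as the start, the drift revealed by that mismatch is pushed back along the trajectory. The NumPy sketch below distributes the error linearly, which is far simpler than a real pose-graph optimizer but captures the idea.

```python
import numpy as np

def close_loop(poses):
    """Given (x, y) pose estimates where the last pose should coincide
    with the first, spread the accumulated drift back along the path.
    A toy stand-in for full pose-graph optimization."""
    poses = np.asarray(poses, dtype=float)
    drift = poses[-1] - poses[0]               # error revealed by loop closure
    weights = np.linspace(0.0, 1.0, len(poses))[:, None]
    return poses - weights * drift             # later poses get more correction

# A square loop whose estimate has drifted 0.4 m by the time it "returns" home.
loop = [(0, 0), (10, 0.1), (10.2, 10), (0.3, 10.2), (0.4, 0.3)]
print(close_loop(loop))
```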

To learn the limits of the fisheye camera, on one loop the drivers were told to go up the hill until they thought Andy was imminently at risk.

Overall, the test went exceptionally smoothly, thanks to the excellent attention to detail in its planning and execution. The behind-the-scenes work by the systems engineering team, refining and structuring the test, paid off. Even though a few serious bugs were uncovered during the test, the team was able to work past them quickly and still achieve all of the objectives. In many ways, that's the most successful kind of test.

The results weren’t surprising: visual odometry is a crucial part of our localization toolset. With this new, properly calibrated navigation system, Andy is one step closer to its mission to the Moon.
