Extraterrestrial landers sent to gather samples from the surface of distant moons and planets have limited time and battery power to complete their mission. Aerospace and computer science researchers at The Grainger College of Engineering, University of Illinois Urbana-Champaign trained a model to assess terrain and scoop samples quickly and autonomously, then watched it demonstrate that skill on a robot at a NASA facility.
Aerospace Ph.D. student Pranay Thangeda said they used their robotic lander arm to collect scooping data on a variety of materials, from sand to rocks, resulting in a dataset of 6,700 data points. The two terrains in NASA’s Ocean World Lander Autonomy Testbed (OWLAT) at the Jet Propulsion Laboratory were brand new to the model, which operated the JPL robotic arm remotely.
The study, “Learning and Autonomy for Extraterrestrial Terrain Sampling: An Experience Report from OWLAT Deployment,” was published in the AIAA SciTech 2024 Forum.
“We just had a network link over the internet,” Thangeda said. “I connected to the test bed at JPL and got an image from their robotic arm’s camera. I ran it through my model in real time. The model chose to start with the rock-like material and learned on its first try that it was an unscoopable material.”
Based on what it learned from the image and that first attempt, the robotic arm moved to another, more promising area and successfully scooped the other terrain, a finer-grained material. Because one of the mission requirements is that the robot scoop a specific volume of material, the JPL team measured the volume of each scoop until the robot had collected the full amount.
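In pseudocode, that deployment behaves like a simple closed loop. This is a minimal sketch only: the camera, model, and arm interfaces and the target volume below are hypothetical stand-ins, not the testbed's actual API.

```python
# Minimal sketch of the scoop-until-target loop (all names hypothetical).
TARGET_VOLUME = 4e-4                      # required sample volume in m^3 (illustrative)

collected = 0.0
while collected < TARGET_VOLUME:
    image = camera.capture()              # image from the arm-mounted camera
    site = model.best_scoop_site(image)   # pick the most scoopable-looking spot
    result = arm.scoop(site)              # attempt the scoop at that spot
    model.update(site, result)            # adapt online from the outcome
    collected += result.measured_volume   # 0.0 when the attempt fails
```

Each failed attempt, such as the first try on the rock-like material, still feeds the update step, which is what lets the model rule out an unscoopable region after a single trial.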
Thangeda said that although this work was originally motivated by exploration of ocean worlds, their model can be used on any surface.
“Usually, when you train models based on data, they only work on the same data distribution. The beauty of our method is that we didn’t have to change anything to work on NASA’s test bed because, in our method, we are adapting online.
“Even though we never saw any of the terrains at the NASA test bed, without any fine tuning on their data, we managed to deploy the model trained here directly over there, and the model deployment happened remotely—exactly what autonomous robot landers will do when deployed on a new surface in space.”
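The deployed model is a learned, vision-based sampler whose details are in the paper. Purely as a toy illustration of what "adapting online" means, and not as the paper's actual method, the same idea can be shown with a bandit-style estimator that sharpens per-region scoopability beliefs after every attempt, with no offline retraining:

```python
import numpy as np

# Toy illustration of online adaptation (NOT the paper's method): keep a
# Beta posterior over each candidate region's scoop-success probability
# and refine it after every attempt on the new, unseen terrain.
class OnlineScoopability:
    def __init__(self, n_regions, prior_success=1.0, prior_failure=1.0):
        self.alpha = np.full(n_regions, prior_success)  # pseudo-count of successes
        self.beta = np.full(n_regions, prior_failure)   # pseudo-count of failures

    def choose(self, rng):
        # Thompson sampling: draw a plausible success rate per region and
        # try the region that currently looks most promising.
        return int(np.argmax(rng.beta(self.alpha, self.beta)))

    def update(self, region, succeeded):
        # A single observation is enough to steer away from a failing region,
        # mirroring the one-try rejection of the rock-like material.
        if succeeded:
            self.alpha[region] += 1
        else:
            self.beta[region] += 1
```

Because these updates happen at deployment time, nothing in such a scheme depends on the training-time data distribution, which is the property the quote describes.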
Thangeda’s adviser, Melkior Ornik, leads one of four projects, each solving a different problem; the only commonality between them is that they are all part of the Europa program and use this lander as a test bed.
“We were one of the first to demonstrate something meaningful on their platform designed to mimic a Europa surface. It was great to finally see something you worked on for months being deployed on a real, high-fidelity platform. It was cool to see the model being tested on a completely different terrain and a completely different robot platform that we’d never trained on. It was a boost of confidence in our model and our approach.”
Thangeda said the feedback they received from the JPL team was positive, too. “They were happy that we were able to deploy the model without a lot of changes. There were some issues when we were just starting out, but that turned out to be because we were the first to try to deploy a model on their platform: network issues and some simple bugs in the software that they had to fix.
“Once we got it working, people were surprised that it was able to learn within like one or two samples. Some didn’t even believe it until they were shown the exact results and methodology.”
Thangeda said one of the significant issues he and his team had to overcome was bringing their setup into parity with NASA’s.
“Our model was trained on a camera in a particular location with a particular shaped scoop. The location and the shape of the scoop were two things we had to address. To make sure their robot had the exact same scoop shape, we sent them a CAD design and they 3D printed it and attached it to their robot.
“For the camera, we took their RGB-D point cloud information and reprojected it in real time to a different viewpoint, so that it matched what we had in our robot before we sent it to the model. That way, what the model saw was a similar viewpoint to what it saw during training.”
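That reprojection is, in outline, a standard pinhole-camera warp: back-project each pixel using the depth channel, rigidly transform the resulting points into the training camera's frame, and project them onto the new image plane. A minimal numpy sketch, assuming known intrinsics for both cameras and a known source-to-destination transform (the names, shapes, and naive z-buffer strategy here are illustrative, not the deployment's actual code):

```python
import numpy as np

def reproject_rgbd(rgb, depth, K_src, K_dst, T_dst_from_src, out_shape):
    """Warp an RGB-D frame to a new viewpoint with a naive z-buffer."""
    h, w = depth.shape
    # Back-project every valid source pixel into 3-D camera coordinates.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    u, v, z = us[valid], vs[valid], depth[valid]
    x = (u - K_src[0, 2]) * z / K_src[0, 0]
    y = (v - K_src[1, 2]) * z / K_src[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])        # 4 x N homogeneous points

    # Rigidly transform the points into the destination camera's frame.
    pts_dst = T_dst_from_src @ pts
    xd, yd, zd = pts_dst[0], pts_dst[1], pts_dst[2]
    front = zd > 0                                    # keep points in front of the camera
    xd, yd, zd = xd[front], yd[front], zd[front]
    colors = rgb[valid][front]

    # Project into the destination image plane.
    ud = np.round(K_dst[0, 0] * xd / zd + K_dst[0, 2]).astype(int)
    vd = np.round(K_dst[1, 1] * yd / zd + K_dst[1, 2]).astype(int)
    H, W = out_shape
    inside = (ud >= 0) & (ud < W) & (vd >= 0) & (vd < H)
    ud, vd, zd, colors = ud[inside], vd[inside], zd[inside], colors[inside]

    # Naive z-buffer: draw far points first so near points overwrite them.
    out_rgb = np.zeros((H, W, 3), dtype=rgb.dtype)
    out_depth = np.full((H, W), np.inf)
    order = np.argsort(-zd)
    out_rgb[vd[order], ud[order]] = colors[order]
    out_depth[vd[order], ud[order]] = zd[order]
    out_depth[np.isinf(out_depth)] = 0.0
    return out_rgb, out_depth
```

A warp like this leaves holes where the new viewpoint sees surfaces the original camera did not; a real-time pipeline would typically fill or mask those regions before handing the image to the model.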
Thangeda said they plan to build on this research toward more autonomous excavation and automated construction work, such as digging a canal. These tasks are much easier for humans to do; it is hard for a model to learn to do them autonomously because the interactions involved are very nuanced.
More information:
Pranay Thangeda et al., “Learning and Autonomy for Extraterrestrial Terrain Sampling: An Experience Report from OWLAT Deployment,” AIAA SciTech 2024 Forum (2024). DOI: 10.2514/6.2024-1962