Taku Okawara, Kenji Koide, Aoki Takanose, Shuji Oishi, Masashi Yokozuka, Kentaro Uno, Kazuya Yoshida
Tohoku University, AIST
IEEE Robotics and Automation Letters (RA-L), May 2025
Legged robots have significant potential for transportation and inspection tasks in challenging environments (e.g., rough terrain) thanks to their locomotion capabilities, which are superior to those of wheeled robots. However, featureless environments (e.g., tunnels, long corridors, alleys, lunar surfaces) are challenging for odometry estimation based on exteroceptive sensors (e.g., LiDAR, camera), while deformable terrains (e.g., sandy beaches and gravel) are challenging for odometry estimation based on kinematic models.
To deal with these challenges, we present an odometry estimation algorithm that tightly fuses LiDAR-IMU constraints and online-trainable leg kinematic constraints incorporating tactile information (the neural leg kinematics model) on a unified factor graph. We propose the neural adaptive leg odometry factor, which simultaneously solves odometry estimation and online training of the neural leg kinematics model. Online training enhances the model's adaptability to changes in the robot's weight load and in terrain conditions; this enables effective use of foot tactile information (reaction forces) for motion prediction, because foot reaction forces vary with both weight load and terrain. To balance accuracy against training cost, we divide the model into two parts: one trained online (online learning model) and one trained offline (offline learning model), as sketched below.
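The following is a minimal PyTorch sketch of this two-part split, assuming a frozen offline-trained encoder followed by a small online-trainable MLP head; the layer sizes and input layout are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class NeuralLegKinematics(nn.Module):
    """Illustrative two-part split of the neural leg kinematics model.
    Layer sizes and input layout are assumptions, not the paper's design."""

    def __init__(self, in_dim: int = 24, feat_dim: int = 64, out_dim: int = 6):
        super().__init__()
        # Offline learning model: batch-trained on the offline datasets, then frozen.
        self.offline = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )
        for p in self.offline.parameters():
            p.requires_grad = False  # fixed during odometry estimation
        # Online learning model: small MLP head updated within the factor graph.
        self.online = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(),
            nn.Linear(32, out_dim),  # predicted body twist
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: concatenated joint states and foot reaction forces
        return self.online(self.offline(x))
```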
Foot tactile information (foot reaction force)-based motion prediction works in two steps: 1) estimate the acceleration induced by the reaction force by dividing the force by the robot's mass, and 2) estimate the robot's velocity by integrating this acceleration [1][2]. Adaptation to the robot's weight load is therefore needed to use foot tactile information effectively for motion prediction with the neural leg kinematics model. This adaptation matters for tasks such as delivery and transportation, where the robot's weight load can change mid-application; for this reason, we train the neural leg kinematics model online.
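A minimal numpy sketch of this two-step prediction, with gravity handling and frame conventions simplified; note how a stale mass estimate (i.e., an unmodeled weight load change) directly biases the integrated velocity, which is the failure mode that online adaptation addresses.

```python
import numpy as np

def predict_velocity(reaction_forces, mass, dt, v0=np.zeros(3)):
    """Reaction-force-based motion prediction (illustrative only):
    1) acceleration from the summed foot reaction forces via a = F / m,
    2) velocity by forward integration of that acceleration."""
    g = np.array([0.0, 0.0, -9.81])
    v = v0.copy()
    for f in reaction_forces:   # one summed force vector per time step
        a = f / mass + g        # net acceleration (world frame, simplified)
        v = v + a * dt          # forward Euler integration
    return v

# Example: a 15 kg robot standing still should predict ~zero velocity;
# with a wrong mass (changed weight load), the same forces predict drift.
forces = [np.array([0.0, 0.0, 15.0 * 9.81])] * 100
print(predict_velocity(forces, mass=15.0, dt=0.002))
```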
As shown in the overview image, the neural leg kinematics model is divided into the online learning model and the offline learning model. The offline learning model is trained on the 28 datasets shown in the following image. The reference twists for offline batch learning were obtained from LiDAR-IMU odometry using an omnidirectional-FOV LiDAR (Livox MID-360).
Regarding label data for contact states, we manually thresholded the 1D foot force sensor (Unitree footpad) values to create a label indicating whether each foot is in contact; a sketch of such thresholding follows. Note that the contact state labels are needed only for the offline batch learning phase (i.e., they are NOT needed for the online learning phase).
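A simple sketch of force-threshold contact labeling; the threshold value and the hysteresis band (added here to suppress chattering near the threshold) are placeholders, not the paper's settings.

```python
import numpy as np

def contact_labels(foot_force, threshold=20.0, hysteresis=5.0):
    """Threshold 1D foot force readings into binary contact labels
    for offline batch learning. Values are illustrative placeholders."""
    labels = np.zeros(len(foot_force), dtype=bool)
    in_contact = False
    for i, f in enumerate(foot_force):
        if not in_contact and f > threshold + hysteresis:
            in_contact = True   # firm touchdown
        elif in_contact and f < threshold - hysteresis:
            in_contact = False  # clear liftoff
        labels[i] = in_contact
    return labels
```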
As shown in the overview image, the online learning model of the neural leg kinematics model is trained online, jointly with odometry estimation, to keep the two mutually consistent. To conduct odometry estimation and online learning on a unified factor graph, we propose the neural adaptive leg odometry factor, a constraint relating the robot poses to the MLP parameters of the online learning model. Note that the offline learning model is fixed during odometry estimation (the online learning procedure). A toy sketch of this joint optimization is shown below.
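The following toy sketch shows the core idea of coupling the robot state and the online model's parameters through one residual, so a single optimizer updates both. It uses plain gradient descent as a stand-in for the paper's factor-graph solver, and all shapes and measurements are illustrative.

```python
import torch

torch.manual_seed(0)
twist = torch.zeros(6, requires_grad=True)   # body twist state variable
online_head = torch.nn.Linear(64, 6)         # online learning model (simplified)
features = torch.randn(64)                   # output of the frozen offline model
twist_meas = torch.randn(6)                  # stand-in LiDAR-IMU-derived twist

# Both the state and the online head's parameters enter one optimizer;
# the frozen offline model is deliberately excluded.
opt = torch.optim.Adam([twist, *online_head.parameters()], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    e_leg = twist - online_head(features)    # neural adaptive leg odometry factor
    e_lio = twist - twist_meas               # stand-in for LiDAR/IMU constraints
    loss = e_leg.pow(2).sum() + e_lio.pow(2).sum()  # sum of factor errors
    loss.backward()
    opt.step()
```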
The objective function e_ALL is defined as the sum of the LiDAR-based (matching cost factor), IMU-based (IMU preintegration factor), and neural leg kinematics model-based (neural adaptive leg odometry factor) motion constraints, plus additional constraints (e.g., a prior factor); see the paper for the details of each error term. We minimize e_ALL with the iSAM2 optimizer to perform state estimation.
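As a rough sketch of its form (the exact error terms, arguments, and weighting are defined in the paper), the objective combines the factors listed above over the robot states $\mathcal{X}$ and the online learning model's MLP parameters $\boldsymbol{\theta}$:

$$
e_{\mathrm{ALL}}(\mathcal{X}, \boldsymbol{\theta}) \;=\; \sum_{t} e^{\mathrm{LiDAR}}_{t}(\mathcal{X}) \;+\; \sum_{t} e^{\mathrm{IMU}}_{t}(\mathcal{X}) \;+\; \sum_{t} e^{\mathrm{leg}}_{t}(\mathcal{X}, \boldsymbol{\theta}) \;+\; e^{\mathrm{prior}}
$$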
The proposed odometry estimation algorithm was demonstrated with a narrow-FOV LiDAR (Livox AVIA) to imitate severely featureless environments, as shown in the following image.
The proposed method was evaluated on two experimental sequences (Campus and Sandy Beach):
| Method | Campus ATE [m] | Campus RTE [m] | Sandy Beach ATE [m] | Sandy Beach RTE [m] |
|---|---|---|---|---|
| Ours | 0.29 ± 0.12 | 0.13 ± 0.04 | 0.08 ± 0.05 | 0.12 ± 0.04 |
| Ours w/o online learning | 0.36 ± 0.17 | 0.17 ± 0.07 | 0.90 ± 0.70 | 0.20 ± 0.10 |
| Ours w/o tactile info. | 0.63 ± 0.30 | 0.15 ± 0.06 | 0.12 ± 0.06 | 0.12 ± 0.04 |
| FAST-LIO2 | Corrupted | Corrupted | Corrupted | Corrupted |
| Unitree odometry w/ LIO | 0.57 ± 0.32 | 0.15 ± 0.06 | No record | No record |
| Unitree odometry | 0.80 ± 0.46 | 0.17 ± 0.06 | No record | No record |