
Supplementary material for "Self-Supervised Learning Of Visual Pose Estimation Without Pose Labels By Classifying LED States"

idsia-robotics/ssl-pose-estimation-without-pose-labels


Self-Supervised Learning Of Visual Pose Estimation
Without Pose Labels By Classifying LED States

Nicholas Carlotti, Mirko Nava, and Alessandro Giusti

Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Lugano, Switzerland

Abstract

We introduce a model for monocular RGB relative pose estimation of a ground robot that trains from scratch without pose labels or prior knowledge of the robot's shape or appearance. At training time, we assume: (i) a robot fitted with multiple LEDs, whose states are independent and known at each frame; (ii) knowledge of the approximate viewing direction of each LED; and (iii) availability of a calibration image with a known target distance, to resolve the scale ambiguity of monocular depth estimation. Training data is collected by a pair of robots moving randomly, requiring no external infrastructure or human supervision. Our model trains on the task of predicting, from an image, the state of each LED on the robot. In doing so, it learns to predict the position of the robot in the image, its distance, and its relative bearing. At inference time, the state of the LEDs is unknown, can be arbitrary, and does not affect pose estimation performance. Quantitative experiments indicate that our approach is competitive with state-of-the-art approaches that require supervision from pose labels or a CAD model of the robot, generalizes to different domains, and handles multi-robot pose estimation.
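The pretext task above, predicting the known on/off state of each LED from an image, is naturally cast as multi-label binary classification. The sketch below illustrates one plausible form of such an objective: a per-LED binary cross-entropy, where the labels come for free from the known LED states. The function name and the use of plain NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def led_state_loss(pred_probs, led_states):
    """Multi-label binary cross-entropy over per-LED on/off predictions.

    pred_probs: (N, L) array of predicted probabilities that each of
                L LEDs is lit, for N training images.
    led_states: (N, L) array of ground-truth LED states (0 or 1),
                known at training time with no human labeling.

    Note: illustrative sketch only; the paper's actual loss may differ.
    """
    eps = 1e-7  # avoid log(0)
    p = np.clip(pred_probs, eps, 1.0 - eps)
    bce = -(led_states * np.log(p) + (1.0 - led_states) * np.log(1.0 - p))
    return bce.mean()
```

Intuitively, to classify LED states well, the network must first locate the robot in the image and reason about which side of it faces the camera, which is what yields position and bearing as by-products.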



Figure 1: Overview of the approach: (a) given an input image, our approach predicts the robot's location in the image and its bearing relative to the camera. (b) We apply this mechanism over multiple rescaled versions of the input image to infer the robot's distance to the camera.
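The multi-scale mechanism in Figure 1(b) can be related to distance under a standard pinhole-camera assumption: apparent size is inversely proportional to distance, so the rescale factor at which the detector responds most strongly can be mapped to a metric distance using the single calibration image with known target distance. The following one-liner is a hedged sketch of that mapping; all names are hypothetical.

```python
def distance_from_scale(best_scale, calib_scale, calib_distance):
    """Pinhole-model heuristic: apparent size is proportional to 1/distance.

    best_scale:     rescale factor at which the robot is best detected.
    calib_scale:    rescale factor observed in the calibration image.
    calib_distance: known metric distance in the calibration image.

    A robot detected at twice the calibration scale appears twice as
    large, hence is at half the calibration distance.
    """
    return calib_distance * calib_scale / best_scale
```

For example, `distance_from_scale(2.0, 1.0, 1.0)` returns `0.5`: the robot appears twice as large as in a calibration image taken at 1 m, so it is roughly 0.5 m away.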


Table 1: Model performance metrics computed on the laboratory test set; three replicas per row.


Bibtex

@inproceedings{carlotti2025self,
  title={Self-supervised Learning of Visual Pose Estimation Without Pose Labels by Classifying LED States},
  author={Carlotti, Nicholas and Nava, Mirko and Giusti, Alessandro},
  booktitle={{PMLR} Conference on Robot Learning},
  pages={To appear},
  year={2025},
}

Video

Self-Supervised Learning Of Visual Pose Estimation Without Pose Labels By Classifying LED States
