
Teleop Dataset Recording

Record demonstration datasets by teleoperating a simulated robot with a physical leader arm.

SO101-Nexus includes a teleoperation script at examples/teleop.py that lets you control a simulated robot using a physical SO-100 or SO-101 leader arm. Demonstrations are recorded as LeRobot v3 datasets and can be pushed directly to the Hugging Face Hub.

Hardware requirements

  • A physical SO-100 or SO-101 leader arm connected via USB
  • A Linux machine with the leader arm's serial port accessible

Finding the serial port

If you're on Linux, you may need to make the serial devices writable before running the port finder or launching teleop:

sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1

With LeRobot already installed, you can use its port discovery tool to identify the leader arm's serial port:

lerobot-find-port

This will list connected devices with their port paths (typically /dev/ttyACM0 or similar).
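If you just want a quick list of candidate devices without the discovery tool, you can enumerate the usual Linux serial device nodes yourself. This is a sketch, not part of SO101-Nexus; the exact node your adapter gets depends on your hardware:

```python
import glob

def candidate_leader_ports() -> list[str]:
    """List serial device nodes that USB servo adapters typically get
    on Linux (/dev/ttyACM* and /dev/ttyUSB*)."""
    return sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))

print(candidate_leader_ports())
```

If more than one device appears, unplug the leader arm, run it again, and compare the two lists.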

Launching the teleop interface

uv run --package so101-nexus-mujoco --group teleop \
    python examples/teleop.py \
    --leader-port /dev/ttyACM0

This opens a Gradio web UI in your browser.

CLI arguments

| Argument | Type | Default | Description |
|---|---|---|---|
| --leader-port | str | /dev/ttyACM0 | Serial port of the leader arm |
| --leader-id | str | so101_leader | Device identifier for the leader arm |
| --wrist-roll-offset-deg | float | -90.0 | Wrist roll offset in degrees |
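The table above maps onto a standard argparse setup. The sketch below is illustrative, not the actual parser in examples/teleop.py, but it shows how the names, types, and defaults fit together:

```python
import argparse

# Hypothetical parser mirroring the documented CLI arguments.
parser = argparse.ArgumentParser(description="Teleop dataset recording")
parser.add_argument("--leader-port", type=str, default="/dev/ttyACM0",
                    help="Serial port of the leader arm")
parser.add_argument("--leader-id", type=str, default="so101_leader",
                    help="Device identifier for the leader arm")
parser.add_argument("--wrist-roll-offset-deg", type=float, default=-90.0,
                    help="Wrist roll offset in degrees")

# Override only the port; the other defaults are kept.
args = parser.parse_args(["--leader-port", "/dev/ttyACM1"])
print(args.leader_port, args.leader_id, args.wrist_roll_offset_deg)
```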

Gradio UI configuration

The web interface provides controls for the recording session:

| Setting | Description |
|---|---|
| Environment | Which simulation environment to use |
| Robot type | SO-100 or SO-101 |
| Episodes | Number of episodes to record |
| FPS | Recording frame rate |
| Camera resolution | Resolution of recorded camera frames |
| Action space | joint_pos (absolute) or joint_pos_delta (relative) |
| Countdown | Seconds of countdown before recording starts |
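The two action spaces encode the same motion differently: joint_pos stores the commanded joint vector itself, while joint_pos_delta stores the change relative to the current pose. A minimal sketch of the relationship (illustrative helper names, not code from the script):

```python
def to_delta(current, target):
    """Express an absolute joint-position command as a delta action."""
    return [t - c for c, t in zip(current, target)]

def apply_delta(current, delta):
    """Reconstruct the absolute command from a delta action."""
    return [c + d for c, d in zip(current, delta)]

# 6-DoF joint vectors, matching the arm's six joints.
current = [0.0, 0.1, -0.2, 0.3, 0.0, 0.05]
target  = [0.1, 0.1, -0.1, 0.2, 0.0, 0.0]

delta = to_delta(current, target)
reconstructed = apply_delta(current, delta)
```

Round-tripping a command through the delta representation recovers the absolute target, which is why the two spaces record the same demonstration.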

Session flow

  1. Configure. Set the environment, robot type, number of episodes, FPS, camera resolution, action space, and countdown timer in the Gradio UI.
  2. Record. A countdown plays before each episode begins. Teleoperate the simulated robot by moving the physical leader arm. The simulation mirrors your movements in real time.
  3. Review. After each episode, the UI displays a video replay and diagnostic plots of the recorded trajectory.
  4. Approve or discard. Accept the episode to add it to the dataset, or discard it and re-record.
  5. Push to Hub. Once all episodes are recorded, push the completed dataset to the Hugging Face Hub directly from the UI.
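The record/review/approve cycle above amounts to a loop that re-records an episode until it is accepted. A minimal outline, with record_step and review as illustrative stand-ins for the teleop capture and the Gradio review step:

```python
def record_session(num_episodes, steps_per_episode, record_step, review):
    """Outline of the record/review/approve flow described above.
    record_step(t) returns one (observation, action) sample;
    review(episode) returns True to keep the episode.
    Both callables are hypothetical stand-ins, not the real API."""
    dataset = []
    for _ in range(num_episodes):
        while True:
            episode = [record_step(t) for t in range(steps_per_episode)]
            if review(episode):          # approved: add to the dataset
                dataset.append(episode)
                break
            # discarded: loop back and re-record this episode

    return dataset

# Toy usage: "record" zero-vectors and approve every episode.
data = record_session(
    num_episodes=2,
    steps_per_episode=5,
    record_step=lambda t: ([0.0] * 6, [0.0] * 6),
    review=lambda ep: True,
)
```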

Recorded dataset format

Datasets are saved in the LeRobot v3 format, which includes:

  • observation.state: Joint positions (6D) at each timestep, matching what real robot encoders would report
  • action: Commanded joint positions (or deltas, depending on action space)
  • observation.images.wrist_cam: Camera frames at the configured resolution
  • Episode metadata (environment, robot type, timestamps)

The observation.state field always contains the raw joint positions from the leader arm, not the simulator's internal state vector (which may include privileged information like object positions). This ensures datasets are compatible with sim-to-real transfer and vision-based policy training.

These datasets are compatible with LeRobot training pipelines and can be loaded with the standard LeRobot dataset API.
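Concretely, each timestep carries entries keyed by the field names listed above. The mock frame below is a sketch of that shape in plain Python (real frames hold arrays and video-backed images; the 64x64 resolution here is an arbitrary placeholder):

```python
# One mock timestep, keyed like the LeRobot fields described above.
frame = {
    "observation.state": [0.0] * 6,   # leader-arm joint positions (6-D)
    "action": [0.0] * 6,              # commanded joint positions (or deltas)
    # Placeholder 64x64 RGB image standing in for a wrist-camera frame.
    "observation.images.wrist_cam": [[[0, 0, 0]] * 64] * 64,
    "timestamp": 0.0,
}
```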
