Control Robot in Simulation Environment Using LeRobot

By Danqing Zhang, PathOn.ai Team | May 20, 2025

If you don't have a robot but still want to try out robot learning, you can use simulation environments. You might also be interested in sim2real, where you train the policy controlling the robot in simulation and then deploy the policy on the real robot.

Unfortunately, as of June 11, 2025, Hugging Face has not yet released an official simulation environment for the SO-ARM100 robot arm or its variants. However, based on Discord conversations, it is coming soon.

Getting Started with Robot Simulation

As of June 11, 2025, to get started with robot simulation, I suggest you follow these steps:

1. Train Policies in Existing Environments

Try training a policy in the PushT environment and the two ALOHA environments, using either Diffusion Policy or ACT (Action Chunking with Transformers).
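Before launching training, it is worth checking that the gym wrappers load and step correctly. Below is a minimal sketch, assuming the gym-pusht and gym-aloha packages are installed and that the environment ids and obs_type values match your installed versions; it simply rolls out random actions.

```python
import gymnasium as gym
import gym_pusht  # noqa: F401  -- registers gym_pusht/PushT-v0
import gym_aloha  # noqa: F401  -- registers the two ALOHA tasks

# Swap in "gym_aloha/AlohaInsertion-v0" or "gym_aloha/AlohaTransferCube-v0"
# to sanity-check the ALOHA environments the same way.
env = gym.make("gym_pusht/PushT-v0", obs_type="pixels_agent_pos", render_mode="rgb_array")
obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # random actions, just to exercise the env
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```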

2. Explore ManiSkill Environment

Try out the PickCubeSO100-v1 simulation environment in ManiSkill: ManiSkill PickCubeSO100-v1
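A similar smoke test works for ManiSkill, assuming ManiSkill 3 is installed (pip install mani_skill); the obs_mode and render_mode keyword arguments can vary between releases, so check the ManiSkill docs for your version.

```python
import gymnasium as gym
import mani_skill.envs  # noqa: F401  -- registers ManiSkill environments

env = gym.make("PickCubeSO100-v1", obs_mode="state", render_mode="rgb_array")
obs, info = env.reset(seed=0)
for _ in range(50):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```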

3. Build Your Own Gym Environment

Build your own gym environment using the MuJoCo simulator. For example, I created a gymnasium environment for the SO-ARM100 robot arm with two cubes for a cube stacking task: SO-ARM100 Simulation Environment
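The skeleton below shows the rough structure of such an environment. It is a minimal sketch: the XML path, the flat qpos/qvel observation, and the zero reward are placeholders, and a real cube-stacking environment would add task-specific randomized resets and a stacking reward.

```python
import gymnasium as gym
import mujoco
import numpy as np
from gymnasium import spaces


class SO100StackCubeEnv(gym.Env):
    """Minimal MuJoCo-backed gymnasium env; names and paths are placeholders."""

    def __init__(self, model_path="so_arm100_stack_cubes.xml"):
        self.model = mujoco.MjModel.from_xml_path(model_path)
        self.data = mujoco.MjData(self.model)
        # Simplified: normalized actuator commands and a flat qpos/qvel observation.
        self.action_space = spaces.Box(-1.0, 1.0, shape=(self.model.nu,), dtype=np.float32)
        obs_dim = self.model.nq + self.model.nv
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(obs_dim,), dtype=np.float64)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        mujoco.mj_resetData(self.model, self.data)
        # A real task would randomize the cube poses here.
        return self._get_obs(), {}

    def step(self, action):
        self.data.ctrl[:] = action
        mujoco.mj_step(self.model, self.data)
        reward = 0.0  # placeholder: reward the cube being stacked
        terminated, truncated = False, False
        return self._get_obs(), reward, terminated, truncated, {}

    def _get_obs(self):
        return np.concatenate([self.data.qpos, self.data.qvel])
```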

4. Teleoperate in Simulation

For either the ManiSkill environment or the MuJoCo environment you created, you can teleoperate the simulated robot using a Meta Quest 3; this works even if you don't own the physical robot and are working purely in simulation. If you do have the robot, you can instead use the leader arm to teleoperate the simulated arm.

This way, you can collect demonstration data in simulation, train policies on it, and later deploy them on real robots to test sim2real transfer. For sim2real transfer, you can also train policies in simulation with reinforcement learning and deploy them on real robots, as shown in this Twitter thread: RL Training Demo
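Here is a hypothetical teleoperation-and-recording loop, assuming the SO100StackCubeEnv sketched in step 3. read_operator_action() stands in for whatever device driver you use (a leader arm over serial, or a Meta Quest 3 controller stream); here it just returns zeros so the loop runs.

```python
import numpy as np


def read_operator_action(env):
    # Placeholder: replace with your leader-arm or VR-controller driver.
    return np.zeros(env.action_space.shape, dtype=np.float32)


env = SO100StackCubeEnv()  # the MuJoCo env sketched in step 3
episode = []
obs, _ = env.reset()
for _ in range(500):
    action = read_operator_action(env)
    next_obs, reward, terminated, truncated, _ = env.step(action)
    episode.append({"observation": obs, "action": action, "reward": reward})
    obs = next_obs
    if terminated or truncated:
        obs, _ = env.reset()
# `episode` now holds one demonstration trajectory that you could convert into
# a LeRobot dataset for imitation learning.
```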

5. Real2Sim Transfer

Try applying policies trained on real robots to simulation environments. For example, LeRobot has released trained model weights such as SVLA SO100 Stacking. You can build your own simulation environment for the cube stacking task and run the policy in it to see whether it generalizes from the real world to simulation.
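Below is a sketch of that real2sim check, following the from_pretrained / select_action pattern from LeRobot's pretrained-policy evaluation example. The repo id is only an illustrative ACT checkpoint, not the SVLA SO100 Stacking weights, and the observation keys are placeholders: match both to the checkpoint you actually download and to your simulation's observation layout.

```python
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Illustrative checkpoint; substitute the stacking weights you want to test.
policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
policy.eval()

env = SO100StackCubeEnv()  # your cube-stacking sim from step 3
obs, _ = env.reset()
policy.reset()
done = False
while not done:
    # Pack the sim observation into the key/tensor layout the checkpoint expects
    # (state vector, camera images, ...). The key below is illustrative only.
    batch = {"observation.state": torch.from_numpy(obs).float().unsqueeze(0)}
    with torch.no_grad():
        action = policy.select_action(batch)
    obs, reward, terminated, truncated, _ = env.step(action.squeeze(0).numpy())
    done = terminated or truncated
```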

6. Real2Sim2Real Pipeline

An interesting approach is real2sim2real: generate simulation environments using real robot scenes and task descriptions, train policies in simulation, and then deploy them on real robots. This creates a complete loop between real and simulated environments.