# Sim-to-Real Transfer
One of the core goals of the 3we platform is making sim-to-real transfer reliable and measurable. This guide explains the noise model, domain randomization, and the five validation tests you should pass before deploying a policy on hardware.
## Transfer Philosophy

The 3we simulator is not a pixel-perfect digital twin. Instead, it is calibrated so that policies trained in simulation achieve at least 85% of their simulated performance when deployed on the real robot. We accomplish this through:
- Realistic sensor noise models calibrated from hardware recordings
- Domain randomization over physical parameters
- A standardized set of transfer validation tests
## Noise Model

The simulator injects noise to match real-world sensor characteristics:
| Sensor | Noise Type | Parameters |
|---|---|---|
| LiDAR | Gaussian range noise | sigma = 0.01 m |
| LiDAR | Random dropouts | 2% of rays per scan |
| IMU (accel) | White noise + bias | sigma = 0.02 m/s^2, bias drift = 0.001 m/s^2/s |
| IMU (gyro) | White noise + bias | sigma = 0.005 rad/s, bias drift = 0.0001 rad/s/s |
| Camera | Gaussian pixel noise | sigma = 3.0 (per channel, uint8) |
| Odometry | Slip model | 5% random wheel slip per step |
| Motor response | First-order lag | tau = 50 ms |
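To make the LiDAR rows of the table concrete, here is a minimal NumPy sketch of that noise model: Gaussian range noise (sigma = 0.01 m) plus 2% random ray dropouts. The function name and the choice of NaN for dropped rays are illustrative, not part of the SDK.

```python
import numpy as np

def corrupt_lidar_scan(ranges, sigma=0.01, dropout_prob=0.02, rng=None):
    """Add Gaussian range noise and random ray dropouts to a LiDAR scan.

    Dropped rays are reported as NaN here; a real driver might instead
    report 0 or the sensor's maximum range.
    """
    rng = rng or np.random.default_rng()
    noisy = ranges + rng.normal(0.0, sigma, size=ranges.shape)
    dropped = rng.random(ranges.shape) < dropout_prob
    return np.where(dropped, np.nan, noisy)

scan = np.full(360, 5.0)            # 360 rays, all reading 5 m
noisy = corrupt_lidar_scan(scan)
```

With 360 rays, roughly 7 per scan will be dropped on average, and the surviving ranges stay within a centimeter or two of the true value.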
Enable or disable noise in your environment:
```python
env = gym.make("3we/Navigation-v1", sensor_noise=True)
```

## SensorNoiseModel

The SDK provides a configurable `SensorNoiseModel` you can tune to your specific hardware:
```python
from threewe.sim.noise import SensorNoiseModel

noise = SensorNoiseModel(
    lidar_gaussian_stddev=0.02,   # 2 cm noise on LiDAR
    imu_accel_stddev=0.01,        # m/s^2
    imu_gyro_stddev=0.005,        # rad/s
    camera_gaussian_stddev=5.0,   # pixel intensity noise
    odometry_slip_factor=0.05,    # 5% wheel slip
)
```

## Domain Randomization

During training, randomize these parameters to build robust policies:
```python
from threewe.sim.domain_randomization import DomainRandomization

dr = DomainRandomization()
params = dr.sample()
# {
#     "mass_scale": 1.05,          # 0.9-1.1x robot mass
#     "friction_scale": 0.92,      # 0.8-1.2x floor friction
#     "motor_torque_noise": 0.02,  # Gaussian noise on motor output
#     "gravity_noise": -0.003,     # Slight gravity variation
#     "lighting_intensity": 1.2,   # 0.5-1.5x scene lighting
#     "color_jitter": 0.05,        # Random color shift
#     "lidar_noise_scale": 1.3,    # 0.5-2.0x LiDAR noise
#     "imu_noise_scale": 0.8,      # 0.5-2.0x IMU noise
#     "camera_noise_scale": 1.1,   # 0.5-2.0x camera noise
#     "odometry_slip_scale": 1.4,  # 0.5-2.0x wheel slip
# }
```

## Randomization Categories

Physics Randomization:
- Mass scaling (0.9x to 1.1x) — accounts for payload variations
- Friction scaling (0.8x to 1.2x) — different floor surfaces
- Motor torque noise — actuator imprecision
- Gravity noise — sensor mounting angle variations
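The physics parameters above can be sampled as uniform scale factors plus zero-mean Gaussian perturbations. This is only a sketch of what `DomainRandomization` might do internally; the ranges match the defaults listed above, while the gravity-noise sigma and all names here are assumptions for illustration.

```python
import random

# Uniform ranges taken from the default list above.
PHYSICS_RANGES = {
    "mass_scale": (0.9, 1.1),
    "friction_scale": (0.8, 1.2),
}

def sample_physics(rng=random):
    """Sketch of physics-parameter sampling (not the SDK implementation)."""
    params = {k: rng.uniform(lo, hi) for k, (lo, hi) in PHYSICS_RANGES.items()}
    # Zero-mean Gaussian perturbations; sigmas here are assumed values.
    params["motor_torque_noise"] = rng.gauss(0.0, 0.02)
    params["gravity_noise"] = rng.gauss(0.0, 0.005)
    return params

params = sample_physics()
```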
Visual Randomization:
- Lighting intensity (0.5x to 1.5x) — different times of day
- Texture randomization — varied wall/floor appearances
- Shadow randomization — different light positions
- Color jitter — camera white balance variations
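Lighting intensity and color jitter can be applied directly to camera frames. The sketch below shows one plausible way to do this with NumPy; the function name and the exact jitter formulation (a per-channel shift as a fraction of full intensity) are illustrative assumptions, not the simulator's implementation.

```python
import numpy as np

def jitter_image(img, lighting=1.2, color_jitter=0.05, rng=None):
    """Scale brightness and apply a random per-channel color shift
    to a uint8 RGB image, clipping back to the valid range."""
    rng = rng or np.random.default_rng()
    out = img.astype(np.float32) * lighting
    shift = rng.uniform(-color_jitter, color_jitter, size=3) * 255.0
    out = out + shift  # broadcast the shift across all pixels per channel
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)   # mid-gray test frame
bright = jitter_image(frame)
```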
Sensor Randomization:
- LiDAR noise scaling — accounts for real-world reflectivity variation
- IMU noise scaling — temperature drift, vibration effects
- Camera noise scaling — varying illumination quality
- Odometry slip scaling — wheel-floor contact variation
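Sensor randomization composes with the calibrated noise model: each base sigma from the Noise Model table is multiplied by an independently sampled scale. A minimal sketch, with dictionary keys invented for illustration:

```python
import random

# Calibrated base noise levels from the Noise Model table.
BASE_NOISE = {
    "lidar_sigma_m": 0.01,
    "imu_accel_sigma": 0.02,
    "imu_gyro_sigma": 0.005,
    "camera_sigma": 3.0,
}

def randomized_noise_levels(scale_range=(0.5, 2.0), rng=random):
    """Multiply each calibrated sigma by its own uniform scale factor."""
    return {k: v * rng.uniform(*scale_range) for k, v in BASE_NOISE.items()}

levels = randomized_noise_levels()
```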
## Using DR with Gymnasium Environments

```python
import gymnasium as gym

from threewe.sim.domain_randomization import DomainRandomization

dr = DomainRandomization(seed=42)
env = gym.make("3we/Navigation-v1")

for episode in range(1000):
    params = dr.sample()
    obs, info = env.reset(options={"domain_randomization": params})
    done = False
    while not done:
        action = policy(obs)
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
```

## Custom Randomization Config

Create a YAML file to customize ranges:
```yaml
physics:
  mass_scale_range: [0.85, 1.15]
  friction_scale_range: [0.7, 1.3]
  motor_torque_noise_stddev: 0.08

visual:
  randomize_textures: true
  randomize_lighting: true
  lighting_intensity_range: [0.3, 1.8]
  color_jitter: 0.15

sensor:
  lidar_noise_scale_range: [0.3, 2.5]
  imu_noise_scale_range: [0.5, 2.0]
  camera_noise_scale_range: [0.5, 2.5]
  odometry_slip_scale_range: [0.5, 2.5]
```

## Five Validation Tests

Before deploying to hardware, run the transfer validation suite. The SDK includes a formal validation protocol with five standard transfer tests:
| Test | Metric | Pass Condition | Description |
|---|---|---|---|
| `straight_walk_5m` | Endpoint error (m) | real < sim × 2.0 | Drive 5 m forward, measure position error |
| `rotation_360` | Angle error (deg) | real < 1.0 (absolute) | Rotate 360 degrees, measure final heading error |
| `obstacle_avoidance_3` | Success rate | real > sim × 0.7 | Navigate past 3 obstacles |
| `pointnav_10m` | SPL | real > sim × 0.6 | Navigate 10 m with optimal path ratio |
| `dynamic_obstacle` | Collision rate | real < sim × 1.5 | Navigate with moving obstacles |
## Running the Validation

```sh
# Sim-only validation (CI-friendly, no hardware needed)
threewe test sim2real --backend gazebo --scene office_v2

# Full sim2real comparison (requires hardware)
threewe sim2real report --backend gazebo --scene office_v2 --trials 5
```

## Programmatic Validation
Section titled “Programmatic Validation”import asynciofrom threewe.benchmark.sim2real import Sim2RealValidator
async def validate(): validator = Sim2RealValidator()
report = await validator.validate( sim_backend="gazebo", real_backend="real", num_trials=5, )
print(f"Overall: {'PASS' if report.overall_passed else 'FAIL'}") print(f"Pass rate: {report.pass_rate:.0%}")
for result in report.results: status = "PASS" if result.passed else "FAIL" print(f" [{status}] {result.test_name}: " f"sim={result.sim_value:.3f}, " f"real={result.real_value:.3f}, " f"ratio={result.transfer_ratio:.2f}")
asyncio.run(validate())Transfer Ratios
Section titled “Transfer Ratios”After running both sim and real tests, compute the transfer ratio:
```
transfer_ratio = real_score / sim_score
```

A transfer ratio above 0.85 for success rate indicates your policy is well suited for deployment. Ratios between 0.6 and 0.85 are acceptable for an initial deployment with monitoring. Ratios below 0.6 suggest the need for additional domain randomization or fine-tuning on real data.
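These thresholds are easy to encode as a small decision helper. The function and verdict labels below are illustrative, not part of the SDK:

```python
def deployment_readiness(sim_score, real_score):
    """Classify a policy by its success-rate transfer ratio,
    using the 0.85 / 0.6 thresholds described above."""
    ratio = real_score / sim_score
    if ratio > 0.85:
        return ratio, "deploy"
    if ratio >= 0.6:
        return ratio, "deploy_with_monitoring"
    return ratio, "more_randomization_or_fine_tuning"

# Example: 90% success in sim, 70% on hardware -> ratio ~0.78
ratio, verdict = deployment_readiness(sim_score=0.9, real_score=0.7)
```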
## Tips for Better Transfer

- Train with domain randomization enabled from the start, not as a fine-tuning step.
- Use the action smoothing wrapper (`threewe.wrappers.ActionSmoothing`) to avoid jerky motions that amplify on real hardware.
- Collect 10 minutes of real-world driving data and compare sensor distributions to simulation. Adjust noise parameters if they diverge significantly.
- Prefer policies that use LiDAR over camera input for initial transfer — LiDAR has a lower sim-to-real gap.
- The calibrated motor model uses response curves measured from the actual JGA25-370 motors (150 ms ramp-up time, within 5% steady-state accuracy).
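For intuition about the motor response in the Noise Model table (first-order lag, tau = 50 ms), the sketch below simulates that lag with simple forward-Euler filtering. This is a toy illustration of the general technique, not the simulator's calibrated model.

```python
def first_order_lag(commands, tau=0.05, dt=0.01):
    """Filter a command sequence through a discretized first-order lag:
    y += alpha * (u - y), where alpha = dt / (tau + dt)."""
    alpha = dt / (tau + dt)
    out, y = [], 0.0
    for u in commands:
        y += alpha * (u - y)
        out.append(y)
    return out

# 0.3 s of a unit step command: the output rises smoothly toward 1.0
step_response = first_order_lag([1.0] * 30)
```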