
Build a 2200 RMB ($300) ROS2 Research Robot from Scratch

A complete ROS2 research robot with mecanum omnidirectional drive, 2D LiDAR SLAM, edge AI inference, and a Python API that lets you write your first autonomous behavior in 10 lines of code. Total hardware cost: under 2200 RMB (~$300 USD). No custom PCBs required for the first prototype — everything is off-the-shelf from Taobao or AliExpress.

For context, a TurtleBot4 Lite costs $1,200+. A custom university research platform typically runs $2,000-5,000 by the time you source sensors, compute, and a chassis. The 3we platform achieves comparable sensing and compute capability at a fraction of the cost, with the added advantage of a Python-first SDK that eliminates the ROS2 learning curve.


Component        Part                                     Approx. Cost (RMB)
Compute          Raspberry Pi 5 (8GB) + Hailo-8L AI Hat   650
Microcontroller  ESP32-S3-DevKitC-1                       35
LiDAR            LD06 360-degree 2D LiDAR                 100
IMU              BNO055 9-DOF IMU                         45
Camera           160-degree Fisheye USB Camera (1080p)    80
Motors           4x JGA25-370 DC Motors with Encoders     120
Motor Driver     2x DRV8833 Dual H-Bridge                 20
Wheels           4x 65mm Mecanum Wheels                   160
Battery          3S 2200mAh LiPo + BMS                    120
Chassis          200x200mm Aluminum Plate + Standoffs     80
Power            5V/5A Buck Converter (for Pi 5)          25
Safety           Emergency Stop Button + Relay            30
Wiring           Connectors, cables, crimps               50
Fasteners        M3/M4 bolts, nuts, spacers               30
Total                                                     1545 RMB (~$215)

With the optional upgrades below, the full research configuration reaches ~2200 RMB:

Optional Upgrade      Part                          Cost (RMB)
Depth sensing         Intel RealSense D405 (used)   350
Better compute        Pi 5 (16GB) instead of 8GB    +100
Charging dock         Custom dock PCB + contacts    200
Total with upgrades                                 2200 RMB (~$300)

The Pi 5 runs ROS2 Jazzy natively at roughly 1.5x the performance of a Pi 4. The Hailo-8L AI accelerator (13 TOPS) plugs in via an M.2 HAT and runs YOLO, MobileNet, and custom VLA models at 30+ FPS without touching the CPU. This combination gives you:

  • Full ROS2 stack (Nav2, SLAM, perception)
  • 13 TOPS neural inference for on-device AI
  • USB3 for cameras, Ethernet for remote development
  • Under 8W total system power

The ESP32-S3 handles real-time motor control via micro-ROS, communicating with the Pi over USB-CDC at 1000Hz. It runs:

  • PID velocity control for all 4 motors
  • Encoder counting (4 channels, quadrature)
  • IMU fusion at 100Hz
  • Safety watchdog (stops motors if communication drops)

This architecture separates hard real-time (motor control) from soft real-time (navigation, perception), which is critical for reliable robot operation.
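
For a sense of what the hard real-time side does, here is the control law sketched in Python for readability. The actual firmware is C (ESP-IDF); the class and function names below are illustrative, not the firmware's API:

class WheelPID:
    """Per-wheel PID velocity controller, as described above."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target: float, measured: float, dt: float) -> float:
        # dt is the fixed control period (1 ms at the 1000Hz loop rate)
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

WATCHDOG_TIMEOUT = 0.2  # 200ms, matching the safety watchdog above

def control_step(pids, targets, measured, last_cmd_time, now, dt):
    # Watchdog: if no velocity command arrived recently, stop all motors
    if now - last_cmd_time > WATCHDOG_TIMEOUT:
        return [0.0] * len(pids)
    return [p.update(t, m, dt) for p, t, m in zip(pids, targets, measured)]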

The LD06 is a 100 RMB (~$14) 360-degree 2D LiDAR with 12m range and 4500 samples/sec. It interfaces via UART and works directly with the ROS2 ldlidar package. For SLAM and obstacle avoidance in indoor environments, it performs comparably to LiDARs costing 10x more.
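
As a quick sanity check once the robot is up, you can pull one scan through the SDK and locate the nearest return (a sketch assuming, as in the API example below, that get_lidar_scan() returns 360 range values, one per degree):

import numpy as np
from threewe import Robot

async def nearest_obstacle():
    async with Robot(backend="real") as robot:
        scan = np.asarray(robot.get_lidar_scan(), dtype=float)  # 360 ranges (m)
        nearest = float(np.nanmin(scan))    # some drivers mark no-return as NaN
        bearing = int(np.nanargmin(scan))   # index maps to bearing in degrees
        print(f"Closest obstacle: {nearest:.2f} m at {bearing} deg")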

Mecanum wheels give the platform omnidirectional movement (forward, sideways, diagonal, rotation in place) without a complex steering mechanism. This is essential for:

  • Tight indoor navigation (corridors, under desks)
  • Precise docking maneuvers
  • Holonomic motion planning (simplifies Nav2 configuration)
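
Under the hood this is simple: mecanum inverse kinematics are a linear map from a body velocity command (vx, vy, wz) to four wheel speeds. A minimal sketch, with placeholder geometry values rather than this robot's measured dimensions:

import numpy as np

# Placeholder geometry for illustration; measure your own build.
R = 0.0325            # wheel radius in meters (65mm mecanum wheels)
LX, LY = 0.08, 0.08   # half wheelbase / half track width in meters

def mecanum_wheel_speeds(vx: float, vy: float, wz: float) -> np.ndarray:
    """Body velocities (m/s, rad/s) -> wheel angular velocities (rad/s).

    Order: front-left, front-right, back-left, back-right, for the
    standard X roller pattern shown in the assembly section below.
    """
    k = LX + LY
    return np.array([
        vx - vy - k * wz,   # FL
        vx + vy + k * wz,   # FR
        vx + vy - k * wz,   # BL
        vx - vy + k * wz,   # BR
    ]) / R

# Pure sideways motion: wheels alternate direction, which is why the
# diagonal roller orientation matters.
print(mecanum_wheel_speeds(0.0, 0.2, 0.0))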

┌─────────────────────────────────────────────────────────┐
│ Your Python Code                                        │
│   from threewe import Robot                             │
│   await robot.move_to(x=2.0, y=1.5)                     │
├─────────────────────────────────────────────────────────┤
│ threewe Python SDK                                      │
│   Robot, VLMRunner, VLARunner, Gymnasium Envs           │
├─────────────────────────────────────────────────────────┤
│ Backend Abstraction Layer                               │
│   GazeboBackend │ IsaacSimBackend │ RealBackend         │
├─────────────────────────────────────────────────────────┤
│ ROS2 Jazzy                                              │
│   Nav2 │ SLAM Toolbox │ robot_perception │ micro-ROS    │
├──────────────────────┬──────────────────────────────────┤
│ Raspberry Pi 5       │ ESP32-S3                         │
│   + Hailo-8L         │   Motor PID + Encoders           │
│   Camera, LiDAR      │   IMU, Safety Watchdog           │
└──────────────────────┴──────────────────────────────────┘

The key insight: you never need to learn ROS2. The threewe SDK exposes everything through Python:

import asyncio
from threewe import Robot

async def main():
    async with Robot(backend="real") as robot:
        # Get sensor data
        image = robot.get_camera_image()   # (480, 640, 3) uint8
        scan = robot.get_lidar_scan()      # 360 range measurements
        pose = robot.get_pose()            # x, y, theta in map frame

        # Navigate
        await robot.move_to(x=2.0, y=1.5)  # Uses Nav2 path planning
        await robot.move_forward(1.0)      # Drive 1 meter forward
        await robot.rotate(1.57)           # Rotate 90 degrees

        # AI inference (runs on Hailo-8L)
        result = await robot.execute_instruction("find the door")

asyncio.run(main())

Under the hood, RealBackend translates these calls to ROS2 service calls, action clients, and topic subscriptions. But you never see that complexity.
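
For the curious, here is a simplified sketch of the kind of translation involved, using rclpy and Nav2's NavigateToPose action. This is illustrative of the mechanism, not RealBackend's actual code:

import rclpy
from rclpy.action import ActionClient
from rclpy.node import Node
from nav2_msgs.action import NavigateToPose

class Nav2MoveTo(Node):
    """What a move_to(x, y) call roughly expands to on the ROS2 side."""
    def __init__(self):
        super().__init__("move_to_client")
        self._client = ActionClient(self, NavigateToPose, "navigate_to_pose")

    def send_goal(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        goal.pose.header.frame_id = "map"
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0  # identity heading for the sketch
        self._client.wait_for_server()
        return self._client.send_goal_async(goal)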


You do not need to buy any hardware to start developing:

pip install threewe[sim]
threewe launch --backend gazebo --scene office_v2

This launches a Gazebo simulation with the 3we robot model in an office environment. Your Python code works identically:

async with Robot(backend="gazebo") as robot:
await robot.move_to(x=2.0, y=1.5)
image = robot.get_camera_image()

When you are ready to move to hardware, change backend="gazebo" to backend="real". That is the only modification needed.
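
One convenient pattern (not required by the SDK) is to read the backend from an environment variable, so the same script runs unmodified in both places:

import asyncio
import os
from threewe import Robot

async def main():
    # THREEWE_BACKEND is a name chosen for this example, not an SDK convention
    backend = os.environ.get("THREEWE_BACKEND", "gazebo")
    async with Robot(backend=backend) as robot:
        await robot.move_to(x=2.0, y=1.5)

asyncio.run(main())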


The full assembly guide is in docs/assembly_guide.md. Here are the key steps:

Cut or order the 200x200mm aluminum plate. Mount the 4 motors with L-brackets. Attach mecanum wheels (pay attention to the diagonal pattern — wrong orientation breaks omnidirectional movement).

┌────────────────┐
│ FL ╲      ╱ FR │   FL/BR: Left-handed rollers
│      ╲  ╱      │   FR/BL: Right-handed rollers
│      ╱  ╲      │
│ BL ╱      ╲ BR │   Viewed from above
└────────────────┘
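
Once the wheels are on, one way to validate the roller orientation is to command a purely lateral goal and watch the motion. A sketch reusing only SDK calls shown earlier; it assumes get_pose() unpacks to (x, y, theta) and that Nav2 is configured for holonomic motion as described above:

import asyncio
from threewe import Robot

async def strafe_check():
    async with Robot(backend="real") as robot:
        x, y, theta = robot.get_pose()       # assumed (x, y, theta) tuple
        await robot.move_to(x=x, y=y + 0.3)  # goal 30cm to the robot's side
        # Correct X pattern: a clean sideways strafe.
        # Mirrored wheels: the base crabs diagonally or spins instead.

asyncio.run(strafe_check())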

Mount components on standoffs in layers:

  • Bottom: Battery + BMS + buck converter
  • Middle: ESP32-S3 + motor drivers
  • Top: Raspberry Pi 5 + Hailo-8L hat + LiDAR

Key connections:

From           To             Interface
Pi 5           ESP32-S3       USB-C (micro-ROS)
ESP32-S3       DRV8833 x2     GPIO (PWM + DIR)
DRV8833        Motors         Direct wire
Pi 5           LD06 LiDAR     USB-UART
Pi 5           Camera         USB3
ESP32-S3       BNO055         I2C
E-Stop         Safety Relay   NC contact
Safety Relay   Motor power    Inline

# On Raspberry Pi 5 (Ubuntu 24.04 with ROS2 Jazzy pre-installed)
git clone https://github.com/telleroutlook/3we-robot-platform.git
cd 3we-robot-platform
# Install ROS2 packages
cd ros2_ws && colcon build --symlink-install
source install/setup.bash
# Flash ESP32-S3 firmware
cd firmware/esp32
idf.py set-target esp32s3
idf.py build && idf.py flash
# Install Python SDK
pip install threewe
# Launch everything
ros2 launch robot_bringup robot.launch.py

With the stack running, a first motion test:

import asyncio
from threewe import Robot

async def main():
    async with Robot(backend="real") as robot:
        # Simple motion test
        await robot.move_forward(0.5)  # Move 50cm forward
        await robot.rotate(3.14)       # Turn around
        await robot.move_forward(0.5)  # Come back

asyncio.run(main())

Feature              3we Platform                TurtleBot4 Lite      Custom Build
Cost                 ~$300                       $1,200+              $2,000-5,000
Drive type           Mecanum (omnidirectional)   Differential         Varies
Edge AI              Hailo-8L (13 TOPS)          None standard        Add-on
LiDAR                LD06 (360-deg)              RPLiDAR (360-deg)    Varies
Camera               Fisheye 1080p               OAK-D Lite           Varies
Python API           threewe SDK                 Custom               Custom
Sim2Real             Built-in (Gazebo/Isaac)     Separate setup       Manual
SLAM                 SLAM Toolbox                SLAM Toolbox         Manual
Navigation           Nav2 (pre-configured)       Nav2                 Manual
VLM/VLA support      Built-in                    None                 Manual
Gymnasium envs       Included                    None                 Manual
Open hardware        Full (CERN-OHL-P)           Proprietary          Varies
Reproducibility      High (BOM, guides)          Buy pre-built        Low
Time to first demo   2-3 hours                   30 min (pre-built)   Days-weeks

The key differentiator is not any single spec — it is the integrated experience. A Python researcher can go from pip install threewe[sim] to training RL agents and deploying on hardware without ever writing a ROS2 node, configuring Nav2, or debugging TF trees.


All components are available from mainstream Chinese electronics retailers and AliExpress for international buyers:

Part             Search Term (Taobao)     Typical Store
Pi 5             "树莓派5 8GB"            Official Pi store
Hailo-8L         "Hailo-8L M.2 AI Hat"    Waveshare
ESP32-S3         "ESP32-S3-DevKitC-1"     Espressif store
LD06 LiDAR       "LD06 激光雷达"          LDRobot store
BNO055           "BNO055 九轴"            Various
JGA25-370        "JGA25-370 编码器电机"   Various
Mecanum wheels   "65mm 麦克纳姆轮"        Various
DRV8833          "DRV8833 电机驱动"       Various

On AliExpress, search the English part names instead. Most items ship worldwide in 2-3 weeks. The LD06 LiDAR and mecanum wheels are particularly easy to find.

The first-generation platform uses zero custom PCBs. Everything connects with standard DuPont wires, USB cables, and screw terminals. The custom charging dock PCB is optional and only needed for autonomous long-duration experiments.


async with Robot(backend="real") as robot:
# SLAM is running automatically
occupancy_map = robot.get_map()
# Navigate to any reachable point
result = await robot.move_to(x=5.0, y=3.0)
print(f"Arrived: {result.success}, distance: {result.distance:.2f}m")
async with Robot(backend="real") as robot:
result = await robot.execute_instruction(
"go to the kitchen and find the coffee mug"
)

Reinforcement learning through standard Gymnasium environments:

import gymnasium as gym
import threewe.gym  # Registers environments

env = gym.make("3we/Navigation-v1", render_mode="rgb_array")
obs, info = env.reset()
for _ in range(1000):
    action = your_policy(obs)  # your_policy: any callable mapping obs -> action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

Running a pretrained VLA policy in closed loop:

from threewe.ai.vla_runner import VLARunner

vla = VLARunner.from_pretrained("lerobot/act_3we_nav")

async with Robot(backend="real") as robot:
    while True:
        obs = robot.get_observation(modalities=["image", "lidar", "velocity"])
        action = vla.predict(obs, instruction="patrol the office")
        robot.execute_action(action)

# Run 100 episodes of point-goal navigation
threewe benchmark run --task pointnav --episodes 100 --backend gazebo
# Compare against baselines
threewe benchmark compare --result results.json --baseline random_walk
# Submit to community leaderboard
threewe benchmark submit --result results.json

The platform supports multiple robots in simulation. Each robot instance connects independently:

async def multi_robot():
    async with Robot(backend="gazebo", config="robot_1") as r1:
        async with Robot(backend="gazebo", config="robot_2") as r2:
            # Coordinate two robots concurrently
            await asyncio.gather(
                r1.move_to(x=2.0, y=0.0),
                r2.move_to(x=-2.0, y=0.0),
            )

The platform implements hardware-level safety that cannot be bypassed by software:

  1. Physical E-Stop: A big red mushroom button that cuts motor power through a normally-closed relay. Pressing it immediately removes power from all motor drivers regardless of what the software is doing.

  2. Communication Watchdog: The ESP32-S3 firmware has a 200ms watchdog. If it does not receive a velocity command for 200ms, motors stop automatically. This protects against Pi crashes, network drops, or hung processes.

  3. Velocity Limits: Hard-coded in both firmware and SDK. Maximum linear velocity is 0.5 m/s, maximum angular velocity is 1.0 rad/s. These cannot be overridden without reflashing the firmware.

  4. Safety Distance: The SDK enforces a 15cm safety buffer — if LiDAR detects an obstacle within 15cm, motion commands are rejected with a SafetyError.
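
In practice that means motion calls can raise, so scripts should handle the rejection. A minimal sketch, assuming SafetyError is importable from the top-level threewe package (check the SDK docs for the exact path):

from threewe import Robot, SafetyError  # import path assumed

async def cautious_move(robot: Robot, x: float, y: float) -> bool:
    try:
        await robot.move_to(x=x, y=y)
        return True
    except SafetyError:
        # An obstacle is inside the 15cm buffer; stop here and let a
        # higher-level behavior decide whether to replan or wait.
        return False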


The modular design means you can start minimal and upgrade incrementally:

Stage     Hardware                        Capability
Stage 1   Pi 5 + ESP32 + Motors + LiDAR   Navigation, SLAM
Stage 2   + Camera                        Visual perception, VLM control
Stage 3   + Hailo-8L                      On-device AI, real-time detection
Stage 4   + RealSense                     3D mapping, depth-based avoidance
Stage 5   + Charging dock                 Autonomous long-duration experiments

Each stage requires no changes to existing code. The SDK detects available hardware and adapts.


# 1. Try in simulation first (no hardware needed)
pip install threewe[sim]
threewe launch --backend gazebo --scene office_v2
# 2. Write your first autonomous behavior
python -c "
import asyncio
from threewe import Robot

async def main():
    async with Robot(backend='gazebo') as robot:
        await robot.move_to(x=2.0, y=1.5)
        print('Arrived!')

asyncio.run(main())
"
# 3. When ready, build hardware (see docs/assembly_guide.md)
# 4. Change backend="gazebo" to backend="real"
# 5. Run the same code on your physical robot