Dev Log #1: Why These Parts, Why This Architecture

This is not a guide. This is a journal entry about the decisions that shaped the 3we platform — what we chose, what we rejected, and what broke along the way. If you’re evaluating whether this project is real engineering or vapor, this post is for you.


Most “open-source robot” projects fall into two categories:

  1. Toy demos — Arduino car with ultrasonic sensor, no path planning, no SLAM, useless for research
  2. Academic one-offs — Custom hardware that costs $5,000+, runs unmaintained ROS1 code, and requires a PhD student to operate

We wanted a third option: a robot that costs less than a good monitor, runs modern ROS2, and has a Python API that ML researchers can use without ever learning colcon build. That’s what shaped every decision below.


| Option | Why we rejected it |
| --- | --- |
| STM32 only | Great for motor control, but no WiFi/BLE built in. Adding wireless means another chip, more PCB complexity, more failure modes. |
| Jetson Nano/Orin | 4× the cost of a Pi 5. The GPU is overkill for 2D navigation — we use the Hailo-8L for inference, which is cheaper and lower power. Jetson also means NVIDIA SDK lock-in. |
| Single SoC (Pi 5 does everything) | The Pi 5 runs Linux. Linux is not real-time. Motor PID at 1000 Hz on a Linux scheduler = jitter = bad odometry = SLAM drift. |
| ESP32 (original, not S3) | No USB-OTG. We need USB-CDC for micro-ROS transport. Also less RAM and no vector instructions. |
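The Linux-jitter point is easy to check for yourself. This short sketch times a 1 kHz loop under a stock (non-real-time) scheduler and reports the worst deadline overshoot; function name and parameters are ours, not part of the platform:

```python
import time

def worst_case_jitter(period_s: float = 0.001, iters: int = 1000) -> float:
    """Run a fixed-rate loop at 1/period_s Hz and return the worst
    observed deadline overshoot in seconds."""
    worst = 0.0
    deadline = time.perf_counter()
    for _ in range(iters):
        deadline += period_s
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # the scheduler is free to oversleep here
        worst = max(worst, time.perf_counter() - deadline)
    return worst

if __name__ == "__main__":
    print(f"worst jitter over 1 s at 1 kHz: {worst_case_jitter() * 1e6:.0f} us")
```

On a typical desktop this reports tens to hundreds of microseconds, and it spikes badly under load; a 1 kHz PID loop wants bounds the ESP32-S3's hardware timers give for free.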

ESP32-S3 handles:

  • Motor PID at 1kHz (4 channels, quadrature encoders)
  • IMU fusion at 100Hz (BNO055 over I2C)
  • Safety watchdog (kills motors if no heartbeat from Pi in 200ms)
  • micro-ROS agent over USB-CDC

Raspberry Pi 5 handles:

  • ROS2 Jazzy (Nav2, SLAM Toolbox, perception)
  • Hailo-8L inference (13 TOPS, M.2 HAT)
  • Python SDK server
  • WiFi AP for development

This split is not novel — it’s the same architecture as commercial AMRs (ABB, MiR, etc.). The contribution is making it reproducible at $300.
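As a concrete example of what lives on the MCU side of the split, the safety watchdog can be modeled like this. The real firmware is C on the ESP32-S3; this Python sketch and its names are purely illustrative:

```python
import time

class MotorWatchdog:
    """Illustrative model of the heartbeat watchdog: if no heartbeat
    arrives from the Pi within 200 ms, motor output is disabled."""

    TIMEOUT_S = 0.200  # matches the 200 ms heartbeat deadline

    def __init__(self):
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Called whenever a heartbeat message arrives over USB-CDC."""
        self.last_beat = time.monotonic()

    def motors_enabled(self) -> bool:
        """Checked by the 1 kHz PID loop before driving the motors."""
        return (time.monotonic() - self.last_beat) < self.TIMEOUT_S
```

The key property is that the check runs on the MCU, inside the control loop, so a hung Linux process can never leave the motors spinning.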

Why the ESP32-S3 specifically? Honestly, because it’s $4 on Taobao, has excellent documentation, and the ESP-IDF + micro-ROS toolchain actually works without fighting CMake for three days. We tried STM32 first; the HAL library and CubeMX code-generation experience was painful enough that we switched after two weeks.


Differential drive is simpler and cheaper. We chose mecanum anyway because:

  1. Holonomic motion — the robot can strafe sideways. This makes docking trivial (approach perpendicular, slide in) and indoor navigation much cleaner (no 3-point turns in corridors).

  2. Nav2 compatibility — Nav2’s omnidirectional motion model works directly with mecanum. Differential drive needs a different controller configuration and produces worse paths in tight spaces.

  3. Research utility — omnidirectional platforms are more interesting for RL research because the action space is richer (3-DOF: vx, vy, omega vs 2-DOF: v, omega).
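The richer action space maps to four wheel speeds through the standard mecanum inverse kinematics. Sign conventions depend on roller orientation; this sketch assumes the common X configuration, and the geometry values are placeholders, not the real robot's dimensions:

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.10, ly=0.10, r=0.04):
    """Map a body twist (vx, vy in m/s, wz in rad/s) to wheel angular
    velocities (rad/s), ordered front-left, front-right, rear-left,
    rear-right. lx/ly are half the wheelbase/track, r the wheel radius."""
    k = lx + ly
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )
```

Pure forward motion drives all four wheels equally; a pure strafe spins diagonal pairs in opposite directions, which is exactly the regime where roller slip hurts odometry.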

The tradeoff: mecanum wheels have terrible traction on carpet and uneven floors. The rollers slip. Our odometry on carpet drifts ~5% over 10 meters without LiDAR correction. On hard floors it’s under 1%.

We accepted this because the primary use case is indoor labs and offices (hard floors). If you need outdoor capability, swap to differential drive — the SDK API doesn’t change, only the firmware kinematic model.


Researchers want to mount custom hardware: gripper arms, additional cameras, soil sensors, air quality monitors. The obvious solution is “just connect to GPIO” — but that creates three problems:

  1. No isolation — a short in your payload kills the robot’s compute
  2. No discovery — how does the SDK know what payload is attached?
  3. No power management — you can’t hot-plug without inrush protection

Our solution: a 34-pin connector (2×17, 2.54 mm pitch, cheap and available everywhere) with:

  • Separate power rails (5V/5A, 12V/3A, VBAT/10A) each with P-MOSFET high-side switches
  • I2C, UART, SPI, CAN, USB, GPIO — pick what you need
  • EEPROM-based discovery (24C02, I2C address 0x50) — plug in, robot reads descriptor, SDK exposes payload API
  • Hardware overcurrent protection (per-rail shutdown in <1ms)
  • DETECT pin with physical contact — no software-only detection
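To make the discovery flow concrete, here is a sketch of descriptor encoding and parsing. The on-EEPROM layout below (magic, version, payload ID, name, XOR checksum) is entirely made up for illustration; the post does not specify the real format:

```python
import struct

DESCRIPTOR_LEN = 22  # hypothetical: 2B magic + 1B version + 2B id + 16B name + 1B checksum

def make_descriptor(version: int, payload_id: int, name: str) -> bytes:
    """Build a hypothetical descriptor image to burn into the 24C02."""
    body = struct.pack("<2sBH", b"3W", version, payload_id)
    body += name.encode("ascii").ljust(16, b"\x00")
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

def parse_descriptor(raw: bytes) -> dict:
    """Validate and decode a descriptor read over I2C from address 0x50."""
    magic, version, payload_id = struct.unpack_from("<2sBH", raw, 0)
    if magic != b"3W":
        raise ValueError("not a payload descriptor")
    checksum = 0
    for b in raw[:21]:
        checksum ^= b
    if checksum != raw[21]:
        raise ValueError("descriptor checksum mismatch")
    return {
        "version": version,
        "payload_id": payload_id,
        "name": raw[5:21].rstrip(b"\x00").decode("ascii"),
    }
```

A 22-byte descriptor fits comfortably in the 24C02's 256 bytes, leaving room for per-payload calibration data.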

Rev A: We used N-channel MOSFETs as low-side switches. Problem: when the MOSFET is off, the payload’s ground floats. Any capacitance between the payload and robot chassis creates noise that crashes the I2C bus. We spent two weeks debugging “random I2C errors.”

Fix: Switched to P-channel high-side switches (Si2301CDS for 5V, AO3401A for 12V). Ground is always connected. No more floating ground problems.

Rev B: EEPROM detection worked, but we forgot to add pull-ups on the DETECT pin on the robot-side PCB. If a payload didn’t have the DETECT resistor (early prototypes), the pin floated and the firmware thought a payload was perpetually connecting/disconnecting.

Fix: Added 10k pull-up on DETECT, changed firmware to require stable LOW for 50ms before triggering discovery.
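The debounce fix is simple state tracking. The real implementation is C firmware on the ESP32-S3; this Python model of the same logic is illustrative:

```python
class DetectDebouncer:
    """Require DETECT to read LOW continuously for 50 ms before
    declaring a payload present (models the Rev B firmware fix)."""

    STABLE_MS = 50

    def __init__(self):
        self.low_since = None   # timestamp when DETECT first went LOW
        self.present = False

    def update(self, pin_is_low: bool, now_ms: int) -> bool:
        """Called periodically with the current pin state; returns
        whether a payload is considered attached."""
        if not pin_is_low:
            self.low_since = None   # any HIGH reading resets the timer
            self.present = False
        elif self.low_since is None:
            self.low_since = now_ms
        elif now_ms - self.low_since >= self.STABLE_MS:
            self.present = True     # stable LOW long enough: trigger discovery
        return self.present
```

With the 10k pull-up, a disconnected pin reads a solid HIGH instead of floating, so the timer only ever starts on real contact.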

These are the boring, real problems that don’t appear in README files. They’re also the reason we have confidence the design works — because we broke it, diagnosed it, and fixed it.


| Component | Status | What’s missing |
| --- | --- | --- |
| Mock backend | Working ✅ | 309 tests pass, navigation demo runs |
| Firmware | Working | Tested on bench with motors, needs field hours |
| ROS2 stack | Working | Nav2, SLAM, perception all run in Gazebo |
| Real hardware backend | Partially working | ROS2 bridge works, needs integration testing with full SDK |
| PyPI publishing | Not done | Package works from source, not yet on PyPI |
| Isaac Sim backend | Stubbed | Interface defined, not tested with real Isaac Sim |
| Gazebo backend | Partially working | Works in isolation, not fully tested through SDK |
| Hardware validation | In progress | Charging dock PCB untested, main board on rev C |

The honest summary: the software stack runs end-to-end in mock mode, and the firmware runs on real hardware, but the full Sim2Real pipeline (SDK → ROS2 → firmware → motors) has not been validated as a complete chain yet. That’s the next milestone.


You don’t need to take our word for any of this. Clone, install, run:

git clone https://github.com/telleroutlook/3we-robot-platform.git
cd 3we-robot-platform
pip install -e sdk/threewe/
python examples/navigate_office.py

The mock backend simulates a robot navigating through an office with collision detection and LiDAR raycasting — all in pure Python, zero dependencies beyond numpy.
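For a flavor of what a pure-Python LiDAR simulation looks like, here is a minimal grid raycaster in the same spirit. It is illustrative only; the mock backend's actual internals may differ, and every name here is ours:

```python
import numpy as np

def lidar_scan(grid, origin, angles, resolution=0.05, max_range=5.0, step=0.02):
    """March rays through a boolean occupancy grid (True = occupied).
    origin is (x, y) in meters; returns one range (meters) per angle,
    capped at max_range."""
    ox, oy = origin
    ranges = np.full(len(angles), max_range, dtype=float)
    for i, a in enumerate(angles):
        dx, dy = np.cos(a), np.sin(a)
        r = step
        while r < max_range:
            cx = int((ox + r * dx) / resolution)
            cy = int((oy + r * dy) / resolution)
            inside = 0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]
            if not inside or grid[cy, cx]:
                ranges[i] = r  # hit a wall, or left the map
                break
            r += step
    return ranges
```

Fixed-step marching is the simplest approach; a real simulator would typically use a DDA/Bresenham grid traversal for speed and exactness.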



  1. PyPI publish — pip install threewe should just work
  2. Full Sim2Real chain test — SDK → Gazebo, record video proof
  3. Real hardware video — motors spinning, LiDAR scanning, Nav2 navigating
  4. Community feedback — what do ML researchers actually want from the API?

If you have opinions on #4, open an issue or join the discussion at github.com/telleroutlook/3we-robot-platform.