Perception

The perception module provides access to all on-board sensor data through a consistent interface. Whether you are running in simulation or on hardware, the same methods return data in identical formats, making sim-to-real transfer straightforward.

The 3we reference platform includes:

  • USB Camera - 640x480 RGB at 30 fps
  • LD06 LiDAR - 360-degree 2D scan, 12 m range, 10 Hz
  • BNO055 IMU - 9-axis (accelerometer, gyroscope, magnetometer) at 100 Hz
  • OAK-D Lite (optional) - Stereo depth camera for RGB-D

Capture a single frame from the RGB camera.

image = robot.get_camera_image()
# Returns: numpy.ndarray with shape (480, 640, 3), dtype uint8, BGR color order

Parameters:

Parameter    Type   Default   Description
camera_id    int    0         Camera index if multiple cameras are attached.
encoding     str    "bgr8"    Color encoding. Options: "bgr8", "rgb8", "gray".
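Since frames default to BGR channel order, converting to RGB (e.g., for matplotlib or most ML pipelines) is a one-line channel flip in NumPy. A minimal sketch, using a synthetic array as a stand-in for a captured frame:

```python
import numpy as np

# Synthetic stand-in for a frame from get_camera_image(): (480, 640, 3), uint8, BGR.
frame_bgr = np.zeros((480, 640, 3), dtype=np.uint8)
frame_bgr[..., 0] = 255  # pure blue in BGR order (channel 0)

# Reverse the channel axis to convert BGR -> RGB without copying row data.
frame_rgb = frame_bgr[..., ::-1]
```

After the flip, the blue value sits in the last channel, as RGB consumers expect.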

Retrieve the latest 2D LiDAR scan.

scan = robot.get_lidar_scan()
# Returns: LaserScan(ranges, angles, timestamp)
# ranges: numpy.ndarray of float32, shape (N,), in meters
# angles: numpy.ndarray of float32, shape (N,), in radians

Parameters:

Parameter    Type    Default   Description
max_range    float   12.0      Discard points beyond this distance (meters).
min_range    float   0.05      Discard points closer than this distance (meters).
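A common first step with a scan is converting the polar (range, angle) pairs to Cartesian points in the sensor frame and finding the nearest return. A sketch using synthetic arrays shaped like the LaserScan fields above (the min/max gating mirrors the defaults in the table, though the driver already applies it for you):

```python
import numpy as np

# Synthetic scan: 360 points over a full revolution, mimicking LaserScan fields.
angles = np.linspace(-np.pi, np.pi, 360, endpoint=False).astype(np.float32)
ranges = np.full(360, 4.0, dtype=np.float32)
ranges[90] = 0.8  # one close obstacle

# Range gating, same bounds as the min_range/max_range defaults.
valid = (ranges > 0.05) & (ranges < 12.0)

# Polar -> Cartesian in the sensor frame (x forward, y left).
xs = ranges[valid] * np.cos(angles[valid])
ys = ranges[valid] * np.sin(angles[valid])

nearest = ranges[valid].min()
```

With real data, replace the synthetic arrays with `scan.ranges` and `scan.angles`.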

Read the current IMU state.

imu = robot.get_imu()
# Returns: IMUReading(linear_acceleration, angular_velocity, orientation, timestamp)

All vectors are returned as numpy.ndarray of shape (3,) in the body frame. Orientation is a quaternion (x, y, z, w).
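For planar navigation you usually only need the heading. Extracting yaw from an (x, y, z, w) quaternion is a standard formula; a self-contained sketch:

```python
import numpy as np

def yaw_from_quaternion(q):
    """Heading (yaw, radians) about +z from an (x, y, z, w) quaternion."""
    x, y, z, w = q
    return np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# Quaternion for a 90-degree rotation about +z.
q = np.array([0.0, 0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)])
yaw = yaw_from_quaternion(q)  # ~ pi/2
```

With a live reading, pass `imu.orientation` directly, since it uses the same (x, y, z, w) ordering.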

Capture an aligned RGB-D frame (requires depth camera).

color, depth = robot.get_rgbd_image()
# color: numpy.ndarray, shape (H, W, 3), uint8
# depth: numpy.ndarray, shape (H, W), float32, meters

Parameters:

Parameter    Type    Default   Description
align        bool    True      Align depth to color frame.
max_depth    float   5.0       Clip depth values beyond this range (meters).
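An aligned depth map can be back-projected into a point cloud with the pinhole model. The intrinsics below (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not values from this API; use your camera's calibration. A sketch on a synthetic depth map:

```python
import numpy as np

# Assumed pinhole intrinsics for illustration only -- substitute your calibration.
fx = fy = 450.0
cx, cy = 320.0, 240.0

# Synthetic depth map (float32, meters): a flat wall at 2 m.
H, W = 480, 640
depth = np.full((H, W), 2.0, dtype=np.float32)

# Back-project every valid pixel to a 3D point in the camera frame.
us, vs = np.meshgrid(np.arange(W), np.arange(H))
valid = (depth > 0) & (depth < 5.0)  # mirror the max_depth clip
z = depth[valid]
x = (us[valid] - cx) * z / fx
y = (vs[valid] - cy) * z / fy
points = np.stack([x, y, z], axis=-1)  # shape (M, 3)
```

With real data, substitute the `depth` array returned by `get_rgbd_image()`; the `valid` mask then also drops pixels with no depth return.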

For real-time pipelines, use callbacks instead of polling:

def on_scan(scan):
    print(f"Got {len(scan.ranges)} points")

robot.subscribe("lidar", on_scan)

Available topics: "lidar", "camera", "imu", "rgbd".
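Callbacks typically fire from a driver thread, so sharing data with your main loop needs synchronization. One common pattern, sketched here as an assumption about how you might structure it (only `robot.subscribe` comes from this API), is a lock-protected holder that keeps just the latest message:

```python
import threading
from collections import deque

class LatestScan:
    """Keep only the most recent scan; safe to call from a driver thread."""

    def __init__(self):
        self._lock = threading.Lock()
        self._buf = deque(maxlen=1)  # maxlen=1 silently drops stale scans

    def on_scan(self, scan):
        with self._lock:
            self._buf.append(scan)

    def latest(self):
        with self._lock:
            return self._buf[-1] if self._buf else None

holder = LatestScan()
# robot.subscribe("lidar", holder.on_scan)  # register as shown above
```

Your control loop then polls `holder.latest()` at its own rate without blocking the driver.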

All sensor data is returned in the sensor’s own frame by default. Use robot.transform(data, target_frame="base_link") to convert to the robot base frame.
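Under the hood, a frame change for 2D scan points is a rigid transform: rotate by the sensor's mounting yaw, then translate by its offset. The mounting pose below is an illustrative assumption (a rear-facing LiDAR 10 cm forward of base_link), not a property of the 3we platform; in practice just call `robot.transform`. A sketch of the math:

```python
import numpy as np

# Assumed LiDAR mounting pose relative to base_link (illustrative values):
# 10 cm forward, rotated 180 degrees about +z.
tx, ty, yaw = 0.10, 0.0, np.pi

# Example points in the sensor frame.
pts_sensor = np.array([[1.0, 0.0],
                       [0.0, 1.0]])

# 2D rigid transform: rotate by the mounting yaw, then translate.
c, s = np.cos(yaw), np.sin(yaw)
R = np.array([[c, -s],
              [s,  c]])
pts_base = pts_sensor @ R.T + np.array([tx, ty])
```

A point 1 m ahead of a rear-facing sensor ends up behind the robot base, as expected.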