# TurboPi Capabilities & Annie Integration Plan

## Hardware Inventory

| Component | Interface | Details |
|-----------|-----------|---------|
| **4x Mecanum Wheels** | Serial (STM32) | Omnidirectional: strafe, diagonal, spin-in-place, drift |
| **2x PWM Servos** | Serial (STM32) | Camera pan/tilt gimbal, range 500-2500 (center 1500) |
| **Camera** | USB (`/dev/video0`) | icSpring, 640x480 RGB |
| **RPLIDAR C1 (DTOF Lidar)** | USB (`/dev/ttyUSB1`, CP2102N) | 360° scan, 12m range, 5000 samples/sec, 0.72° resolution, 460800 baud. Modes: Standard (4m, mode 0) + DenseBoost (10.24m, mode 1) — BOTH VERIFIED. Model 65, FW 1.1, HW 18. pyrplidar 0.1.2 (DenseCabin byte-order patched). |
| **Ultrasonic Sonar** | I2C (`0x77`) | Distance mm/cm, has 2x built-in RGB LEDs |
| **4-Channel IR Sensor** | I2C (`0x78`) | Black line detection (4 independent sensors) |
| **IMU (onboard)** | Serial (STM32) | `board.get_imu()` returns None — no chip on TurboPi variant |
| **IMU (external)** | USB (Pi Pico bridge) | MPU-6050 via Pico RP2040: GP4=SDA, GP5=SCL, 100kHz I2C, 100Hz heading over /dev/ttyACM0. VERIFIED WORKING. |
| **Buzzer** | Serial (STM32) | Freq/on/off/repeat control |
| **2x GPIO LEDs** | GPIO 16, 26 | Blue LEDs (may conflict with hw_wifi service) |
| **OLED Display** | Serial (STM32) | Text output (2 lines) |
| **USB Speaker** | USB (ALSA card 2) | JMTek USB PnP Audio Device, `plughw:2,0`. TTS via espeak-ng. |
| **Battery Sensor** | Serial (STM32) | Voltage reading |
| **Gamepad Input** | Serial (STM32) | PS2-style controller support |
| **SBUS Input** | Serial (STM32) | RC receiver (16 channels) |

## Connection Details

- **Pi**: Raspberry Pi 5, 16 GB RAM, Debian 13 (Trixie), Python 3.13.5
- **Host**: `pi-car` at `192.168.68.61`, SSH alias `ssh pi`
- **Controller Board**: STM32 via `/dev/ttyAMA0` at 1,000,000 baud
- **SDK**: `~/TurboPi/HiwonderSDK/ros_robot_controller_sdk.py`
- **Symlink**: `/home/pi -> /home/rajesh` (SDK path compat)
- **Power**: USB-C (battery switch bypassed). Battery powers Pi + STM32 together.

## SDK API Reference

```python
import HiwonderSDK.ros_robot_controller_sdk as rrc
board = rrc.Board(device="/dev/ttyAMA0")

# Motors (duty: -100 to 100)
board.set_motor_duty([[1, 50], [2, 50], [3, 50], [4, 50]])  # all forward
board.set_motor_speed([[1, 45], [2, 45], [3, 45], [4, 45]])  # RPM-based ([id, rpm] pairs)

# Servos (position: 500-2500, duration in seconds)
board.pwm_servo_set_position(0.5, [[1, 1500], [2, 1500]])  # center both
board.pwm_servo_read_position(servo_id)

# Buzzer
board.set_buzzer(freq=1000, on_time=0.5, off_time=0.5, repeat=1)

# RGB (sonar LEDs)
board.set_rgb([[1, 255, 0, 0], [2, 0, 255, 0]])  # LED1 red, LED2 green

# OLED
board.set_oled_text(line=1, text="Hello Annie")

# Sensors
board.get_battery()   # voltage
board.get_imu()       # accel + gyro (returns None on this TurboPi variant)
board.get_button()    # onboard button
board.get_gamepad()   # PS2 controller
board.get_sbus()      # RC receiver

# Ultrasonic (separate I2C module)
from HiwonderSDK.Sonar import Sonar
sonar = Sonar()
distance = sonar.getDistance()  # mm
sonar.setRGB(mode, r, g, b)    # sonar RGB LEDs

# IR Line Sensor (separate I2C module)
from HiwonderSDK.FourInfrared import FourInfrared
ir = FourInfrared()
data = ir.readData()  # [True/False x4] — True = black line detected

# Mecanum kinematics
from HiwonderSDK.mecanum import MecanumChassis
car = MecanumChassis()
car.set_velocity(velocity=50, direction=180, angular_rate=0)
# velocity: 0-100 mm/s, direction: 0-360 deg, angular_rate: -2 to 2 rad/s
```
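For intuition, `set_velocity` above reduces to standard mecanum wheel mixing. A minimal sketch of that math (assumed formula; the SDK's actual sign conventions and scaling may differ, so verify on hardware before trusting signs):

```python
import math

def mecanum_mix(velocity, direction_deg, angular_rate, yaw_gain=10.0):
    """Resolve (speed, heading, yaw rate) into four wheel commands.
    Wheel order: front-left, front-right, rear-left, rear-right.
    Sketch only: MecanumChassis may use different signs/scaling."""
    rad = math.radians(direction_deg)
    vx = velocity * math.sin(rad)    # strafe component (90 deg = right, assumed)
    vy = velocity * math.cos(rad)    # forward component (0 deg = forward, assumed)
    w = angular_rate * yaw_gain      # yaw contribution per wheel
    return [vy + vx - w,   # front-left
            vy - vx + w,   # front-right
            vy - vx - w,   # rear-left
            vy + vx + w]   # rear-right
```

Driving at `direction=0` commands all four wheels equally; `direction=90` flips the sign pattern into a pure strafe.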

## Built-in Demos (in `~/TurboPi/Functions/`)

| Demo | What It Does | Deps | Status |
|------|-------------|------|--------|
| ColorTracking | Camera follows colored object, car drives toward it (PID) | OpenCV | READY |
| ColorDetect | Identifies and labels colors in view | OpenCV | READY |
| LineFollower | Follows black line using 4-ch IR sensor | OpenCV | READY |
| VisualPatrol | Follows colored line using camera (PID steering) | OpenCV | READY |
| Avoidance | Ultrasonic sonar obstacle avoidance | OpenCV | READY |
| FaceTracking | Camera servo tracks a face (Haar cascade) | OpenCV | READY |
| ColorWarning | Detects colors and buzzes alert | OpenCV | READY |
| QuickMark | QR code reading → car actions | pyzbar | READY |
| GestureRecognition | Hand gesture → car movement | mediapipe | BLOCKED (no aarch64 wheel for Python 3.13) |
| RemoteControl | Placeholder for external control | None | Stub only |

### Mecanum Movement Demos (`~/TurboPi/MecanumControl/`)

- `Car_Forward_Demo.py` — drive forward/backward
- `Car_Turn_Demo.py` — rotate in place
- `Car_Move_Demo.py` — strafe (omnidirectional)
- `Car_Slant_Demo.py` — diagonal movement
- `Car_Drifting_Demo.py` — drift (velocity + yaw combined)

## Dependency Status

| Package | Status | How Installed |
|---------|--------|---------------|
| python3-opencv (4.10) | INSTALLED | `apt install python3-opencv` |
| python3-yaml | INSTALLED | Pre-installed (apt) |
| pyrplidar (0.1.2) | INSTALLED | `pip3 --break-system-packages` |
| pyzbar (0.1.9) | INSTALLED | `pip3 --break-system-packages` |
| espeak-ng (1.52) | INSTALLED | `apt install espeak-ng` — TTS for USB speaker |
| libzbar0 | INSTALLED | Pre-installed (apt) |
| mediapipe | NOT AVAILABLE | Python 3.13 ABI breaks mediapipe (issue #6159). No aarch64 wheels exist for 3.13. Building from source takes 5+ hours and still fails. Community wheels (PINTO0309/mediapipe-bin) top out at Python 3.12. |

### mediapipe Workaround: TFLite Direct (RECOMMENDED)

mediapipe's hand/gesture models are plain `.tflite` files that run without the mediapipe package.
Validated on Pi 5 + Debian Trixie by this [Cytron tutorial](https://my.cytron.io/tutorial/MediaPipe-Alternatives-on-Trixie-Hand-Skeleton-Tracking), which reaches **10-13 FPS**.

Setup (~30 min):
1. Install micromamba, create Python 3.11 venv
2. `pip install tflite-runtime numpy opencv-python`
3. Download `palm_detection.tflite` + `hand_landmark_full.tflite` from mediapipe model cards
4. Two-stage pipeline: palm detect → ROI crop → 21-keypoint hand landmarks → gesture classify
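Stage 4 (gesture classify) can start as simple finger counting on the 21 landmarks. A sketch assuming mediapipe's landmark ordering (wrist = 0, fingertips = 4/8/12/16/20) and normalized image coordinates where y grows downward:

```python
def count_extended_fingers(lm):
    """lm: list of 21 (x, y) hand landmarks in normalized image coords.
    A finger counts as extended when its tip sits above its PIP joint
    (smaller y). The thumb is skipped: it needs an x-axis test instead."""
    fingers = 0
    for tip, pip in ((8, 6), (12, 10), (16, 14), (20, 18)):
        if lm[tip][1] < lm[pip][1]:
            fingers += 1
    return fingers
```

Mapping finger counts (or tip positions) to car commands is then a lookup table, same as the QR-waypoint idea below.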

Other alternatives:
- **OpenCV DNN + YOLO hand model** — simpler but less accurate
- **Hailo AI Kit** (Pi 5 HAT, optional) — accelerates .tflite models to 30+ FPS

Not critical for MVP — gesture recognition is the only demo that needs it. All other demos work with OpenCV alone.

---

## Project Ideas

### Tier 1: Quick Wins (< 1 hour each)

**1. Remote MJPEG Driving**
Stream camera via `MjpgServer.py` (already in repo), send WASD commands over WebSocket or HTTP.
See the car's view from laptop, drive it around the house.

**2. Run Built-in Demos**
Color tracking, line following, avoidance — all ready to run. Just need to calibrate colors in `lab_config.yaml` for the environment.

**3. QR Code Waypoints**
Print QR codes with instructions ("turn left", "go forward 2s", "buzzer"). Car reads QR, executes action, looks for next QR.
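The QR-to-action mapping is just a dispatch table over decoded payload strings. A sketch with hypothetical payloads and action tuples (both are ours to define):

```python
# Hypothetical QR payload -> (action, argument) table.
QR_ACTIONS = {
    "turn_left":  ("rotate", 90),     # degrees
    "turn_right": ("rotate", -90),
    "forward_2s": ("drive", 2.0),     # seconds
    "buzzer":     ("buzz", 1000),     # Hz
}

def handle_qr(payload):
    """Decoded QR text -> action tuple, or None for unknown codes."""
    return QR_ACTIONS.get(payload.strip())
```

Feed it the `data.decode()` string from pyzbar's decode results; unknown codes return `None`, so the car just keeps looking.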

### Tier 2: Annie Integration (the big one)

**4. Annie-Controlled Robot — "Give Annie a Car"**

Annie gets a new tool: `drive_robot`. She can command the TurboPi via an API running on the Pi.

Architecture:
```
Annie (Titan/Panda) 
  → HTTP POST to Pi (192.168.68.61:8080)
    → FastAPI on Pi translates to SDK calls
      → STM32 executes motor/servo/buzzer commands
      → Camera frames returned as base64 or MJPEG URL
```

Tool interface for Annie:
```python
# Annie's tool definition
def drive_robot(command: str, duration: float = 1.0) -> dict:
    """Control the TurboPi robot car.
    
    Commands: forward, backward, left, right, 
              strafe_left, strafe_right, spin_left, spin_right,
              stop, look_up, look_down, look_left, look_right,
              look_center, buzzer, photo, distance, battery
    """
```
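On the Pi side, most movement commands reduce to `MecanumChassis.set_velocity(velocity, direction, angular_rate)` parameters. A sketch of that translation table (direction convention assumed as 0 deg = forward; check the SDK's actual convention before wiring this up):

```python
# command -> (velocity, direction_deg, angular_rate)
COMMAND_TABLE = {
    "forward":      (50, 0, 0),
    "backward":     (50, 180, 0),
    "strafe_left":  (50, 270, 0),
    "strafe_right": (50, 90, 0),
    "spin_left":    (0, 0, 1.0),
    "spin_right":   (0, 0, -1.0),
    "stop":         (0, 0, 0),
}

def to_chassis_args(command):
    """Translate a drive_robot command into set_velocity arguments.
    Raises on unknown commands so the API can return a 400."""
    if command not in COMMAND_TABLE:
        raise ValueError(f"unknown drive command: {command}")
    return COMMAND_TABLE[command]
```

Non-movement commands (`photo`, `distance`, `battery`, `look_*`) route to the camera, sonar, battery, and servo endpoints instead.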

Capabilities:
- "Annie, go check the kitchen" — she drives there using camera + sonar
- "Annie, take a photo of the whiteboard" — drives, aims camera, captures
- "Annie, patrol the house" — autonomous loop with obstacle avoidance
- "What's the battery level?" — reads and reports voltage

**5. Security Patrol Mode**

Autonomous roaming with face detection. When it sees someone:
1. Takes a photo
2. Sends Telegram alert with image
3. Optionally buzzes or speaks via Annie's voice

Uses: Avoidance (sonar) + FaceTracking (OpenCV Haar) + Telegram bot integration.
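The patrol flow is a small state machine: roam until a face is seen, fire the alert once, then resume roaming after the face leaves frame. A sketch with a hypothetical `send_alert` callback standing in for the photo + Telegram step:

```python
def patrol_step(state, face_seen, send_alert):
    """One tick of the patrol loop. States: roam -> alert -> resume.
    send_alert is a hypothetical callback (capture photo, Telegram it)."""
    if state == "roam" and face_seen:
        send_alert()          # fires exactly once per sighting
        return "alert"
    if state == "alert" and not face_seen:
        return "resume"
    if state == "resume":
        return "roam"
    return state
```

The avoidance loop keeps driving underneath this; the state machine only gates when alerts fire.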

**6. Voice-Controlled Driving**

Pair with Annie's voice pipeline on Panda:
- Rajesh: "Annie, drive forward"
- Annie processes via phone/WebRTC → sends HTTP to Pi → car moves
- Camera feed shown on phone or dashboard

### Tier 3: Advanced (multi-session)

**7. Follow-Me Mode**

Face detection + sonar depth estimation to maintain a fixed distance from a person.
PID control on face position (X for steering, size for distance).
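A sketch of that control loop: one PID on the face's horizontal offset for steering, and a proportional term on apparent face width as the distance proxy (gains are hypothetical starting points, to be tuned on the car):

```python
class PID:
    """Textbook PID, dt-aware."""
    def __init__(self, kp, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev = None

    def step(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev is None else (error - self.prev) / dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def follow_commands(face_x, face_w, steer_pid, frame_w=640, target_w=120):
    """face_x: face center in px; face_w: bounding-box width in px.
    Steering: PID on horizontal offset. Distance: P-only on apparent
    width (bigger face means too close, so velocity goes negative)."""
    angular = -steer_pid.step(face_x - frame_w / 2, dt=0.1)
    velocity = 0.5 * (target_w - face_w)
    return angular, velocity
```

Sonar depth can replace the width proxy once the target is centered; width works even when the person is off the sonar's axis.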

**8. Mapping & Navigation**

Build a simple occupancy grid using sonar readings while driving.
SLAM-lite: dead reckoning from mecanum kinematics + IMU + sonar.
Navigate to coordinates on the map.
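A minimal occupancy-grid update from one sonar reading, assuming a pose from dead reckoning (grid resolution and frame conventions are hypothetical; a real grid would use log-odds and also clear the free cells along the beam):

```python
import math

def mark_hit(grid, x_m, y_m, heading_deg, dist_mm, cell_m=0.05):
    """Mark the cell the sonar beam hit. grid: dict {(ix, iy): hit count}.
    Pose (x_m, y_m, heading_deg) comes from dead reckoning; dist_mm
    from sonar.getDistance()."""
    d = dist_mm / 1000.0
    hx = x_m + d * math.cos(math.radians(heading_deg))
    hy = y_m + d * math.sin(math.radians(heading_deg))
    key = (int(round(hx / cell_m)), int(round(hy / cell_m)))
    grid[key] = grid.get(key, 0) + 1
    return key
```

Cells with repeated hits become obstacles; everything else stays unknown until driven past.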

**9. Object Fetch**

Color tracking + gripper (would need a servo gripper add-on).
"Annie, bring me the red ball."

**10. Intercom on Wheels** (HARDWARE READY)

USB speaker already connected (JMTek, `plughw:2,0`, espeak-ng installed).
Annie drives to a family member, speaks through the speaker,
records via USB mic (if added), brings reply back.
Mobile Annie terminal. Could also use Titan's Kokoro TTS for Annie's real voice.

---

## Recommended Build Order

1. **Pi-side FastAPI server** — thin HTTP wrapper around the SDK (motors, servos, camera, sensors)
2. **Annie tool definition** — `drive_robot` tool registered in Annie's tool system
3. **Camera streaming** — MJPEG endpoint for live view in dashboard
4. **Basic commands** — forward/back/turn/stop/photo/distance
5. **Autonomous behaviors** — obstacle avoidance loop, patrol, face tracking
6. **Dashboard integration** — live camera feed, battery, distance, controls

## Hailo AI Hat — Neural Accelerator

Rajesh has a Hailo AI Hat. Research findings:

### Pin Conflicts: NONE

| Interface | TurboPi Uses | Hailo Uses | Conflict? |
|-----------|-------------|------------|-----------|
| UART (GPIO 14/15) | STM32 controller | -- | No |
| I2C-1 (GPIO 2/3) | Sonar + IR sensor | -- | No |
| I2C-0 (GPIO 0/1) | -- | HAT EEPROM (boot only) | No |
| GPIO 16, 26 | LEDs | -- | No |
| PCIe FPC | -- | Hailo chip data | No |
| USB | Camera | -- | No |

Hailo data goes through the Pi 5's dedicated PCIe FPC port, not GPIO; only GPIO 0/1 are used, by the HAT EEPROM at boot.

### Physical Stacking: THE CONSTRAINT

Both boards want the 40-pin GPIO header. Options:
1. **GPIO splitter/extender** (Waveshare) — two sockets from one header
2. **Hailo underneath + stacking header** — Hailo HAT has 16mm passthrough (female on top), TurboPi plugs into that. Clearance with Hailo's active cooler (~14mm tall) may be tight.
3. **M.2 adapter via FPC only** — frees GPIO header entirely for TurboPi

### Performance (Hailo-8, 26 TOPS) — VERIFIED

Benchmarked on the actual hardware (2026-04-09):

| Model | FPS | HW Latency | Notes |
|-------|-----|------------|-------|
| **YOLOv8s (measured)** | **310** | **6.66ms** | General object detection — verified via `hailortcli benchmark` |
| YOLOv8n (expected) | ~136 | ~7ms | Lighter variant |
| YOLOv6n (expected) | ~354 | ~3ms | Fastest detection |
| Person/face (expected) | ~150 | ~7ms | yolov5s_personface |

~100x faster than Pi 5 CPU alone. All vision tasks are real-time with massive headroom.

### Pre-compiled Models Available (`/usr/share/hailo-models/`)

| Model | File | Use Case |
|-------|------|----------|
| YOLOv8s detection | `yolov8s_h8.hef` | General object detection |
| YOLOv8s pose | `yolov8s_pose_h8.hef` | Human skeleton/pose estimation |
| YOLOv6n detection | `yolov6n_h8.hef` | Lightweight fast detection |
| YOLOv5n segmentation | `yolov5n_seg_h8.hef` | Instance segmentation |
| Person+face | `yolov5s_personface_h8l.hef` | Person and face detection |
| Face detection | `scrfd_2.5g_h8l.hef` | Face detection (SCRFD) |
| ResNet50 | `resnet_v1_50_h10.hef` | Image classification |

### Software (Trixie supported)

```bash
sudo apt install hailo-all && sudo reboot
hailortcli fw-control identify
```

DKMS-based driver. Uses `rpicam-apps` for camera+AI pipelines. Examples at `github.com/hailo-ai/hailo-rpi5-examples`.

### Variants

| Product | Chip | TOPS | Notes |
|---------|------|------|-------|
| AI HAT+ 13T | Hailo-8L | 13 | Budget option |
| AI HAT+ 26T | Hailo-8 | 26 | Best value for robot |
| AI HAT+ 2 | Hailo-10H | 40 | Newest, 8 GB dedicated RAM |
| AI Kit (M.2) | Hailo-8L | 13 | Discontinued, best physical fit |

**Rajesh has: AI HAT+ 26T (Hailo-8, 26 TOPS)** — the best value variant for robotics.

## Key Constraints

- **Network**: Pi at `192.168.68.61` on same LAN as Titan (`192.168.68.52`) and Panda (`192.168.68.105`)
- **Latency**: HTTP on LAN should be <10ms. Camera MJPEG adds ~50-100ms.
- **Battery**: Unknown runtime. `board.get_battery()` for monitoring. USB-C bypasses switch.
- **WiFi range**: Limited to home WiFi coverage.
- **Camera quality**: 640x480 USB — adequate for color/face tracking, not for detailed recognition.
- **Python 3.13**: Some ML packages (mediapipe) lack wheels. OpenCV works fine.
- **Hailo AI HAT+ 26T**: CONNECTED + VERIFIED. 310 FPS YOLOv8s, 6.66ms latency. Stacked: Pi→Hailo(heatsink)→TurboPi.
- **RPLIDAR C1**: CONNECTED + BOTH MODES VERIFIED. `/dev/ttyUSB1` (CP2102N USB-UART), 460800 baud. Standard scan (mode 0): 5000 samp/s, 4m range. **DenseBoost (mode 1): 10.24m range, 6176 pts/3s, WORKING** — pyrplidar had byte-order bug in `PyRPlidarDenseCabin` (big-endian → little-endian fix patched in-place). No GPIO conflicts (USB only). FW v1.1 (not user-updatable). Official C++ SDK v2.1.0 at github.com/Slamtec/rplidar_sdk.
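For reference, the pyrplidar patch boils down to decoding each dense-cabin distance little-endian instead of big-endian. A sketch assuming each cabin is a 16-bit distance in mm (the real cabin layout is defined in SLAMTEC's protocol spec; this shows only the byte order that was wrong):

```python
import struct

def decode_dense_distance(cabin_bytes):
    """Decode one DenseBoost cabin: 2 bytes, little-endian uint16,
    distance in mm (0 = invalid). The upstream PyRPlidarDenseCabin
    read these big-endian, producing garbage ranges."""
    (dist,) = struct.unpack("<H", cabin_bytes)
    return dist
```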

## Capability Gap Analysis (2026-04-10)

**Kit variant**: Advanced (without Pi 5), product variant `42513907122263`.
**Advertised by Hiwonder vs what we actually use:**

### Hiwonder Hardware — Usage Status

| Component | Included In Kit | Integrated Into Annie? | Notes |
|-----------|:---:|:---:|-------|
| 4x Mecanum Wheels | ✅ | ✅ | Full omnidirectional drive via `/drive` |
| 2x Camera Servos | ✅ | ✅ | Pan/tilt gimbal via `/look` |
| HD Camera (130° wide-angle) | ✅ | ✅ | FrameGrabber thread, shared with safety daemon |
| Ultrasonic Sonar | ✅ | ✅ | SonarPoller at 10 Hz, ESTOP integration |
| 4-Channel IR Line Sensor | ✅ | ❌ | Never used — line following not a priority |
| Buzzer | ✅ | ⚠️ | Endpoint exists (`/buzzer`), never called by Annie |
| OLED Display | ✅ | ⚠️ | Endpoint exists (`/oled`), never called by Annie |
| Sonar RGB LEDs | ✅ | ❌ | SDK ready (`board.set_rgb()`), not wired |
| 2x GPIO LEDs | ✅ | ❌ | May conflict with hw_wifi service |
| Battery Sensor | ✅ | ✅ | In `/health` endpoint |
| IMU (onboard) | ✅ | ❌ | `board.get_imu()` returns None — no chip on this variant |
| IMU (external MPU-6050) | N/A | ✅ | Pi Pico RP2040 USB bridge, 100Hz heading, VERIFIED |
| Gamepad Input | ✅ | ❌ | Manual control, not relevant for autonomous |
| SBUS RC Input | ✅ | ❌ | Manual control, not relevant for autonomous |
| USB Speaker | ✅ | ❌ | Detected (`plughw:2,0`), espeak-ng installed, not integrated |
| **WonderEcho Pro** | ✅ (Advanced) | ❌ | **AI Voice Box — connected via USB-C, completely unused** |
| Aluminum Chassis | ✅ | ✅ | The car itself |
| 2x 18650 (2200mAh) | ✅ (Advanced) | ✅ | USB-C bypass, battery powers Pi+STM32 |

### Hiwonder Advertised Features — Reality Check

| Advertised Feature | How Hiwonder Does It | What We Do Instead | Gap? |
|-------------------|---------------------|-------------------|:---:|
| Autonomous line following | IR sensor + OpenCV PID | Not implemented | YES |
| Road sign recognition | YOLO26 framework | Not implemented | YES |
| Traffic light detection | OpenCV color analysis | Not implemented | YES |
| Color tracking | OpenCV + PID driving | Not implemented (demo exists) | YES |
| Face recognition/tracking | Haar cascade | Not implemented (Hailo has face models) | YES |
| Gesture control | MediaPipe | BLOCKED (Python 3.13) | BLOCKED |
| Obstacle avoidance | Sonar only | **3-layer: YOLO@310fps + lidar + sonar** | WE WIN |
| Vision tracking | OpenCV color blob | Not implemented | YES |
| Voice control | WonderEcho Pro | **Annie IS the voice** (Kokoro TTS + Whisper STT) | WE WIN |
| LLM integration | ChatGPT/Gemini API | **Native: Gemma 4 26B on Titan GPU** | WE WIN |
| Autonomous patrol | Simple wander+avoid | **VLM sense-think-act + visual return** | WE WIN |
| Scene understanding | Basic YOLO labeling | **Multimodal VLM (image+lidar+goal)** | WE WIN |
| QR code actions | pyzbar scan→action | Not implemented (library installed) | YES |
| WonderPi App | iOS/Android control | Telegram + web dashboard | DIFFERENT |
| ROS2 | Robot OS | FastAPI server (simpler) | DIFFERENT |

### Quick Wins — Untapped Capabilities

**Minimal effort (run existing demos or add 1-10 lines):**
1. Sonar RGB LEDs — mood/status colors (`board.set_rgb()`)
2. OLED status display — Annie's mood/battery/message (endpoint exists)
3. Run ColorTracking demo — "follow the red ball"
4. Run FaceTracking demo — camera gimbal follows a face
5. Run LineFollower demo — black tape on floor
6. Run QR code demo — QR waypoint course
7. USB speaker TTS — Annie speaks through the car (`espeak-ng`)

**Medium effort (integrate into Annie's tools):**
8. Follow-Me Mode — Hailo person/face model + PID distance control
9. Security Patrol — roam + face detect + Telegram photo alert
10. Intercom on Wheels — drive to person, speak, return
11. WonderEcho Pro — investigate what this box actually does

## MentorPi M1 Comparison (2026-04-10)

Hiwonder's MentorPi M1 ($299 without Pi) ships with ROS2 + SLAM + Nav2 **working out of the box**.
This is what we're struggling to achieve with custom code.

### Hardware Comparison

| Feature | TurboPi (Ours) | MentorPi M1 | Who Wins? |
|---------|:---:|:---:|:---:|
| Lidar | **RPLIDAR C1 (DTOF, 10.24m, DenseBoost)** | ldrobot STL-19P (budget TOF) | **Us** |
| NPU | **Hailo-8 26T (310 FPS YOLOv8s)** | None | **Us** |
| Camera | 130° wide-angle, 640×480 | 170° monocular, 640×480 | Tie |
| Sonar | ✅ ultrasonic | ❌ | **Us** |
| IR sensor | ✅ 4-channel | ❌ | **Us** |
| Wheels | Mecanum (omnidirectional) | Mecanum (omnidirectional) | Tie |
| Encoders | ❌ | **✅ AB-phase high-accuracy** | **MentorPi** |
| Motors | DC (no feedback) | **310 metal gear + encoders** | **MentorPi** |
| Pi | Pi 5 16 GB | Pi 5 (various) | Tie |
| Battery | 2x 18650 2200mAh | 7.4V 2200mAh 10C LiPo | Tie |

**Hardware verdict: We have BETTER sensors (lidar, NPU, sonar) but WORSE actuators (no wheel encoders = no odometry).**

### Software Comparison — THE REAL GAP

| Feature | TurboPi (Our Custom Stack) | MentorPi M1 (ROS2) | Gap |
|---------|:---:|:---:|:---:|
| SLAM mapping | ⚠️ **IN PROGRESS** (HectorSLAM numpy port planned) | ✅ (lidar + encoders) | CLOSING |
| Persistent maps | ❌ | ✅ | **CRITICAL** |
| Path planning | ❌ | ✅ (Nav2) | **CRITICAL** |
| Point-to-point nav | ❌ (VLM goal-seek, ~35s) | ✅ (Nav2, precise) | **HIGH** |
| Dynamic obstacle avoidance | ⚠️ (ESTOP only, no replanning) | ✅ (costmaps, replanning) | **HIGH** |
| Odometry | ⚠️ **IMU added** (MPU-6050 via Pico, gyro heading) | ✅ (encoder-based, accurate) | IMPROVING |
| Color tracking | ❌ (demo exists, not integrated) | ✅ | LOW |
| Object detection | ✅ (Hailo YOLO 310fps) | ✅ (YOLOv11 on CPU) | **We win** |
| Face/gesture | ❌ | ✅ (MediaPipe) | MEDIUM |
| Voice AI | ✅ (Annie, Kokoro, Gemma 4) | ❌ | **We win** |
| LLM reasoning | ✅ (Gemma 4 26B on GPU) | ❌ | **We win** |
| Visual scene understanding | ✅ (multimodal VLM) | ❌ | **We win** |

### Root Cause: Why We're Struggling

1. **No wheel encoders** → no odometry → no accurate position tracking → SLAM can't work properly
2. **No ROS2** → no Nav2 stack → no path planning, no costmaps, no recovery behaviors
3. **Custom VLM nav is creative but slow** — 3.5s per cycle, no persistent map, dead reckoning drifts
4. **We built top-down (AI brain) instead of bottom-up (motor control)** — MentorPi ships with the bottom layers done

### Options Going Forward

**Option A: Add ROS2 to TurboPi**
- Install ROS2 Humble in Docker on Pi 5 (same as MentorPi does)
- Use RPLIDAR C1 (better than MentorPi's STL-19P) for SLAM
- Problem: no wheel encoders = no odometry for Nav2. Would need visual odometry or IMU-based dead reckoning (the onboard IMU returns None; the external MPU-6050 gives heading only, not position).
- Workaround: lidar-only SLAM (e.g., Hector SLAM — doesn't need odometry, uses scan matching)

**Option B: Add wheel encoders to TurboPi**
- Hall effect or optical encoders on the 4 motors
- Gives proper odometry → enables full Nav2 stack
- Hardware modification required

**Option C: Keep custom stack, improve incrementally**
- Add lidar-based localization (scan matching between frames)
- Add persistent maps (save lidar scans as occupancy grid)
- Improve dead reckoning with lidar drift correction
- Won't match Nav2 maturity but keeps Annie's VLM brain as the differentiator

**Option D: Buy a MentorPi M1 as the "legs" platform**
- $299 without Pi 5 (reuse our Pi)
- Get proper encoders + ROS2 SLAM out of the box
- Move Hailo + RPLIDAR C1 to MentorPi chassis
- Keep TurboPi for fun demos (color tracking, line following)

## Community ROS2 Resources (2026-04-10)

### Matzefritz/HiWonder_MentorPi — ROS2 Starter Pack

GitHub: `github.com/Matzefritz/HiWonder_MentorPi` (April 2026)

Provides ROS2 drivers for **the same Hiwonder hardware platform** (same SDK, same expansion board):

| Package | What It Does | TurboPi Compatible? |
|---------|-------------|:---:|
| `controller` | Motor, servo, IMU control via Hiwonder SDK. Publishes `/odom_raw`, `/imu` | **YES** — same SDK |
| `ldrobot-lidar-ros2` | 2D lidar scan on `/ldlidar_node/scan` | DIFFERENT lidar (we have RPLIDAR C1, not ldrobot) — but rplidar_ros2 exists |
| `peripherals` | USB camera, joystick, AprilTag detection | **YES** |
| `orchestrator_launch` | SLAM + Navigation launch files | **YES** — slam_toolbox + Nav2 |

**Key odometry topics:**
- `/odom_raw` — direct wheel encoder measurements (MentorPi has encoders, we don't)
- `/odom_rf2o` — **laser-based odometry from lidar scans** (this WORKS WITHOUT encoders!)
- `/imu` + `/imu/rpy/filtered` — IMU data (our `board.get_imu()` returns None)

**Critical finding: `rf2o_laser_odometry`**
- Computes odometry purely from sequential lidar scans (scan matching)
- No wheel encoders needed, no IMU needed
- Our RPLIDAR C1 (DenseBoost, 10.24m) is BETTER than MentorPi's ldrobot STL-19P
- This is the path to SLAM on TurboPi without hardware changes
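Scan matching (rf2o's or our own) starts from the same primitive: turning one lidar revolution of (angle, distance) samples into Cartesian points. A sketch in the sensor frame:

```python
import math

def scan_to_points(samples, min_mm=50):
    """samples: iterable of (angle_deg, distance_mm) from the RPLIDAR.
    Returns (x, y) points in meters, sensor frame; drops invalid
    (zero or too-close) returns."""
    pts = []
    for angle_deg, dist_mm in samples:
        if dist_mm < min_mm:
            continue
        a = math.radians(angle_deg)
        pts.append((dist_mm / 1000.0 * math.cos(a),
                    dist_mm / 1000.0 * math.sin(a)))
    return pts
```

Two consecutive point clouds plus an ICP-style alignment give the frame-to-frame pose delta, which is exactly what rf2o publishes as `/odom_rf2o`.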

**Pre-installed ROS2 packages:** usb_cam, camera_calibration, image_proc, apriltag-ros, slam_toolbox, navigation2

**Environment variable:** `MACHINE_TYPE=MentorPi_Mecanum` — we'd set a TurboPi equivalent

### syed-taha-ali/autonomousfleet-ai — Full SLAM + Nav2 on MentorPi

BSc FYP: ROS2 Humble + SLAM + Nav2 + NLP mission control + 100+ agent swarm simulation.
The repo is sparse (only a .gitignore and CLAUDE.md are visible), but it confirms the stack runs on Hiwonder hardware.

### Hiwonder Built-in Demos — Already on the Pi

All demos in `~/TurboPi/Functions/` use a consistent pattern:
- `init()` → `start()` → `run(img)` loop → `stop()` → `exit()`
- All use `HiwonderSDK.mecanum.MecanumChassis()` for movement
- All use `HiwonderSDK.PID.PID` for servo/motor tracking
- Camera via `Camera.Camera()` with optional distortion correction
- Lab color space calibration via `lab_config.yaml`

They are immediately runnable: `cd ~/TurboPi && python3 Functions/ColorTracking.py`

### Proposed ROS2 Migration Path for TurboPi

1. Install ROS2 Humble in Docker (same as MentorPi does)
2. Port `controller` package — replace encoder odometry with `rf2o_laser_odometry`
3. Use `rplidar_ros2` (official SLAMTEC package) instead of `ldrobot-lidar-ros2`
4. Keep `slam_toolbox` + `navigation2` as-is
5. Keep our Hailo safety daemon as an independent layer (subsumption — reflexes don't need ROS2)
6. Bridge Annie's tool calls to ROS2 Nav2 goals via a thin FastAPI→ROS2 bridge

This gives us: SLAM mapping + persistent maps + path planning + dynamic obstacle avoidance — all from proven packages, not custom code.
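Step 6's bridge is mostly message construction: turning Annie's tool call into a `PoseStamped`-shaped goal for Nav2's `NavigateToPose` action. A sketch with a plain dict standing in for the ROS2 message (rclpy wiring and timestamping omitted):

```python
import math

def make_nav_goal(x, y, yaw_deg, frame="map"):
    """Build a PoseStamped-like dict for a Nav2 goal.
    Yaw becomes a quaternion about z; stamp is left to the rclpy layer."""
    half = math.radians(yaw_deg) / 2.0
    return {
        "header": {"frame_id": frame},
        "pose": {
            "position": {"x": x, "y": y, "z": 0.0},
            "orientation": {"x": 0.0, "y": 0.0,
                            "z": math.sin(half), "w": math.cos(half)},
        },
    }
```

The FastAPI handler validates the request, builds this dict, and hands it to an rclpy action client; Annie never sees ROS2 directly.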

## Related Research Docs

| Doc | What |
|-----|------|
| `docs/RESEARCH-SLAM-IMU-SENSORS.md` | SLAM algorithm selection, MPU-6050 specs, Pico bridge architecture, sensor comparison (sonar vs lidar 128mm offset), I2C gotchas, full wiring verified |
| `docs/RESEARCH-ANNIE-ROBOT-CAR.md` | Annie robot car architecture, safety stack, motor control, power supply |
| `docs/ARCHITECTURE-ROBOT-NAVIGATION.md` | Subsumption architecture blog: Hailo reflexes + GPU brain, FrameGrabber, UART safety |
| `docs/NEXT-SESSION-HECTORSLAM-IMU.md` | **Implementation plan** for HectorSLAM + Pico IMU bridge (Phase 0 DONE, Phases 1-4 remaining) |
| `docs/ARCHITECTURE-TURBOPI.md` | TurboPi server architecture |
