# Next Session: Robot Car — Hailo Safety Layer + Navigation Loop

## What

Build a two-layer autonomous navigation system for Annie's TurboPi robot car:
1. **Hailo reactive safety layer** — always-on YOLOv8s at 30 FPS on Pi 5's AI HAT+, auto-stops motors if obstacle < 30cm. No LLM, no network, pure reflex.
2. **`navigate_robot` tool** — Annie runs sense-think-act loops from Titan: capture photo from Pi, read obstacles, send a single combined multimodal vLLM call (image + goal + obstacles → action), execute drive command. Up to 10 cycles per navigation attempt.
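The sense-think-act loop above can be sketched as follows. All helper names (`pi_get`, `pi_post`, `vllm_decide`) are hypothetical stand-ins for the real HTTP and vLLM clients, not the actual implementation:

```python
import asyncio

MAX_CYCLES = 10  # cap per navigation attempt (design decision 7)

async def pi_get(path: str) -> str:
    """Stub for an authenticated HTTP GET to the Pi server."""
    return "stub-data"

async def pi_post(path: str, action: str) -> None:
    """Stub for sending a drive command to the Pi."""

async def vllm_decide(photo_b64: str, goal: str, obstacles: str) -> str:
    """Stub for the ONE combined multimodal vLLM call (image + goal + obstacles)."""
    return "done"

async def navigate(goal: str) -> str:
    for cycle in range(MAX_CYCLES):
        photo_b64 = await pi_get("/photo")      # raw base64 JPEG, Pi-only
        obstacles = await pi_get("/obstacles")  # Hailo detections
        action = await vllm_decide(photo_b64, goal, obstacles)
        if action == "done":
            return f"done after {cycle + 1} cycle(s)"
        await pi_post("/drive", action)         # e.g. forward / left / right
    return "gave up: max cycles reached"
```

The single combined call is what keeps a cycle at ~3.5 s; a separate describe-then-decide pair would double the roundtrips.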

This follows Brooks' subsumption architecture (1986): lower layer (Hailo reflexes) overrides higher layer (Titan brain) for safety.

## Plan

`~/.claude/plans/splendid-pondering-wirth.md`

**Read the plan first — it has the full implementation, all 29 adversarial review findings (8 CRITICAL, 10 HIGH, 11 MEDIUM), and every design decision.**

## Key Design Decisions (from adversarial review)

1. **`_uart_lock` (threading.Lock)** wraps ALL `board.set_motor_duty()` calls — both from asyncio executor and safety daemon thread. Without this, two threads writing UART simultaneously produces garbled serial that can make the car accelerate into an obstacle during an ESTOP.

2. **`loop.call_soon_threadsafe(_stop_event.set)`** — asyncio.Event is NOT thread-safe. The safety daemon thread must use `call_soon_threadsafe` to signal the asyncio event loop. Direct `.set()` from a background thread can silently miss wakeups.

3. **`_hardware_estop = threading.Event()`** replaces the plain-bool `_estop_flag` for cross-thread safety. A plain bool has a check-then-act race: the flag can be set between the check and the lock acquisition.

4. **Combined multimodal vLLM call** — instead of 2 calls per cycle (photo/describe to Titan + nav decision to Titan), navigate_robot calls `/photo` (raw base64, Pi-only) then sends ONE multimodal call with image + goal + obstacles → action. This eliminates the Titan→Pi→Titan roundtrip and halves GPU load.

5. **FrameGrabber replaces `_cap`** — the old `_cap = cv2.VideoCapture()` global is REMOVED. FrameGrabber thread is the single camera reader. `/photo` and safety daemon both read from shared buffer. V4L2 cannot handle two openers on the same `/dev/video0`.

6. **Rate limiter increased to 30/min** (from 10/min) — the safety daemon now provides actual collision protection, so the rate limiter's original purpose (preventing a runaway car caused by LLM tool-call loops) is handled by a better mechanism.

7. **Max 10 cycles** (not 20) — at ~3.5 s per cycle, this caps a navigation attempt at ~35 s. Annie is unresponsive during navigation (the Pipecat tool call blocks), but the user can still send `/estop` through Telegram's direct access to the Pi.

8. **Conditional Hailo import** — `try: from hailo_platform import ... except ImportError: _HAILO_AVAILABLE = False`. Server must work on dev/test Pi without HailoRT.

9. **`thinking: false`** MUST be in EVERY vLLM call — especially the nav decision call (`max_tokens=20`). Without it, Gemma 4 emits `<think>` tokens that fill the 20-token budget, leaving zero tokens for the action word.

10. **Auth on /obstacles** — uses `Depends(_verify_token)` same as all other endpoints.

## Files to Modify

### Pi (services/turbopi-server/)
1. `frame_grabber.py` — CREATE: FrameGrabber thread (camera reader, shared frame, stale detection)
2. `safety.py` — CREATE: HailoSafetyDaemon, Detection dataclass, estimate_distance, conditional Hailo import
3. `main.py` — MODIFY: Remove `_cap`; add FrameGrabber + safety daemon in lifespan; add `_uart_lock` + `_hardware_estop`; update `/photo` to use grabber; add `/obstacles` endpoint; increase rate limit; add throttle check to `/health`
4. `requirements.txt` — MODIFY: Note hailo_platform is optional system package
5. `turbopi-server.service` — MODIFY: Add SAFETY_DISTANCE_CM env var
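`estimate_distance` in `safety.py` is not specified here; one common monocular approach is the pinhole model, sketched below. The per-class real-world height and the focal length are assumptions (e.g. a calibration constant plus a lookup table per YOLO class), not the plan's actual method:

```python
def estimate_distance_cm(bbox_height_px: float,
                         real_height_cm: float,
                         focal_length_px: float) -> float:
    """Pinhole-camera range estimate: distance = H_real * f / h_pixels.

    Illustrative only -- assumes the detected class's real-world height
    is known and the focal length (in pixels) has been calibrated.
    """
    return real_height_cm * focal_length_px / bbox_height_px

# A 170 cm person spanning 400 px with a 500 px focal length:
# estimate_distance_cm(400, 170, 500) == 212.5  (cm)
```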

### Annie (services/annie-voice/)
6. `robot_tools.py` — MODIFY: Add `handle_navigate_robot`, `handle_robot_obstacles`, `_ask_nav_combined`, reusable httpx client
7. `tool_schemas.py` — MODIFY: Add `NavigateRobotInput`, `RobotObstaclesInput`
8. `text_llm.py` — MODIFY: Add 2 ToolSpec entries (group="robotics", gated=True)
9. `tool_adapters.py` — MODIFY: Add 2 ToolAdapter entries

### Tests
10. `services/turbopi-server/test_frame_grabber.py` — CREATE
11. `services/turbopi-server/test_safety.py` — CREATE (mock Hailo)
12. `services/annie-voice/tests/test_robot_tools.py` — MODIFY (+navigate, +obstacles)

### Docs (for blog)
13. `docs/ARCHITECTURE-ROBOT-NAVIGATION.md` — CREATE: Subsumption arch, safety stack, FrameGrabber pattern, perf numbers
14. `docs/ARCHITECTURE-SUBSUMPTION-PATTERN.md` — CREATE: Brooks 1986, layers, why not ROS

## Prerequisites (Phase 0, ~15 min)

```bash
# On Pi (192.168.68.61):
sudo raspi-config nonint do_i2c 0 && sudo reboot  # Enable I2C for sonar
hailortcli fw-control identify                    # Verify Hailo-8 26T
pip3 show hailo_platform                          # Check Python bindings
```

## Start Command

```bash
cat ~/.claude/plans/splendid-pondering-wirth.md
```

Then implement the plan phase by phase. All adversarial findings are already addressed in it.

## Verification

1. **Unit tests:** FrameGrabber (stale detection, thread safety), safety daemon (ESTOP on obstacle, confidence filter, camera stale), navigate (ESTOP handling, max_cycles, LLM parse)
2. **Pi integration:** Start server with safety daemon → `/health` shows safety_healthy=true; place hand in front → ESTOP within 1s; `/obstacles` returns objects; `/photo` works alongside daemon
3. **E2E:** Telegram → "Annie, explore the room" → multi-cycle → summary; obstacle mid-navigation → auto-stop; "go to kitchen" → goal-directed navigation
4. **Blog docs:** Diagrams match implementation; perf numbers verified; accessible to non-roboticists

## Known Hardware Issues

- **Sonar reads 99999**: Fixed by enabling I2C (Phase 0)
- **Battery reads 0/None**: Check UART timing (Phase 0b); degrade gracefully if still broken
- **Throttle 0x50000**: Under-voltage from X-UPS1; navigate aborts if throttled
- **Camera black frames**: FrameGrabber threshold lowered to 1.0 for dark rooms
