# Next Session: Sonar Integration + Visual Return-to-Start

## What Was Done (Session 45)

Research and planning session. Explored the full robot car codebase (safety.py, main.py, robot_tools.py, lidar.py, all tests). Designed a 6-phase plan for integrating the sonar sensor into collision avoidance and adding visual teach-and-repeat navigation.

**Key finding:** The sonar sensor (I2C at `0x77`) is already physically connected and initialized in `main.py:198-202`, exposed on `/health`, `/drive`, `/distance` endpoints — but **completely ignored by the safety daemon and navigation loop**. It returns `99999` because I2C is not enabled on the Pi 5.

**Plan file:** `~/.claude/plans/splendid-finding-zebra.md`

## What to Implement

### Phase 0: Enable I2C on Pi (PREREQUISITE — do this first)

SSH to Pi (`192.168.68.61`):
```bash
sudo raspi-config nonint do_i2c 0
sudo reboot
```
After reboot, verify:
```bash
i2cdetect -y 1          # should show 0x77
curl http://192.168.68.61:8080/health  # distance_mm should be real (not 99999)
```
If it still reads 99999, probe the bus directly with `i2cget -y 1 0x77` to debug the I2C connection.

### Phase 1: SonarPoller Thread (`services/turbopi-server/safety.py`)

Add `SonarPoller` class following the `LidarDaemon` pattern (`lidar.py:223-298`):
- Daemon thread polling `sonar.getDistance()` at ~10 Hz
- Thread-safe cache under `threading.Lock`
- Validity filter: only `20 < distance_mm < 5000` (reject 0, 99999, noise)
- Staleness: `get_distance_mm()` returns `None` if reading >500ms old
- `stop()` for clean shutdown

Constants:
```python
SONAR_MIN_MM = 20       # below = sensor noise / min range
SONAR_MAX_MM = 5000     # above = out-of-range / garbage
SONAR_STALE_MS = 500    # ignore stale readings
```

**Why separate thread:** I2C reads are 20-50ms blocking. Adding them to the safety daemon's YOLO loop would halve FPS from 30 to 15. Cached reads = zero latency in the safety loop.

### Phase 2: Wire Sonar into Safety Daemon (`services/turbopi-server/safety.py`)

Modify `HailoSafetyDaemon`:
- Add `sonar_poller` parameter to `__init__()` (same pattern as `lidar` param at line 134)
- In `run()`, **after** YOLO ESTOP check (line ~196), add independent sonar check:
  ```python
  if self._sonar_poller is not None:
      sonar_mm = self._sonar_poller.get_distance_mm()
      if sonar_mm is not None and sonar_mm < self._safety_distance_cm * 10:
          if self._estop_callback:
              self._estop_callback(f"Sonar: obstacle at {sonar_mm / 10:.0f}cm")
  ```
- Add `get_sonar_mm()` method for HTTP layer

**Key:** Sonar triggers ESTOP independently. It is forward-only (no bearing), so it cannot be fused into the per-detection lidar fusion; it acts as a separate frontal bumper guard.

### Phase 3: Wire into main.py (`services/turbopi-server/main.py`)

**Lifespan (`lifespan()`):**
- Add `_sonar_poller` global (like `_lidar_daemon` at line 153)
- Start `SonarPoller(_sonar)` before safety daemon (after line 344)
- Pass `sonar_poller=_sonar_poller` to `HailoSafetyDaemon()` (line 357)
- Add `_sonar_poller.stop()` to cleanup (after line 383)

**`/obstacles` endpoint (line 576):**
- Add `sonar_cm` field (float or None) to response
- Augment `safe_forward`: also False when sonar < SAFETY_DISTANCE
- Backward-compatible

**`/health` endpoint (line 397):**
- Add `sonar_healthy` field (True if poller has recent valid reading)

### Phase 4: Navigation Prompt (`services/annie-voice/robot_tools.py`)

**`handle_navigate_robot()` (line 314):**
- Extract `sonar_cm` from `/obstacles` response
- Pass to `_ask_nav_combined()`

**`_ask_nav_combined()` (line 221):**
- Add `sonar_cm: float | None = None` parameter
- Add to prompt after lidar section: `SONAR (forward, ground-level): 25cm`
- Include in `emit_event` data

### Phase 5: Tests

**`services/turbopi-server/test_safety.py` — ~10 new tests:**

Follow the `_FakeLidar` pattern (lines 328-339) for `_FakeSonarPoller`.

TestSonarPoller:
- `test_returns_none_initially`
- `test_valid_reading_cached` (mock getDistance()=350)
- `test_garbage_99999_rejected`
- `test_garbage_zero_rejected`
- `test_stale_reading_returns_none` (>500ms)
- `test_stop_sets_flag`

TestSonarEstop:
- `test_sonar_triggers_estop_on_close` (150mm < 30cm threshold)
- `test_sonar_no_estop_when_far` (500mm > 30cm)
- `test_sonar_no_estop_when_unavailable` (None → no ESTOP)
- `test_sonar_none_when_no_poller` (sonar_poller=None)

**`services/annie-voice/tests/test_robot_tools.py` — ~3 new tests:**
- `test_navigate_sonar_passed_to_prompt`
- `test_navigate_sonar_unavailable_continues`
- `test_navigate_sonar_blocks_safe_forward`

### Phase 5b: Visual Return-to-Start (`services/annie-voice/robot_tools.py`)

**Concept: Visual Teach-and-Repeat.** During outbound navigation, the car already captures a camera image each cycle. Store these as waypoints. On return, use VLM to compare current view against stored waypoint images (in reverse order) and navigate toward each one.

**Waypoint data structure:**
```python
@dataclass
class NavWaypoint:
    cycle: int
    image_b64: str          # camera snapshot at this point
    lidar_summary: str      # 360-degree sector summary
    sonar_cm: float | None  # frontal distance
    action_taken: str       # what action was taken FROM this point
```

**Outbound (Teach):** In the existing nav loop, after capturing photo + sensors but before driving, append a `NavWaypoint` to a list. Zero overhead — photo is already captured.

**Return (Repeat):** New `_return_via_waypoints()` helper:
1. Reverse the waypoint list (the last-captured waypoint is nearest the car's current position; the first-captured is nearest the start)
2. For each target waypoint:
   - Capture current photo from `/photo`
   - Send VLM **two images**: target waypoint + current view
   - Prompt: "Navigate toward scene in Image 1. Image 2 is current view. Choose: forward/backward/left/right/scene_matched"
   - If `scene_matched` → next waypoint
   - If movement → drive, re-sense, retry (max 3 attempts per waypoint)
   - ESTOP aborts return

**New VLM function: `_ask_nav_return(target_b64, current_b64, target_lidar, current_lidar, waypoint_num, total_waypoints)`**

**Budget:** ~50-100KB per waypoint image, 10 waypoints = ~1MB. 2-image VLM call. Max return cycles = waypoints * 3. ~15-30s for 5-waypoint route.

**Tests (~5 new):**
- `test_waypoints_captured_during_nav`
- `test_return_uses_visual_matching`
- `test_return_advances_on_scene_matched`
- `test_return_respects_estop`
- `test_return_skipped_when_false`

### Phase 6: E2E Verification

**Pre-flight:**
1. `curl http://192.168.68.61:8080/health` → `sonar_healthy: true`, real `distance_mm`
2. Place hand 20cm in front → `curl /distance` shows ~200mm
3. `cd services/turbopi-server && python -m pytest`
4. `cd services/annie-voice && python -m pytest tests/test_robot_tools.py`

**Sonar E2E:**
1. Battery on, car in open area
2. Telegram: "Annie, explore the room"
3. Place low object (book, cable) in front → ESTOP fires
4. Check Pi logs: `journalctl -u turbopi-server -f` → "Sonar: obstacle at Xcm"

**Visual Return E2E:**
1. Mark starting position with tape
2. Telegram: "Annie, explore and come back" (return_to_start=True)
3. Outbound: 5-10 cycles storing waypoints
4. Return: car navigates toward each waypoint image in reverse
5. Measure drift from tape (expect visual matching to beat blind reversal)
6. Logs: "Return waypoint 3/5: scene_matched"

**Degradation:** Unplug sonar I2C → system continues with YOLO+lidar only

## Hardware State

| Component | Status |
|-----------|--------|
| Pi 5 | Running, SSH @ 192.168.68.61 |
| turbopi-server | systemd active, port 8080 |
| Sonar | Connected, I2C 0x77, **returns 99999 (I2C not enabled)** |
| LidarDaemon | healthy, scanning |
| HailoSafetyDaemon | healthy, 30 FPS YOLO |
| RPLIDAR C1 | /dev/lidar (udev symlink) |
| Motor battery | USB-C |
| Annie (Titan) | Running, port 7860 |

## Critical Files

| File | Lines | What to Change |
|------|-------|----------------|
| `services/turbopi-server/safety.py` | 372 | Add `SonarPoller` + wire into `HailoSafetyDaemon` |
| `services/turbopi-server/main.py` | 673 | Start poller, pass to daemon, `/obstacles` + `/health` |
| `services/annie-voice/robot_tools.py` | 436 | `sonar_cm` in nav prompt + visual return-to-start |
| `services/turbopi-server/test_safety.py` | 507 | ~10 new sonar tests |
| `services/annie-voice/tests/test_robot_tools.py` | ~430 | ~8 new tests (3 sonar + 5 visual return) |

## Reuse (existing patterns)

- `LidarDaemon` in `lidar.py:223-298` — thread structure, lock, `is_healthy()`, `stop()`
- `_FakeLidar` in `test_safety.py:328-339` — mock pattern for `_FakeSonarPoller`
- `_fuse_lidar_distances()` in `safety.py:220-245` — shows where sonar check goes
- `Sonar` SDK imported at `main.py:54`, initialized at `main.py:198-202`
- `NavWaypoint` structure mirrors `SectorData` in `lidar.py:124-133`

## Known Gotchas

| Gotcha | Fix |
|--------|-----|
| Sonar returns 99999 | I2C not enabled — `sudo raspi-config nonint do_i2c 0 && reboot` |
| Sonar returns 0 at close range | Min range ~2cm, filter with `SONAR_MIN_MM = 20` |
| I2C read blocks 20-50ms | Separate `SonarPoller` thread, not inline in safety loop |
| Sonar is forward-only | Can't fuse into per-detection lidar fusion — independent ESTOP |
| Ultrasonic misses angled surfaces | Complementary to YOLO+lidar, not replacement |
| Two-image VLM call for return | Gemma 4 handles multi-image; use separate `_ask_nav_return()` |
| Waypoint images in memory | ~1MB for 10 waypoints — fine for Pi/Titan RAM |

## Start Command

Read this file, then implement Phases 0→1→2→3→4→5→5b→6 in order. Plan file: `~/.claude/plans/splendid-finding-zebra.md`.
