# Next Session: Sunday Demo Prep (April 12, 2026)

---

## 🚀 SESSION KICKOFF PROMPT (paste this into a fresh Claude Code session)

> Read `docs/NEXT-SESSION-SUNDAY-DEMO.md`, `docs/RESEARCH-WONDERECHO-PRO.md`, and `docs/RESEARCH-TURBOPI-CAPABILITIES.md` (the MentorPi section). Also check MEMORY.md session 48.
>
> **Goal:** Sunday April 12 I have guests coming and I want to show off the TurboPi robot car. Annie is the host. Dual-channel interaction: Telegram inline-keyboard buttons (reliable primary) + voice via the WonderEcho Pro mic/speaker (bonus wow layer). ~1.5 days.
>
> **Start with Track 1 (Telegram) — do NOT touch voice until Track 1 is solid.** Build order:
> 1. Verify the 4 Hiwonder demos we plan to use actually run on the Pi: ColorTracking, Avoidance, QuickMark, ColorDetect. SSH to `pi` (192.168.68.61), they're at `~/TurboPi/Functions/*.py`. First issue: our turbopi-server holds `/dev/video0` via FrameGrabber — stop it before running Hiwonder demos, or they'll fail to open the camera.
> 2. Calibrate `lab_config.yaml` for the demo room lighting (ColorTracking needs good color thresholds for red/blue/green balls).
> 3. Headless patch — Hiwonder demos use `cv2.imshow()` which needs a display. Patch to no-op when `HEADLESS=1`.
> 4. Build `TrafficCop.py` — copy `Avoidance.py`, graft in the color detection block from `LineFollower.py` (lines 232-280), add stop/go state machine. Car wanders forward, stops on red card, resumes on green.
> 5. Pi endpoints in turbopi-server: `POST /demo/start {name, params}`, `POST /demo/stop`, `GET /demo/status`. These must pause FrameGrabber before starting the subprocess and resume after killing it.
> 6. Annie-side: `car_demo` tool in `services/annie-voice/robot_tools.py` with action/demo/color/goal params. Register in `tool_schemas.py` and `text_llm.py`.
> 7. Telegram inline-keyboard menu — when Annie calls `car_demo(action="menu")`, she sends a message with buttons like `[🔴 Chase Red] [🔵 Chase Blue] [🟢 Chase Green] [🧭 Room Explorer] [🚦 Traffic Cop] [🚧 Obstacle Dodge] [📱 QR Navigator] [🔍 Color Spotter] [📸 What Do I See] [🎯 Look Around]`. Button callbacks trigger `car_demo(action="start", demo=...)`.
> 8. Smoke test: run all 8 demos via Telegram buttons end-to-end.
>
> **Then Track 2 (voice) — only if Track 1 is done and working.** Reuses the existing Annie phone pipeline (Whisper + Gemma 4 + Kokoro, battle-tested on Panda session 33):
> 1. Pi audio capture loop from ALSA card 2 (`plughw:2,0`), webrtcvad gate, WebSocket stream to Titan.
> 2. Titan-side wake gate: string-match "annie" in first 3 words of Whisper transcript. If missing, ignore (log to Telegram as "🎙️ Heard (ignored)"). If present, feed to Gemma 4 with car_demo tools.
> 3. Follow-up mode: 30s window after wake, no re-prefix needed.
> 4. Kokoro TTS stream back to Pi, play via `aplay -D plughw:2,0`.
> 5. PipeWire `libpipewire-module-echo-cancel` so Annie doesn't hear herself.
>
> **Verified facts (don't re-verify):**
> - WonderEcho Pro is a USB composite device: QinHeng hub (1a86:8091) → CH340 serial (/dev/ttyUSB0) + JMTek audio (ALSA card 2, mic + speaker). Both mic and speaker work.
> - CI1302 firmware is BLANK on our module — `/dev/ttyUSB0` at 115200 returns zero bytes on "Hello HiWonder". Do NOT try to flash firmware for Sunday. We don't need the onboard wake word — Whisper-based "annie" wake gate works better anyway.
> - Pi has 8 Hiwonder demos at `~/TurboPi/Functions/` but FaceTracking/GestureRecognition are BLOCKED (mediapipe doesn't have Python 3.13 aarch64 wheels).
> - The lidar + sonar + YOLO safety layers are all working (sessions 46-47); `navigate_robot` + `drive_robot` + `robot_photo` + `robot_look` are deployed.
> - Props: red/blue/green balls are in the Hiwonder kit box. Batteries fully charged. Red/green cards + 4 QR codes still need to be printed.
>
> **Critical gotchas:**
> - Camera conflict: FrameGrabber thread holds `/dev/video0` exclusively. Must release before any Hiwonder demo that uses Camera.py, reclaim after.
> - Sonar reads false distances sometimes (21cm when lidar shows 1.3m). May affect Avoidance demo. Mitigation: test in demo room first.
> - Dead reckoning drifts — don't promise precise return-to-start in the demo.
> - Voice latency is ~2-3s end-to-end. Add a thinking sound on Kokoro like we do for the phone.
> - If Track 2 fails at any point, drop it and ship Track 1 — Telegram-only is still a mind-blowing demo.
>
> **First concrete step:** SSH to Pi, `cd ~/TurboPi && python3 Functions/ColorTracking.py` — see if it actually runs, what errors appear, and whether the camera is free or held by turbopi-server.

---

## Goal
Annie shows off her car to Sunday guests. She's the host — guests interact with Annie, not with terminals. "Annie, what tricks can you do?" → menu → pick one → she explains → she performs it.

## Core Idea
**Annie is the ONLY interface.** No SSH, no script switching, no developer behind the curtain. The guest talks to Annie (Telegram or voice), she presents what she can do, they pick, she runs it. One seamless experience.

## Context
- See `docs/RESEARCH-TURBOPI-CAPABILITIES.md` for the full capability audit
- 8 Hiwonder demos exist on the Pi at `~/TurboPi/Functions/` — none has ever been run
- FaceTracking and GestureRecognition are BLOCKED (mediapipe, Python 3.13)
- ~1.5 days to build and test

## What's Already Deployed
- `drive_robot` — forward/backward/strafe/spin/drift
- `robot_photo` + describe — camera + Titan VLM description
- `robot_look` — pan/tilt gimbal
- `navigate_robot` — VLM sense-think-act with lidar + YOLO safety
- `robot_obstacles` — Hailo YOLO + lidar + sonar
- `robot_status` — battery, sonar, uptime, daemon health
- All triggered via Telegram ("Annie, go explore the room")

---

## The New Feature: `car_demo` Tool

**One new Annie tool that wraps ALL Hiwonder demos + existing capabilities into a single interactive experience.**

### User Flow (Telegram)
```
Guest: "Annie, what can your car do?"

Annie: "I've learned quite a few tricks! Here's what I can show you:

🔴 Chase Ball — I'll follow a red, blue, or green ball around the room
🧭 Room Explorer — I'll navigate autonomously using my AI brain (I can see obstacles, plan paths, and even find my way back)
🚦 Traffic Cop — I'll wander around and obey your red/green cards (red = stop, green = go)
🚧 Obstacle Dodge — I'll wander and dodge anything in my way
📱 QR Navigator — Show me QR codes and I'll follow directions
🔍 Color Spotter — I'll identify colors I see
📸 What Do I See? — I'll describe what's in front of me
🎯 Look Around — Tell me where to look

Which one would you like to see?"

Guest: "Chase the blue ball!"

Annie: "Chase Ball mode! 🔵 I'll track the blue ball and drive toward it.
Put it on the floor and I'll follow it around. Try moving it slowly at first!
Starting in 3 seconds... (say 'Annie stop' when you're done)"

[Annie starts ColorTracking on Pi with color=blue, car chases ball]

Guest: "Annie, now the red one!"

Annie: "Switching to red 🔴"
[Annie calls setTargetColor('red') on the running demo]

Guest: "Annie stop"

Annie: "That was fun! Want to try another trick? 🎮"
```

### Alternative flow: Room Explorer (the AI showpiece)
```
Guest: "Annie, explore the room!"

Annie: "Room Explorer 🧭 — I'll use my AI brain to navigate. I can see obstacles
with my lidar and camera, and I'll figure out where to go. Want me to explore
freely, or go find something specific?"

Guest: "Find the sofa"

Annie: "On it! I'll look around and drive toward it. This might take a minute —
I think carefully about every move to avoid bumping into anything."

[Annie calls navigate_robot(goal="find the sofa", return_to_start=true)]
[Car drives autonomously, sensing obstacles, reasoning about the scene]

Annie: "Found it! 🛋️ Heading back to where I started now."

[Car returns via dead reckoning]
```

### Implementation

**New tool: `car_demo`**
```python
from typing import Literal, Optional
from pydantic import BaseModel

class CarDemoInput(BaseModel):
    action: Literal["menu", "start", "stop"]
    demo: Optional[Literal[
        "chase_ball",       # ColorTracking.py (color param: red/blue/green)
        "room_explorer",    # navigate_robot (existing, VLM sense-think-act)
        "traffic_cop",      # NEW — Avoidance wander + red/green card detection
        "obstacle_dodge",   # Avoidance.py
        "qr_navigator",     # QuickMark.py
        "color_spotter",    # ColorDetect.py
        "what_do_i_see",    # robot_photo describe (existing)
        "look_around",      # robot_look (existing)
    ]] = None
    color: Optional[Literal["red", "blue", "green"]] = None  # for chase_ball
    goal: Optional[str] = None  # for room_explorer (e.g., "find the sofa")
```

**LineFollower + VisualPatrol DROPPED** — both require laying tape/colored lines on the floor, which is too messy for a house demo.

**Traffic Cop (new demo)** — grafts the red/green card detection from `LineFollower.py` onto the wander+avoidance logic from `Avoidance.py`. Car drives forward, dodges obstacles via sonar, AND watches for red/green cards via camera. Red card → stop. Green card → resume. Guest controls the car by holding up cards. No tape required.
- Code: new file `~/TurboPi/Functions/TrafficCop.py` — copy Avoidance.py + import the color detection blob logic from LineFollower.py lines 232-280 + add stop/go state machine.
- Bonus: set sonar RGB LEDs to match the detected card color (red card = car shows red, green card = car shows green).
- ~1-2 hours to build, tests in-place.
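
The stop/go core of Traffic Cop can be sketched as a tiny state machine (hypothetical names; the real `TrafficCop.py` would wire `on_frame()` to LineFollower's color-blob detection and gate Avoidance's drive commands on the returned state):

```python
class TrafficCopState:
    """Red card -> STOP, green card -> GO. Everything else is ignored."""

    def __init__(self):
        self.moving = True  # start in wander mode

    def on_frame(self, card_color):
        """card_color: 'red', 'green', or None (no card seen this frame)."""
        if card_color == "red":
            self.moving = False
        elif card_color == "green":
            self.moving = True
        # None: keep current state, so the car doesn't stop just because
        # the card left the camera's view
        return self.moving
```

The wander loop would call `on_frame()` once per camera frame and only issue drive commands while it returns `True`.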

**How it works:**
1. Annie calls `car_demo(action="menu")` → returns list of available demos with descriptions
2. Annie presents them as a Telegram message with descriptions (or inline keyboard buttons)
3. User picks one → Annie calls `car_demo(action="start", demo="chase_ball")`
4. Pi-side: turbopi-server stops FrameGrabber, launches `ColorTracking.py` as subprocess
5. User says "stop" → Annie calls `car_demo(action="stop")` → kills subprocess, restarts FrameGrabber

**Critical: Camera handoff**
- turbopi-server's FrameGrabber holds `/dev/video0` exclusively
- Before starting a Hiwonder demo: release camera (stop FrameGrabber thread)
- After demo stops: restart FrameGrabber
- New Pi endpoint: `POST /demo/start {name: "ColorTracking"}` and `POST /demo/stop`
- Server manages the lifecycle: camera release → subprocess → camera reclaim

**Headless mode fix:**
- Hiwonder demos use `cv2.imshow()` which needs a display
- Fix: patch `cv2.imshow` (and `cv2.waitKey`) to no-ops when `HEADLESS=1`. Blanking `DISPLAY` alone isn't enough: `imshow` raises an error without a display instead of silently skipping
- Or: wrap demos in a headless runner that monkey-patches imshow before importing them
- Car still moves — only the debug window is suppressed
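
A minimal sketch of the monkey-patch approach (the `apply_headless_patch` name is hypothetical): pass in the imported `cv2` module and its GUI calls become no-ops when `HEADLESS=1`:

```python
import os

def apply_headless_patch(cv2_module):
    """No-op cv2's GUI calls when HEADLESS=1 so demos run without a display."""
    if os.environ.get("HEADLESS") == "1":
        cv2_module.imshow = lambda *a, **k: None
        cv2_module.waitKey = lambda *a, **k: -1   # demos often loop on waitKey
        cv2_module.destroyAllWindows = lambda *a, **k: None
    return cv2_module
```

A wrapper runner would do `import cv2; apply_headless_patch(cv2)` before importing the demo module, since the demos call `imshow` inside their main loop.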

### Pi-Side Changes (turbopi-server/main.py)

New endpoints:
```
POST /demo/start    {name: "ColorTracking", params: {color: "red"}}
POST /demo/stop
GET  /demo/status   → {running: true, name: "ColorTracking", pid: 1234, uptime: "45s"}
```

Logic:
1. `/demo/start`:
   - If safety daemon is running with camera, pause FrameGrabber (release /dev/video0)
   - Set `HEADLESS=1` env var (suppress cv2.imshow)
   - `subprocess.Popen(["python3", os.path.expanduser(f"~/TurboPi/Functions/{name}.py")])` (note: `Popen` does not expand `~` on its own)
   - Track PID for cleanup
2. `/demo/stop`:
   - Kill subprocess (SIGINT for clean shutdown — demos handle SIGINT)
   - Restart FrameGrabber
   - Resume safety daemon camera loop
3. `/demo/status`:
   - Return running demo name, PID, uptime
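
The lifecycle above could look like this framework-agnostic sketch (class and method names are hypothetical; `grabber` stands in for the real FrameGrabber thread, assumed to expose `pause()`/`resume()`; `script_dir` is parameterized only to keep the sketch testable):

```python
import os
import signal
import subprocess
import time

class DemoRunner:
    """Camera handoff lifecycle: pause FrameGrabber -> subprocess -> resume."""

    def __init__(self, grabber, script_dir="~/TurboPi/Functions"):
        self.grabber = grabber
        self.script_dir = os.path.expanduser(script_dir)
        self.proc = None
        self.name = None
        self.started_at = None

    def start(self, name):
        if self.proc and self.proc.poll() is None:
            raise RuntimeError(f"{self.name} already running")
        self.grabber.pause()  # release /dev/video0 before the demo opens it
        env = dict(os.environ, HEADLESS="1")  # suppress cv2.imshow
        script = os.path.join(self.script_dir, f"{name}.py")
        self.proc = subprocess.Popen(["python3", script], env=env)
        self.name, self.started_at = name, time.time()
        return self.proc.pid

    def stop(self):
        if self.proc and self.proc.poll() is None:
            self.proc.send_signal(signal.SIGINT)  # demos handle SIGINT cleanly
            try:
                self.proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                self.proc.kill()
        self.proc = None
        self.grabber.resume()  # reclaim the camera

    def status(self):
        running = self.proc is not None and self.proc.poll() is None
        return {
            "running": running,
            "name": self.name if running else None,
            "pid": self.proc.pid if running else None,
            "uptime": f"{int(time.time() - self.started_at)}s" if running else None,
        }
```

The `/demo/*` endpoints would then be thin wrappers over one shared `DemoRunner` instance.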

### Annie-Side Changes (annie-voice/)

New tool in `robot_tools.py`:
- `car_demo(action, demo)` — HTTP calls to Pi `/demo/*` endpoints
- Demo descriptions stored as a dict (not hardcoded in LLM — the tool returns them)
- Annie's system prompt gets: "When asked about car tricks, use car_demo(action='menu') to show available demos"
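
A hedged sketch of the tool itself. The port, payload shapes, and description strings are assumptions to be checked against turbopi-server; only the `menu` action avoids the network, which is why the descriptions live in the tool rather than the LLM:

```python
import json
import urllib.request

PI_BASE = "http://192.168.68.61:8000"  # assumed port; check turbopi-server

DEMOS = {  # descriptions the tool returns for action="menu"
    "chase_ball": "I'll follow a red, blue, or green ball around the room",
    "traffic_cop": "I'll obey your red/green cards (red = stop, green = go)",
    "obstacle_dodge": "I'll wander and dodge anything in my way",
    # ...remaining demos from the menu above
}

def _post(path, payload=None):
    req = urllib.request.Request(
        PI_BASE + path,
        data=json.dumps(payload or {}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

def car_demo(action, demo=None, color=None, goal=None):
    if action == "menu":
        return {"demos": DEMOS}  # no network call; the tool owns the text
    if action == "start":
        params = {k: v for k, v in (("color", color), ("goal", goal)) if v}
        return _post("/demo/start", {"name": demo, "params": params})
    if action == "stop":
        return _post("/demo/stop")
    raise ValueError(f"unknown action: {action}")
```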

### Telegram Inline Keyboard (nice-to-have)

Instead of text menu, send Telegram inline keyboard buttons:
```
[🔴 Chase Ball] [🚦 Traffic Cop]
[🚧 Obstacle Dodge] [📱 QR Navigator]
[🔍 Color Spotter] [🧭 Room Explorer]
[📸 What Do I See?] [🎯 Look Around]
```
Guest taps a button → triggers the demo. More polished than typing.
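
One way to build the keyboard is raw Bot API `reply_markup` JSON, which works with any bot library (or a plain `sendMessage` HTTPS call); the `demo:<name>` callback_data convention here is an assumption:

```python
import json

DEMO_BUTTONS = [
    [("🔴 Chase Ball", "chase_ball"), ("🚦 Traffic Cop", "traffic_cop")],
    [("🚧 Obstacle Dodge", "obstacle_dodge"), ("📱 QR Navigator", "qr_navigator")],
    [("🔍 Color Spotter", "color_spotter"), ("🧭 Room Explorer", "room_explorer")],
    [("📸 What Do I See?", "what_do_i_see"), ("🎯 Look Around", "look_around")],
]

def demo_menu_markup():
    """Telegram InlineKeyboardMarkup JSON: rows of {text, callback_data}."""
    return json.dumps({
        "inline_keyboard": [
            [{"text": text, "callback_data": f"demo:{name}"} for text, name in row]
            for row in DEMO_BUTTONS
        ]
    })
```

The callback handler would strip the `demo:` prefix and call `car_demo(action="start", demo=...)`.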

---

## KILLER FEATURE: Dual-channel interaction (Voice + Telegram)

**Discovery (2026-04-11):** The WonderEcho Pro is a USB composite device with **both microphone AND speaker** that both work. See `docs/RESEARCH-WONDERECHO-PRO.md`.

**Voice is unreliable** (background noise, mishearing, latency, echo feedback, network hiccups). So Sunday's demo is a **dual-channel architecture** where both paths drive the same backend:

1. **Telegram (reliable primary)** — full interactive menu with inline keyboard buttons. Guests tap "🔴 Chase Ball" → Annie starts the demo. This ALWAYS works, no voice, no latency issues. Rajesh can also drive the demo from his own phone as the "host" while guests tap buttons.

2. **Voice (bonus wow layer)** — guests can also talk to the car if they want. "Hey Annie, chase the red ball" → same demo triggers. When voice works, it's magical. When it fails, Telegram is still there.

3. **Shared backend** — one `car_demo` tool handles both input channels. Annie responds the same way regardless of input. Whether the request comes from a Telegram button tap or a voice command, Gemma 4 gets the same structured input and makes the same tool call.

4. **Transcript & state visibility** — Annie posts Whisper transcripts + her responses to Telegram on every voice turn. Guests can see "did she understand me?" and Rajesh has a live debugging lens: is the car misbehaving because of bad STT, bad LLM reasoning, or a demo bug?

### Architecture

```
┌────────────────┐              ┌─────────────────┐
│ Guest (Voice)  │              │ Guest/Rajesh    │
│ "Hey Annie,    │              │ (Telegram)      │
│  chase red"    │              │ [tap 🔴 button] │
└───────┬────────┘              └────────┬────────┘
        │                                │
        ▼                                ▼
┌────────────────┐              ┌─────────────────┐
│ WonderEcho mic │              │ Telegram webhook│
│ → Pi capture   │              │ → callback data │
│ → VAD          │              └────────┬────────┘
│ → WebSocket    │                       │
│ → Titan        │                       │
│ → Whisper      │                       │
│ → "hey annie   │                       │
│   chase red"   │                       │
└───────┬────────┘                       │
        │                                │
        └────────────┬───────────────────┘
                     ▼
              ┌──────────────┐
              │  Gemma 4     │
              │  car_demo    │  ← same tool, same handling
              │  tool call   │
              └──────┬───────┘
                     │
        ┌────────────┼───────────────┐
        ▼            ▼               ▼
┌────────────┐ ┌────────────┐ ┌────────────┐
│ Pi: start  │ │ Kokoro TTS │ │ Telegram:  │
│ Hiwonder   │ │ → Pi speak │ │ transcript │
│ demo       │ │            │ │ + response │
└────────────┘ └────────────┘ └────────────┘
```

### The flow
```
Guest walks up to car (no phone, no terminal, no buttons)
     │
     │ "Hey Annie, what tricks can you do?"
     ▼
WonderEcho Pro mic (noise-canceled, 5m far-field) ───► Pi
     │
     ├─► VAD (webrtcvad) detects speech — starts capture
     │
     ▼ WebSocket audio chunks
Titan: Annie voice pipeline (existing — same as phone)
     ├── Whisper STT  ───► transcript "hey annie what tricks can you do"
     │
     ├── Wake-word gate: does transcript contain "annie" in first 3 words?
     │   ├── YES  ──► proceed to Gemma 4
     │   └── NO   ──► discard (post to Telegram as "🎙️ Heard (ignored): ...")
     │                (prevents Annie from reacting to ambient conversation)
     │                       │
     │                       └──► Telegram: "🎙️ Heard: hey annie what tricks can you do"
     │
     ├── Gemma 4 26B (with car_demo tools available)
     │    Response: "Hi! I've learned quite a few tricks. I can chase colored
     │               balls, explore the room, be a traffic cop with red and
     │               green cards... what would you like to see?"
     │                       │
     │                       └──► Telegram: "🤖 Annie: Hi! I've learned..."
     │
     └── Kokoro TTS
            │
            ▼ WebSocket audio chunks back
Pi: aplay -D plughw:2,0 (WonderEcho Pro speaker, 3W)
     │
     ▼
Guest hears Annie's response from the car
```

### Wake-word "Annie" — implementation

**No extra library needed.** Since Whisper runs on every VAD-triggered chunk anyway, the wake-word check is a string-match on the transcript:

```python
import re

def is_wake_triggered(transcript: str) -> bool:
    # Normalize: lowercase, strip punctuation, keep only the first 3 words
    words = re.findall(r"\w+", transcript.lower())[:3]
    # "annie", "hey annie", "ok annie", "hi annie" all match this check
    return "annie" in words
```

**Why "Annie" near the start (first 3 words) rather than anywhere in the sentence:**
- Prevents accidental wake from sentences like "I went to see Annie yesterday"
- Matches natural call patterns: "Annie, stop" / "Hey Annie, chase the ball"
- Annie can be dropped after the first call in a conversation (session-based: once woken, she listens for follow-ups for ~30s without re-requiring "Annie")

**Follow-up mode:**
- After first wake, Annie stays "listening" for 30 seconds
- Follow-ups don't need "Annie" prefix: "stop" / "now the blue ball" / "explore the room"
- 30s of silence → goes back to wake-word mode
- Telegram shows which mode we're in: "🎙️ (wake)" vs "🎙️ (follow-up)"
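
The wake + follow-up logic fits in a small class (a sketch; the choice that every accepted turn extends the 30 s window, not just the initial wake, is an assumption beyond what's specified above):

```python
import time

class WakeGate:
    """Wake-word gate with a follow-up window; clock is injectable for testing."""

    def __init__(self, window_s=30, clock=time.monotonic):
        self.window_s = window_s
        self.clock = clock
        self.last_accept = None

    def in_follow_up(self):
        return (self.last_accept is not None
                and self.clock() - self.last_accept < self.window_s)

    def accept(self, transcript):
        """True if this transcript should be fed to the LLM."""
        words = [w.strip(".,!?") for w in transcript.lower().split()[:3]]
        if "annie" in words or self.in_follow_up():
            self.last_accept = self.clock()  # accepted turns extend the window
            return True
        return False
```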

**Edge case: guest has a friend named Annie in the room**
- If a real Annie is present, the car might wake when her name is said
- Graceful fallback: if Gemma 4 doesn't understand the command (not a car-related request), Annie says "Sorry, I thought you were talking to me. I'll go back to listening."
- Rajesh can mute voice mode via Telegram button if it becomes a problem

### Why this works with minimal code
The entire Titan-side pipeline already exists — it's what Annie uses with the Panda phone (Whisper + Gemma 4 + Kokoro + WebSocket streaming). We're only swapping the audio SOURCE from "Panda Bluetooth HFP" to "Pi USB audio (WonderEcho Pro)". Everything else is unchanged.

**Telegram as debugging lens:** for each voice turn, Annie posts two messages:
- `🎙️ Heard: "<whisper transcript>"` — shows exactly what she understood
- `🤖 Annie: "<response text>"` — shows exactly what she said back

Guests can see if misunderstandings are STT errors (mishearing) vs LLM errors (wrong interpretation). Rajesh gets a live debugging lens — if the car misbehaves, glance at Telegram to see what went wrong.

### Implementation — Prioritized by reliability

**Track 1: Telegram interface (MUST HAVE — reliable primary)**
| # | Task | Effort |
|---|------|--------|
| T1 | Pi-side: `/demo/start`, `/demo/stop`, `/demo/status` endpoints + camera handoff (FrameGrabber pause/resume) | ~2-3h |
| T2 | Build `TrafficCop.py` from Avoidance + LineFollower color detection | ~1-2h |
| T3 | Headless patch for Hiwonder demos (suppress `cv2.imshow`) | ~30m |
| T4 | Annie-side: `car_demo` tool with menu/start/stop + color/goal params | ~2h |
| T5 | Telegram inline keyboard buttons — full menu with emoji buttons | ~1h |
| T6 | Wire transcript/response logging to Telegram (shared with Track 2) | ~30m |

**Telegram subtotal: ~7-9 hours** — this is the reliable path. Every guest-facing demo can run on this path alone.

**Track 2: Voice pipeline (BONUS — wow layer on top)**
What DOESN'T need to be built (already exists, battle-tested with Panda phone):
- ✅ Whisper STT on Titan
- ✅ Gemma 4 26B LLM on Titan with tool calling
- ✅ Kokoro TTS on Titan
- ✅ WebSocket audio streaming protocol
- ✅ Turn-taking, barge-in handling

| # | Task | Effort |
|---|------|--------|
| V1 | Pi audio capture loop (`arecord` or `pyaudio` from `plughw:2,0`) | ~1h |
| V2 | Pi WebSocket client — connect to Annie's existing session endpoint, stream audio up | ~1h |
| V3 | VAD gate on Pi (`webrtcvad`) | ~30m |
| V4 | Pi audio playback — receive TTS from Titan, play via `aplay -D plughw:2,0` | ~30m |
| V5 | Wake-word gate on Titan — string match "annie" in first 3 words of Whisper transcript | ~15m |
| V6 | Follow-up mode — 30s window after first wake | ~30m |
| V7 | **PipeWire echo cancellation** — `libpipewire-module-echo-cancel` (proven on Panda) | ~1h |

**Voice subtotal: ~4.5 hours** — only build after Track 1 is working.

**Total: ~11.5-13.5 hours** (Track 1 + Track 2) — tight for 1.5 days. If short on time, ship Track 1 only; add Track 2 opportunistically.

### Build order
1. Track 1 FIRST (reliable primary path) — finish T1-T6 before touching voice
2. Smoke test: run through all 8 demos via Telegram buttons
3. Track 2 SECOND (bonus layer) — only if Track 1 is solid
4. Final rehearsal: run both channels in parallel

### Risks
1. **Echo feedback** — most likely problem. WonderEcho Pro's noise cancellation may or may not include echo suppression. Mitigation: PipeWire `libpipewire-module-echo-cancel` (proven on Panda).
2. **Latency** — Pi→Titan round trip + Whisper + Gemma 4 + Kokoro ≈ 2-3 seconds end-to-end. Guests might think the car is slow. Mitigation: a soft "mmm..." thinking sound on Kokoro, same as the phone does.
3. **Demo conflict** — while a Hiwonder demo (ColorTracking, Avoidance, etc.) is running, the Pi subprocess holds the camera. Voice must stay independent. The audio device is separate from the camera, so there's no direct conflict, but the audio capture thread must survive demo subprocess starts and stops.
4. **Multiple speakers** — if guests talk over each other, Whisper will transcribe nonsense. Mitigation: far-field mic helps; worst case, Annie responds "I didn't catch that, can you say it again?"

### Fallback hierarchy
Telegram is the reliable primary — voice is the bonus. Graceful degradation:

| State | What works | What the demo looks like |
|-------|-----------|-------------------------|
| Everything working | Telegram buttons + voice in + voice out | Guest talks to car OR taps buttons, hears Annie through speaker, sees transcripts in Telegram |
| Voice capture broken | Telegram buttons + voice out (TTS only) | Guest taps buttons, Annie speaks responses through car speaker |
| Car speaker broken | Telegram buttons, text responses | Guest taps buttons, reads Annie's text replies in Telegram (still functional, just less magical) |
| All voice broken | Telegram buttons only | Pure tap-to-control. Still fully reliable as long as the phone and Pi both have internet access (Telegram traffic goes through Telegram's servers, so a shared LAN alone isn't enough) |
| Telegram broken | Direct SSH to Pi | Last resort. Rajesh runs Hiwonder scripts manually from laptop. Guests see the car do tricks but not Annie |

**Key insight: every tier still puts on a show.** Even worst case, the car does its thing. The tiers just reduce how "interactive" Annie feels.

### Bonus: car buzzer acknowledgement
- When Annie's VAD detects speech, short chirp on car buzzer ("I heard you, processing...")
- When Annie responds, soft chime before the TTS plays ("here's my answer")
- Gives the audio interaction tactile feedback even before Annie speaks

## Bonus: Car Personality During Demos

If time permits, add flair:

- **Sonar RGB LEDs** — change color based on active demo mode
  - Chase Ball: red pulse. Traffic Cop: green (go) / red (stop). Obstacle Dodge: yellow flash. Idle: soft blue.
  - `POST /rgb {r, g, b}` endpoint (simple)

- **OLED Display** — show current mode name
  - "🔴 Chasing..." / "🚦 Traffic Cop..." / "🧭 Exploring..."
  - Endpoint already exists (`/oled`)

- **Buzzer** — sound on demo start/stop
  - Happy chirp on start, done tone on stop
  - Endpoint already exists (`/buzzer`)
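
A sketch of wiring the mode colors to the existing `/rgb` endpoint. Only `POST /rgb {r, g, b}` comes from the plan above; the mapping values and payload shape are assumptions:

```python
import json
import urllib.request

MODE_COLORS = {
    "chase_ball":     (255, 0, 0),    # red pulse
    "traffic_cop":    (0, 255, 0),    # green (go), flipped to red on stop
    "obstacle_dodge": (255, 255, 0),  # yellow flash
    "idle":           (0, 60, 255),   # soft blue
}

def mode_rgb(mode):
    """Resolve a demo mode to its LED color, defaulting to idle blue."""
    return MODE_COLORS.get(mode, MODE_COLORS["idle"])

def set_demo_leds(mode, base="http://192.168.68.61:8000"):
    r, g, b = mode_rgb(mode)
    req = urllib.request.Request(
        base + "/rgb",
        data=json.dumps({"r": r, "g": g, "b": b}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```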

---

## Critical Prep Tasks

### Physical Props
- [x] Colored balls — Hiwonder included red, blue, and green balls with the kit
- [x] Batteries — 2x 18650 2200mAh, fully charged
- [ ] Print red and green cards (Traffic Cop — stop/go signals)
- [ ] Print QR codes with values "1", "2", "3", "4" (QuickMark)
- [ ] Clear a space in the living room for the demos (no tape needed — autonomous nav + ball chase + card control are all floor-safe)

### Software Testing (BEFORE building the tool)
- [ ] SSH into Pi, run ColorTracking, Avoidance, QuickMark, ColorDetect manually to verify they work
- [ ] Calibrate `lab_config.yaml` for demo room lighting (ColorTracking + Traffic Cop need this for red/blue/green detection)
- [ ] Test headless mode — can demos run without `cv2.imshow`?
- [ ] Test camera handoff — stop FrameGrabber, start demo, stop demo, restart FrameGrabber
- [ ] Test Annie navigate_robot in the demo room with a concrete goal ("find the sofa")
- [ ] Verify Avoidance demo with sonar (known false readings issue — if unreliable, switch to lidar-based wander)
- [ ] **Build Traffic Cop** — copy Avoidance.py, graft in color detection from LineFollower.py, wire stop/go state machine

### Implementation Order
1. **Test Hiwonder demos manually** — ColorTracking, Avoidance, QuickMark, ColorDetect (~1 hour)
2. **Headless patch** — make demos run without display (~30 min)
3. **Camera handoff** — FrameGrabber pause/resume mechanism (~1 hour)
4. **Build Traffic Cop** — new `TrafficCop.py` combining Avoidance + color detection (~1-2 hours)
5. **Pi endpoints** — `/demo/start`, `/demo/stop`, `/demo/status` (~2 hours)
6. **Annie tool** — `car_demo` tool with menu/start/stop + color + goal params (~2 hours)
7. **Telegram menu** — inline keyboard buttons for demo selection (nice-to-have, ~1 hour)
8. **Car personality** — RGB LEDs + OLED + buzzer per mode (nice-to-have, ~1 hour)
9. **End-to-end rehearsal** — run through all 8 demos in sequence (~1 hour)

**Total: ~9-11 hours for core (steps 1-6), ~2 hours for polish (steps 7-8), ~1 hour rehearsal**

### The Final 8 Demos (what guests will see)
| # | Demo | Source | Prep Needed |
|---|------|--------|:---:|
| 1 | 🔴 Chase Ball (red/blue/green) | ColorTracking.py + color param | Balls (✓) |
| 2 | 🧭 Room Explorer | navigate_robot (existing) | Clear space |
| 3 | 🚦 Traffic Cop | NEW: Avoidance + red/green cards | Print cards |
| 4 | 🚧 Obstacle Dodge | Avoidance.py | Clear space |
| 5 | 📱 QR Navigator | QuickMark.py | Print 4 QR codes (1-4) |
| 6 | 🔍 Color Spotter | ColorDetect.py | None |
| 7 | 📸 What Do I See? | robot_photo describe (existing) | None |
| 8 | 🎯 Look Around | robot_look (existing) | None |

---

## Known Issues & Mitigations

| Issue | Impact | Mitigation |
|-------|--------|-----------|
| FaceTracking BLOCKED (mediapipe) | Can't demo face tracking | Skip it. Use Hailo person detection as future alternative |
| GestureRecognition BLOCKED | Can't demo gesture control | Skip it |
| Sonar false readings | Avoidance may behave oddly | Test in demo room. If unreliable, drop from menu |
| Camera conflict | Demo won't start if FrameGrabber holds camera | Camera handoff mechanism (step 3) |
| cv2.imshow needs display | Demos crash headless | Headless patch (step 2) |
| Dead reckoning drifts | Return-to-start isn't precise | Don't demo return. Just do outbound exploration |
| WiFi drops | Annie can't reach car | Hiwonder demos run locally on the Pi — a demo that's already running keeps going even if WiFi drops |
| lab_config.yaml uncalibrated | ColorTracking won't find colors | Calibrate for demo room lighting (prep task) |

## Fallback Plan
If the `car_demo` tool integration isn't ready in time:
- Annie handles her existing tools normally (photo, navigate, drive, look)
- Rajesh manually starts Hiwonder demos via SSH on laptop (less impressive but works)
- Key insight: test the Hiwonder demos early. If they don't work, we have time to debug.
