# Next Session: Hailo YOLO Ball Tracker + Sonar LED Fix + Enhanced Demos

> **Supersedes** `docs/NEXT-SESSION-SUNDAY-DEMO-FIX-V5.md` (Track B only).
> Session 55 (2026-04-11) was a planning session. No code was written.

## TL;DR

Replace Hiwonder's fragile LAB-based ball detection with Hailo YOLO inference.
The Pi already has a Hailo-8 NPU and `yolov8s_h8.hef` running at 30 FPS for the
safety daemon. During demos the safety daemon is disabled, so the NPU is free. Create
`HailoTracking.py` that detects "sports ball" (COCO class 32) regardless of
color, lighting, or camera tint.

**Bonus discovery:** The ultrasonic sensor's blue LED (causing the pink camera
tint that kills red ball detection) is software-controllable via I2C. Three lines
in `_init_hardware()` turn it off — no electrical tape needed.

## Three tracks

1. **Sonar LED fix** (~5 min) — turn off ultrasonic I2C RGB LEDs at server startup
2. **HailoTracking.py** (~2 hours) — new ball tracker using Hailo YOLO
3. **Enhanced demos roadmap** (future) — person following, face tracking, segmentation

## Git state at handoff

- Branch: `main` at commit `1fd9e7a`
- Pushed to `origin/main`
- Plan file: `~/.claude/plans/hidden-imagining-balloon.md` (approved, not yet executed)
- No code changes from this session

## Pi state

- `turbopi-server`: active, idle
- Hailo safety daemon: running (auto-stopped when a demo starts)
- Power: `throttled=0x0`, stable
- Sonar LEDs: still ON (blue), causing pink camera tint
- **Car is on its back** — put it right-side up before testing

---

## Track 1: Sonar LED Fix (5 min)

### Discovery

The ultrasonic sensor (`HiwonderSDK/Sonar.py`) has 2 I2C RGB LEDs at address
`0x77`. They are NOT power LEDs — they are programmable via `setPixelColor()`.
Hiwonder's own `TurboPi.py:72-74` turns them off at startup, but our custom
`turbopi-server/main.py` never does.

### Fix

**File:** `services/turbopi-server/main.py:211-215`

```python
def _init_hardware() -> tuple[Board, Sonar]:
    """Initialize Board + Sonar. Camera is handled by FrameGrabber."""
    board = Board(device="/dev/ttyAMA0")
    sonar = Sonar()
    # Kill sonar's blue LEDs — they cause pink camera tint
    sonar.setRGBMode(0)
    sonar.setPixelColor(0, (0, 0, 0))
    sonar.setPixelColor(1, (0, 0, 0))
    return board, sonar
```

### Quick test (before full deploy)

```bash
ssh pi "python3 -c \"from HiwonderSDK.Sonar import Sonar; s=Sonar(); s.setRGBMode(0); s.setPixelColor(0,(0,0,0)); s.setPixelColor(1,(0,0,0))\""
```

### Verify

Camera image should no longer have pink tint. Red ball LAB detection may now
work without any code changes. Test with existing Chase Red Ball demo.
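If eyeballing the tint feels too subjective, a throwaway metric can confirm the fix. This is a sketch, not part of the deliverable: `pink_tint_score` is a name we made up, and frames are assumed to be BGR numpy arrays from whatever grab path you already use.

```python
import numpy as np

def pink_tint_score(frame_bgr: np.ndarray) -> float:
    """Mean (red - green) per pixel; a pink/magenta cast pushes this positive.

    A neutral scene hovers near 0; a strongly pink-tinted frame scores well above it.
    """
    g = frame_bgr[..., 1].astype(np.float64)
    r = frame_bgr[..., 2].astype(np.float64)
    return float(r.mean() - g.mean())

# Compare a frame captured before and after the LED fix, e.g.:
# before = cv2.imread("before.jpg"); after = cv2.imread("after.jpg")
# print(pink_tint_score(before), pink_tint_score(after))
```

The score should drop noticeably once the blue LEDs are off; the absolute value is meaningless, only the before/after delta matters.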

---

## Track 2: HailoTracking.py — Implementation Guide

### Architecture

Copy ColorTracking.py's full skeleton and replace ONLY the `run(img)` body with
Hailo YOLO inference. Reuse safety.py's proven Hailo pattern.

```
ColorTracking:  run(img) → LAB threshold → contour → center_x, center_y, radius
HailoTracking:  run(img) → Hailo YOLO → bbox class 32 → center_x, center_y, radius
                                     └→ optional LAB color filter on bbox crop
```

The PID loops, servo tracking, chassis translation, and `move()` thread are
100% identical. The only new code is the `run(img)` body (~40 lines) and the
Hailo init/cleanup (~30 lines).

### Source files to reference

| Source file | What to copy | Lines |
|-------------|-------------|-------|
| `vendor/turbopi/Functions/ColorTracking.py` | Full skeleton: globals, PIDs, move(), lifecycle | All 309 lines |
| `services/turbopi-server/safety.py` | Hailo import, model loading, inference, postprocessing | 24-38, 338-438 |

### File to create: `vendor/turbopi/Functions/HailoTracking.py`

#### Section 1: Imports and Hailo conditional import

Copy from `safety.py:24-38`:
```python
import os, sys, cv2, time, math, signal, threading, numpy as np
sys.path.append('/home/pi/TurboPi/')
import yaml_handle
import HiwonderSDK.PID as PID
import HiwonderSDK.Misc as Misc
import HiwonderSDK.mecanum as mecanum

# Conditional Hailo import (same pattern as safety.py)
_HAILO_AVAILABLE = False
try:
    from hailo_platform import (
        FormatType, HEF, InferVStreams,
        InputVStreamParams, OutputVStreamParams, VDevice,
    )
    _HAILO_AVAILABLE = True
except ImportError:
    print("hailo_platform not available — HailoTracking disabled")
```

#### Section 2: Module globals (copy from ColorTracking.py:18-46)

```python
board = None
car = mecanum.MecanumChassis()
servo1 = 1500; servo2 = 1500; servo_x = servo2; servo_y = servo1
color_radius = 0; color_center_x = -1; color_center_y = -1
car_en = False; wheel_en = False; size = (640, 480)
target_color = (); __isRunning = False

car_x_pid = PID.PID(P=0.25, I=0.001, D=0.0001)
car_y_pid = PID.PID(P=1.00, I=0.001, D=0.0001)
servo_x_pid = PID.PID(P=0.06, I=0.0003, D=0.0006)
servo_y_pid = PID.PID(P=0.06, I=0.0003, D=0.0006)

lab_data = None; servo_data = None
```

#### Section 3: Hailo model globals and functions

```python
SPORTS_BALL_CLASS = 32  # COCO class index
MIN_CONFIDENCE = 0.3
_model_path = os.environ.get("HAILO_MODEL_PATH", "/usr/share/hailo-models/yolov8s_h8.hef")

# Hailo state (initialized in init(), cleaned up in stop())
_hef = None; _vdevice = None; _network_group = None
_input_info = None; _input_shape = None
_activated_ng = None; _pipeline = None

def _load_hailo_model():
    """Load HEF model onto Hailo NPU. Pattern from safety.py:338-369."""
    global _hef, _vdevice, _network_group, _input_info, _input_shape
    global _activated_ng, _pipeline
    _hef = HEF(_model_path)
    _vdevice = VDevice()
    _network_group = _vdevice.configure(_hef)[0]
    _input_info = _hef.get_input_vstream_infos()[0]
    _input_shape = tuple(_input_info.shape)  # (H, W, C)
    # CRITICAL: activate network group BEFORE creating InferVStreams
    ng_params = _network_group.create_params()
    _activated_ng = _network_group.activate(ng_params)
    _activated_ng.__enter__()
    input_vstream_params = InputVStreamParams.make(
        _network_group, quantized=False, format_type=FormatType.FLOAT32)
    output_vstream_params = OutputVStreamParams.make(
        _network_group, quantized=False, format_type=FormatType.FLOAT32)
    _pipeline = InferVStreams(
        _network_group, input_vstream_params, output_vstream_params)
    _pipeline.__enter__()
    print(f"HailoTracking: model loaded, input={_input_shape}")

def _cleanup_hailo():
    """Release Hailo resources. Pattern from safety.py:297-309."""
    global _pipeline, _activated_ng
    if _pipeline is not None:
        try: _pipeline.__exit__(None, None, None)
        except Exception: pass
        _pipeline = None
    if _activated_ng is not None:
        try: _activated_ng.__exit__(None, None, None)
        except Exception: pass
        _activated_ng = None
```

#### Section 4: Lifecycle functions (copy from ColorTracking.py:47-154)

Copy verbatim: `load_config()`, `initMove()`, `range_rgb`, `car_stop()`,
`set_rgb()`, `reset()`, `setTargetColor()`, `setVehicleFollowing()`,
`getAreaMaxContour()`.

Modify `init()` to also load Hailo:
```python
def init():
    print("HailoTracking Init")
    load_config()
    reset()
    initMove()
    if _HAILO_AVAILABLE:
        _load_hailo_model()
```

Modify `stop()` and `exit()` to also clean up Hailo (add the same `_cleanup_hailo()` call to both):
```python
def stop():
    global __isRunning
    reset(); initMove(); car_stop()
    __isRunning = False
    set_rgb('None')
    _cleanup_hailo()
    print("HailoTracking Stop")
```

`start()` — copy verbatim from ColorTracking.

#### Section 5: move() thread (copy verbatim from ColorTracking.py:170-243)

Copy the ENTIRE `move()` function and its thread start exactly as-is. This is the
PID servo/chassis control loop that reads the `color_center_x/y` and `color_radius`
globals, and it includes the `car.translation(-dx, -dy)` sign fix.

#### Section 6: run(img) — THE ONLY NEW CODE

```python
def run(img):
    global __isRunning, color_radius
    global color_center_x, color_center_y

    if not __isRunning or _pipeline is None:
        return img

    img_h, img_w = img.shape[:2]

    # 1. Hailo YOLO inference
    h, w = _input_shape[0], _input_shape[1]
    resized = cv2.resize(img, (w, h))
    input_data = np.expand_dims(resized, axis=0).astype(np.float32) / 255.0
    feed = {_input_info.name: input_data}
    raw_output = _pipeline.infer(feed)

    # 2. Parse — extract ONLY sports ball detections (class 32)
    balls = []
    for output_tensor in raw_output.values():
        batch = output_tensor[0] if isinstance(output_tensor, list) else output_tensor
        if not isinstance(batch, list) or SPORTS_BALL_CLASS >= len(batch):
            continue
        class_dets = batch[SPORTS_BALL_CLASS]
        if not hasattr(class_dets, 'shape') or class_dets.shape[0] == 0:
            continue
        for det_row in class_dets:
            conf = float(det_row[4])
            if conf < MIN_CONFIDENCE:
                continue
            y1, x1, y2, x2 = det_row[:4]  # normalized [0,1]
            balls.append((conf, x1, y1, x2, y2))

    # 3. Optional color filter
    if target_color and balls:
        balls = _filter_by_color(img, balls, img_w, img_h)

    # 4. Best ball → update globals for move() thread
    if balls:
        balls.sort(key=lambda b: b[0], reverse=True)
        conf, x1, y1, x2, y2 = balls[0]
        cx = ((x1 + x2) / 2.0) * img_w
        cy = ((y1 + y2) / 2.0) * img_h
        bw = (x2 - x1) * img_w
        bh = (y2 - y1) * img_h
        radius = max(bw, bh) / 2.0

        color_center_x = int(cx)
        color_center_y = int(cy)
        color_radius = int(radius)

        if color_radius > 300:
            color_radius = 0
            color_center_x = -1
            color_center_y = -1
            return img

        cv2.circle(img, (color_center_x, color_center_y), color_radius, (0, 255, 255), 2)
        cv2.putText(img, f"ball {conf:.2f}",
                    (color_center_x - 30, color_center_y - color_radius - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1)
    else:
        color_radius = 0
        color_center_x = -1
        color_center_y = -1

    return img


def _filter_by_color(img, balls, img_w, img_h):
    """Post-filter: crop YOLO bbox, check mean LAB against lab_config."""
    if lab_data is None:
        return balls
    filtered = []
    for conf, x1, y1, x2, y2 in balls:
        px1 = max(0, int(x1 * img_w))
        py1 = max(0, int(y1 * img_h))
        px2 = min(img_w, int(x2 * img_w))
        py2 = min(img_h, int(y2 * img_h))
        if px2 <= px1 or py2 <= py1:
            continue
        crop = img[py1:py2, px1:px2]
        crop_lab = cv2.cvtColor(crop, cv2.COLOR_BGR2LAB)
        mean_lab = crop_lab.mean(axis=(0, 1))
        for color_name in target_color:
            if color_name not in lab_data:
                continue
            lo = lab_data[color_name]['min']
            hi = lab_data[color_name]['max']
            if (lo[0] <= mean_lab[0] <= hi[0] and
                lo[1] <= mean_lab[1] <= hi[1] and
                lo[2] <= mean_lab[2] <= hi[2]):
                filtered.append((conf, x1, y1, x2, y2))
                break
    return filtered
```
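The step-2 parsing loop can be sanity-checked off the Pi with a synthetic NMS-style output. The fake shape below (dict of batches, each batch a list of 80 per-class `[y1, x1, y2, x2, score]` arrays) is an assumption based on the parsing code above; confirm it against what `safety.py` actually receives before trusting this.

```python
import numpy as np

SPORTS_BALL_CLASS = 32
MIN_CONFIDENCE = 0.3

def parse_balls(raw_output):
    """Same extraction logic as run() step 2, pulled out for offline testing."""
    balls = []
    for output_tensor in raw_output.values():
        batch = output_tensor[0] if isinstance(output_tensor, list) else output_tensor
        if not isinstance(batch, list) or SPORTS_BALL_CLASS >= len(batch):
            continue
        class_dets = batch[SPORTS_BALL_CLASS]
        if not hasattr(class_dets, 'shape') or class_dets.shape[0] == 0:
            continue
        for det_row in class_dets:
            conf = float(det_row[4])
            if conf < MIN_CONFIDENCE:
                continue
            y1, x1, y2, x2 = det_row[:4]
            balls.append((conf, float(x1), float(y1), float(x2), float(y2)))
    return balls

# Synthetic output: 80 empty per-class arrays, one strong ball at class 32
# plus one below-threshold detection that should be filtered out.
fake_batch = [np.zeros((0, 5), dtype=np.float32) for _ in range(80)]
fake_batch[SPORTS_BALL_CLASS] = np.array(
    [[0.2, 0.3, 0.6, 0.7, 0.9],
     [0.0, 0.0, 0.1, 0.1, 0.1]], dtype=np.float32)
fake_output = {"yolov8s/nms": [fake_batch]}
print(parse_balls(fake_output))  # one ball, conf ~0.9
```

Pulling the loop into a standalone function like this also makes the `test_run_detects_ball` / `test_run_ignores_non_ball` unit tests below trivial to write.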

#### Section 7: Signal handler (copy from ColorTracking.py:296-303)

```python
def manual_stop(signum, frame):
    global __isRunning
    __isRunning = False
    car_stop()
    initMove()
    _cleanup_hailo()
```

### Files to modify (5 files: allowlists + button wiring)

#### 1. `services/turbopi-server/pi-files/_headless_runner.py`

**Line 39:** Add `"HailoTracking"` to `ALLOWED_SCRIPTS`:
```python
ALLOWED_SCRIPTS = {"ColorTracking", "HailoTracking", "Avoidance", "TrafficCop", "QuickMark", "ColorDetect"}
```

**After line 108:** Add init block:
```python
elif script_name == "HailoTracking":
    mod.init()
    mod.start()
    if demo_color:
        mod.setTargetColor((demo_color,))
    # No default — empty target_color = chase ANY ball
    mod.setVehicleFollowing(True)
```

#### 2. `services/turbopi-server/demo_runner.py:44-46`

Add `"HailoTracking"` to `ALLOWED_DEMO_SCRIPTS` frozenset.

#### 3. `services/turbopi-server/test_demo_runner.py:68-70`

Update `test_scripts_allowlist_exact` to include `"HailoTracking"`.

#### 4. `services/telegram-bot/car_demo_handler.py`

| Location | Add |
|----------|-----|
| `ALLOWED_DEMOS` (line 51) | `"hailo_chase"` |
| `DEMO_TO_SCRIPT` (line 73) | `"hailo_chase": "HailoTracking"` |
| `DEMO_BUTTONS` (line 89, after chase ball group) | 4 buttons: any/red/blue/green |
| `DEMO_LABELS` (line 108) | `"hailo_chase": "Hailo Chase Ball"` |
| Color passthrough (~line 433) | `if demo not in ("chase_ball", "hailo_chase"):` |

Telegram buttons:
```python
("🎯", "Hailo Chase (any)", "car_demo:hailo_chase"),
("🎯", "Hailo Red",         "car_demo:hailo_chase:red"),
("🎯", "Hailo Blue",        "car_demo:hailo_chase:blue"),
("🎯", "Hailo Green",       "car_demo:hailo_chase:green"),
```

#### 5. `services/telegram-bot/tests/test_car_demo_handler.py`

- Update `test_demos_set_exact` to include `"hailo_chase"`
- Add `test_hailo_chase_any` — callback `car_demo:hailo_chase` → script `HailoTracking`, no color
- Add `test_hailo_chase_red` — callback `car_demo:hailo_chase:red` → color `"red"`
- Add `test_hailo_buttons_under_64_bytes`
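Telegram caps `callback_data` at 64 bytes, which is what `test_hailo_buttons_under_64_bytes` guards against. A minimal sketch (button tuples copied from the list above; the standalone harness is ours, the real test should live in pytest):

```python
# Telegram rejects callback_data over 64 bytes; check ours stay well under.
HAILO_BUTTONS = [
    ("🎯", "Hailo Chase (any)", "car_demo:hailo_chase"),
    ("🎯", "Hailo Red",         "car_demo:hailo_chase:red"),
    ("🎯", "Hailo Blue",        "car_demo:hailo_chase:blue"),
    ("🎯", "Hailo Green",       "car_demo:hailo_chase:green"),
]

def test_hailo_buttons_under_64_bytes():
    for _emoji, _label, callback_data in HAILO_BUTTONS:
        assert len(callback_data.encode("utf-8")) <= 64, callback_data

test_hailo_buttons_under_64_bytes()
```

Measure encoded bytes, not `len(str)`: the emoji lives in the label, but if a callback string ever picks up non-ASCII characters the byte count is what Telegram enforces.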

### Unit tests: `vendor/turbopi/Functions/test_hailo_tracking.py`

Mock `hailo_platform` (same pattern as `test_safety.py`). Key tests:
- `test_run_detects_ball` — mock infer → sports ball at known coords → verify center/radius
- `test_run_ignores_non_ball` — only person detection → center stays -1
- `test_run_no_detections` — empty output → center stays -1
- `test_color_filter_match` — matching LAB crop → ball accepted
- `test_color_filter_reject` — wrong color → ball rejected
- `test_no_color_chases_any` — empty target_color → all balls accepted
- `test_cleanup_on_stop` — verify `__exit__` called
- `test_interface_exports` — module has init, start, stop, run, setTargetColor, etc.

---

## Track 3: Enhanced Demos Roadmap (future sessions)

The Pi has HEF models pre-installed at `/usr/share/hailo-models/`:

| Model | Capability | Demo button idea |
|-------|-----------|-----------------|
| `yolov8s_h8.hef` | 80-class detection | **🎯 Hailo Chase Ball** (this session) |
| `yolov8s_pose_h8.hef` | Human pose estimation | **🧍 Follow Person** — detect person → PID follow + skeleton overlay |
| `yolov5s_personface_h8l.hef` | Person + face detection | **😊 Face Tracker** — pan/tilt servo tracks nearest face |
| `yolov5n_seg_h8.hef` | Instance segmentation | **🎨 Segment & Describe** — segment scene → Gemma 4 description |

All four use the same `_load_hailo_model()` pattern and YOLO output format; the
only differences are which HEF to load, which COCO class(es) to filter on, and
what action to take on detections.

**OpenCV enhancements** (no Hailo needed):
- **📊 Object Distance** — YOLO bbox + pinhole model → distance overlay
- **🔄 Optical Flow** — `cv2.calcOpticalFlowPyrLK` motion tracking
- **🎯 ArUco Navigation** — `cv2.aruco` markers for waypoint-based driving
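For the 📊 Object Distance idea, the pinhole model reduces to `distance = focal_px * real_width / bbox_width_px`. A sketch with placeholder constants (the focal length and ball diameter below are invented defaults and must be calibrated against the actual camera):

```python
def distance_from_bbox(bbox_width_px: float,
                       real_width_m: float = 0.065,  # placeholder: ~tennis ball diameter
                       focal_px: float = 500.0) -> float:  # placeholder: calibrate!
    """Pinhole model: distance = focal_length_px * real_width / width_in_pixels."""
    if bbox_width_px <= 0:
        raise ValueError("bbox width must be positive")
    return focal_px * real_width_m / bbox_width_px

# A 65 mm ball spanning 100 px at an assumed 500 px focal length sits ~0.325 m away:
print(round(distance_from_bbox(100.0), 3))  # 0.325
```

Calibration is one measurement: put the ball at a known distance, read its bbox width, and solve `focal_px = distance * bbox_width_px / real_width_m`.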

Each demo follows the `Functions/XxxTracking.py` pattern, registered in
`_headless_runner.py`, exposed via Telegram buttons. ~200 lines each, reusing
the proven PID/servo/move() infrastructure.

---

## Deploy & verify

```bash
# 1. Deploy vendor/ to Pi
scripts/deploy-turbopi.sh

# 2. Restart turbopi-server (picks up sonar LED fix + new HailoTracking)
ssh pi "sudo systemctl restart turbopi-server"

# 3. Verify sonar LEDs off — camera should show no pink tint
# 4. Deploy telegram-bot (on Titan)
./stop.sh telegram && ./start.sh telegram

# 5. Test via Telegram:
#    - Tap 🎯 Hailo Chase (any) → place any ball in view → car should track + follow
#    - Tap 🎯 Hailo Red → only red balls tracked
#    - Tap 🔴 Chase Red Ball (old LAB) → compare with Hailo version
```

---

## Known issues carrying forward

1. **§5 guard-loop `jq -e` predicate** — uses `.phase` vs `.demo_phase`, `.frame_grabber_healthy` doesn't exist
2. **Red ball LAB detection** — may be fixed once sonar LEDs are off. Test before assuming it's broken.
3. **4 pre-existing test failures** in `test_bot.py::TestHandleText` and `TestDebounce` (localhost:7860 auth issue, unrelated)

## Prompt for next session

```
Implement Hailo YOLO ball tracker. Read docs/NEXT-SESSION-HAILO-BALL-TRACKER.md.

Three tracks:
1. Sonar LED fix (5 min) — 3 lines in _init_hardware() to kill blue LED
2. HailoTracking.py (2 hours) — YOLO ball detection replacing LAB
3. Enhanced demo roadmap (future) — person following, face tracking

All code patterns are in the handoff doc with exact file references.
Plan file: ~/.claude/plans/hidden-imagining-balloon.md (approved, ready to execute)
```
