LENS 09 — TRADEOFF RADAR: Cross-Lens Connections
================================================

PRIMARY CONNECTIONS
-------------------

LENS 04 (Network / Latency analysis)

Direction: Bidirectional reinforcement

Connection: Lens 04 independently identifies a WiFi cliff edge at 100 ms, above which the user-facing VLM update rate is insensitive to inference throughput beyond 15 Hz. Lens 09's Robustness axis score of 35 for the VLM-primary approach is the quantitative expression of exactly this mechanism. The two lenses converge on the same structural finding from different entry points: "58 Hz inference + 2.4 GHz WiFi jitter = effective 15–20 Hz". Lens 04 measures the network side; Lens 09 maps what it costs on the tradeoff radar.

Hailo-L1 update: Lens 09's cyan "Annie + Hailo L1" overlay shows the Robustness axis jumping 35 → 65 when the idle 26-TOPS Hailo-8 is activated as a local safety layer (YOLOv8n @ 430 FPS, <10 ms, zero WiFi). This is the specific structural remedy for the WiFi cliff Lens 04 identified: the safety path is decoupled from the WiFi-dependent semantic path, so a network brownout no longer compounds both failure modes at once.

Implication for Annie: Lens 04's network reliability audit now has a concrete first action — activate Hailo-L1 before the next multi-query VLM optimization pass, since inference throughput above 15 Hz has no user-facing benefit while the safety path still shares the WiFi failure domain with the semantic path.

LENS 13 (Path-of-least-resistance / cheapest effective move)

Direction: Lens 13 → Lens 09 → Lens 13 (closed loop)

Connection: Lens 13 asks "what is the cheapest move that meaningfully improves the system?" Lens 09's cyan overlay answers that question quantitatively on the radar: activating the already-installed Hailo-8 produces the biggest single-axis delta (Robustness 35 → 65, +30 points) available anywhere on the chart without swapping hardware.
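The "58 Hz inference + 2.4 GHz WiFi jitter = effective 15–20 Hz" mechanism can be sketched with a latest-wins delivery model: a frame is useful only if no later-sent frame has already arrived. The exponential jitter distribution and the 100 ms mean below are illustrative assumptions, not measurements from Annie's network.

```python
import random

def effective_rate_hz(inference_hz: float, mean_jitter_s: float,
                      sim_s: float = 30.0, base_delay_s: float = 0.005,
                      seed: int = 0) -> float:
    """Latest-wins consumer: a frame is dropped as stale if any
    later-sent frame has already arrived when it lands."""
    rng = random.Random(seed)
    n = int(sim_s * inference_hz)
    arrivals = []
    for i in range(n):
        sent = i / inference_hz
        jitter = rng.expovariate(1.0 / mean_jitter_s) if mean_jitter_s > 0 else 0.0
        arrivals.append(sent + base_delay_s + jitter)
    # Frame i survives only if it arrives before every later-sent frame.
    useful = 0
    min_later_arrival = float("inf")
    for a in reversed(arrivals):
        if a < min_later_arrival:
            useful += 1
            min_later_arrival = a
    return useful / sim_s

print(effective_rate_hz(58.0, 0.0))    # co-located, no jitter: ~58 Hz
print(effective_rate_hz(58.0, 0.100))  # heavy jitter: collapses far below 58 Hz
```

The point the model makes is structural: above the jitter cliff, raising inference throughput only increases the number of frames that arrive already stale.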
No other move — multi-query dispatch, SigLIP sidecar, buffered commands — produces a comparable axis jump. The hardware is already on the robot, already wired, already idle. HailoRT + TAPPAS + model compilation add a modest complexity tax (Impl. Simplicity 40 → 32), but the Pi 5 reference examples at github.com/hailo-ai/hailo-rpi5-examples shorten the learning curve to days.

Implication for Annie: the path-of-least-resistance recommendation from Lens 13 is now anchored in Lens 09 radar evidence: +30 robustness for 8 points of complexity, a gain-per-cost ratio no other candidate move in the current backlog approaches.

LENS 07 (System landscape / positioning map)

Direction: Lens 07 → Lens 09

Connection: Lens 07 plots a 12-system scatter and claims Annie is targeting the empty "edge-rich" quadrant — high semantic richness with low computational overhead. Lens 09's radar makes explicit what "targeting" actually costs: the amber polygon's 30% Spatial Accuracy and 35% Robustness scores are the price of admission to the edge-rich quadrant. No other system in Lens 07's scatter occupies that position precisely because those axis scores are this low. Annie's occupancy of that quadrant is not free; it is purchased by accepting a spatial accuracy deficit that SLAM-primary systems do not face.

Implication for Annie: the "unoccupied quadrant" may be unoccupied for a reason. Lens 09 provides the radar evidence for WHY — and the cyan overlay shows the Hailo-L1 path as a way to partially reclaim the Robustness deficit without leaving the edge-rich quadrant.

LENS 14 (Research-to-reality gap)

Direction: Lens 09 → Lens 14

Connection: Lens 14 notes that the research community describes the Waymo pattern (lidar-primary with camera supplement) and then does the opposite (VLM-primary). Lens 09 explains the mechanism: papers benchmark Robustness and Spatial Accuracy in controlled lab settings where the VLM inference node is co-located (no WiFi hop).
In that context, VLM-primary looks approximately as robust as SLAM-primary. In Annie's deployed home environment — WiFi, multiple 2.4 GHz devices, Zenoh bridging, llama-server process lifetime — the Robustness gap (35 vs 88) emerges and is not measured in any published system.

Implication for Annie: the research-to-reality gap (Lens 14) is precisely the Robustness axis discrepancy (Lens 09). They are the same problem described at different levels of abstraction. The IROS dual-process paper (arXiv 2601.21506) that validates the Hailo-L1 + VLM split is the narrowest existing bridge across this research-to-reality gap.

LENS 16 (Hardware-as-leverage / idle capacity audit)

Direction: Lens 16 → Lens 09

Connection: Lens 16 asks "what on the robot is installed but not yet used?" The Hailo-8 AI HAT+ is the canonical answer — 26 TOPS of NPU compute sitting idle on the Pi 5 while Annie routes every perception task over WiFi to Panda. Lens 09's cyan overlay monetizes this idle capacity on the tradeoff radar: the Robustness axis moves +30 points with zero additional hardware cost. This is the first time in the radar's change history that a non-tuning, non-software-rewrite move produces a meaningful axis delta.

Implication for Annie: the Lens 16 audit should be re-run after Hailo activation — the next idle-capacity candidate (Pi 5 CPU headroom? Panda's second CUDA stream?) may produce a similar high-leverage move that is currently hidden by the prominence of the Hailo opportunity.

LENS 22 (Cognitive-load / single-developer constraint)

Direction: Bidirectional reinforcement

Connection: Lens 22 treats implementation complexity as a first-class runtime constraint for a single-developer project. Lens 09's Implementation Simplicity axis is the radar expression of this concern. The cyan overlay's trade — dropping from 40 to ~32 for +30 robustness — is the quantitative form of the Lens 22 question: "is the cognitive load of HailoRT + TAPPAS worth the robustness gain?"
The answer is yes, because (a) the complexity delta is 8 points against a robustness delta of 30, and (b) the alternative path to similar robustness (SLAM-primary, Impl. Simplicity 30) costs more complexity for a robustness ceiling (88) only 23 points above the Hailo-L1 projection (65).

Implication for Annie: Lens 22's rule — "budget complexity like VRAM" — now has a concrete first application: the Hailo-L1 activation fits within the complexity budget because the learning curve is measured in days.

SECONDARY CONNECTIONS
---------------------

LENS 01 (Constraint hierarchy)

Lens 01 identifies a 10-layer constraint stack where physics constraints sit above convention constraints. Lens 09's Spatial Accuracy axis is the physical constraint that the VLM approach cannot fully satisfy at the layer below lidar geometry. The ESTOP at 200 mm is the physics constraint that cannot be violated regardless of which approach is used. Both lenses agree: the bottom of the constraint stack is non-negotiable, and it is owned by the lidar + IMU tier, not the VLM tier. The Hailo-L1 overlay adds a second hardware-owned safety tier alongside lidar ESTOP.

LENS 02 (Abstraction leaks)

Lens 02 identifies the Pico RP2040 REPL crash as an "invisible abstraction leak." Lens 09's Robustness score of 35 is a radar expression of the same class of failure: invisible dependencies (WiFi reliability, Zenoh version pinning, llama-server process health) that do not appear in architecture diagrams but dominate real deployment. Both lenses point to the same structural gap: the system model does not include the deployment substrate. Hailo-L1 breaks the worst instance of this leak (WiFi as a safety-critical dependency).

LENS 03 (Dependency / blocker graph)

Lens 03 identifies llama-server embedding blocking as the highest-leverage addressable dependency. Lens 09's VRAM Efficiency axis (45 for VLM-primary) includes the cost of running E2B for embedding extraction when a lightweight SigLIP 2 sidecar (800 MB VRAM) would suffice.
The dependency Lens 03 flags is one of the two tradeoffs Lens 09 identifies as "movable by a different approach" — switching from full E2B to SigLIP improves VRAM Efficiency without sacrificing Semantic Richness.

LENS 10 (Post-mortem analysis)

Lens 10's finding — "we built the fast path, forgot the slow path" — maps directly onto Lens 09's radar structure. The "fast path" is the amber polygon's upper axes (Perception Depth, Semantic Richness, Latency). The "slow path" is the amber polygon's lower axes (Robustness, Spatial Accuracy). The radar makes the asymmetry legible: the fast path scores are 80–90; the slow path scores are 30–35. Same finding, different visual language. The Hailo-L1 overlay begins the repair of the slow path.

SCORE JUSTIFICATION SUMMARY
---------------------------

Annie VLM-Primary axis scores with specific evidence:

- Perception Depth  85: E2B single-pass output: room type + obstacle + goal bearing
- Semantic Richness 90: "LEFT MEDIUM" + "hallway" + "chair" in one pipeline
- Latency           80: 18 ms/frame via llama-server (session 67 measurement)
- VRAM Efficiency   45: 3.5 GB Panda VRAM consumed by E2B alone
- Robustness        35: Session 89 Zenoh version fix = 1 full session; WiFi jitter unmitigated
- Spatial Accuracy  30: Qualitative output "LEFT MEDIUM" — no metric localization
- Impl. Simplicity  40: ask_vlm() is simple; slam_toolbox lifecycle is not

SLAM-Primary axis scores with specific evidence:

- Perception Depth  30: Occupancy grid — binary filled/empty, no object labels
- Semantic Richness 20: Float coordinates (x, y, θ) only
- Latency           55: A* + lifecycle overhead ~50–80 ms for full tactical cycle
- VRAM Efficiency   80: CPU-only on Pi 5 ARM, zero GPU usage
- Robustness        88: All-local, no network, deterministic scan-matching
- Spatial Accuracy  92: RPLIDAR C1 + slam_toolbox: ~10 mm localization accuracy
- Impl. Simplicity  30: rmw_zenoh source build, IMU frame_id, EKF tuning (session 89)

Annie + Hailo L1 (projected) axis scores with specific evidence:

- Perception Depth  85: Same VLM path for semantic description
- Semantic Richness 90: Same VLM path
- Latency           82: Safety-critical obstacle detection now local (<10 ms); VLM unchanged
- VRAM Efficiency   45: Hailo-8 uses NPU, not GPU — Panda VRAM footprint unchanged
- Robustness        65: Safety path (YOLOv8n @ 430 FPS on Hailo) no longer WiFi-dependent; semantic path still on WiFi, so ceiling is below SLAM's 88
- Spatial Accuracy  35: YOLO bounding boxes give a pixel-precise obstacle locus (direction remains qualitative, but the "where" is pixel-precise)
- Impl. Simplicity  32: HailoRT + TAPPAS install + YOLOv8n compilation; Pi 5 reference examples (github.com/hailo-ai/hailo-rpi5-examples) shorten the learning curve to days

KEY SYNTHESIS FINDING
---------------------

The radar makes explicit that the choice between VLM-primary and SLAM-primary is not a choice between two complete solutions. It is a choice between two different incomplete profiles.
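The three score profiles can be encoded directly, which makes the cyan overlay's delta mechanical to compute. A minimal sketch using exactly the axis scores quoted in this lens (variable names are illustrative):

```python
AXES = ["Perception Depth", "Semantic Richness", "Latency",
        "VRAM Efficiency", "Robustness", "Spatial Accuracy", "Impl. Simplicity"]

# Axis scores as stated in the score justification summary.
vlm_primary  = dict(zip(AXES, [85, 90, 80, 45, 35, 30, 40]))
slam_primary = dict(zip(AXES, [30, 20, 55, 80, 88, 92, 30]))
hailo_l1     = dict(zip(AXES, [85, 90, 82, 45, 65, 35, 32]))

# Per-axis delta of the cyan overlay against the current VLM-primary polygon.
delta = {a: hailo_l1[a] - vlm_primary[a] for a in AXES}
biggest = max(delta, key=lambda a: abs(delta[a]))

print(biggest, delta[biggest])    # → Robustness 30 (biggest single-axis move)
print(delta["Impl. Simplicity"])  # → -8 (the complexity tax of the move)
```

Encoding the profiles this way also exposes the near-zero overlap the synthesis relies on: the axes where VLM-primary scores high are exactly the axes where SLAM-primary scores low, and vice versa.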
The correct system engineering question is not "which approach wins" but "which axes can be patched from the other approach at acceptable marginal cost":

- Annie's spatial gap (30) → patch with lidar ESTOP (already done) + SLAM localization (Phase 1)
- Annie's robustness gap (35) → PRIMARY patch: activate Hailo-L1 (35 → 65, biggest axis move on the radar); SECONDARY patch: buffered command queuing + Panda health watchdog
- SLAM's semantic gap (20) → patch with VLM scene labels on occupancy grid (Phase 2c, VLMaps pattern)
- SLAM's perception depth gap (30) → patch with alternating VLM queries (Phase 2a)

The hybrid architecture (4-tier hierarchical fusion, where L1 = Hailo safety, L2 = Gemma 4 E2B goal tracking on Panda, L3 = multi-query scene analysis, L4 = Titan strategic planning) is the correct response to the complementary anti-profile structure of the radar — not because hybridization is always correct, but because these specific profiles have almost zero overlap in their failure modes, making each a strong patch for the other's weaknesses.

CROSS-LENS FLAGS
----------------

The following lenses should re-run or update after Hailo-L1 is activated:

- Lens 04 (re-measure effective VLM rate with the safety path decoupled from WiFi)
- Lens 13 (log this as the first high-leverage idle-capacity activation)
- Lens 16 (re-audit idle hardware; the next candidate surfaces)
- Lens 22 (validate the complexity-budget prediction — did HailoRT really take days?)
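The decoupling that drives the Robustness 35 → 65 move can be sketched as a priority arbiter in which the local Hailo tier (L1) always overrides the WiFi-dependent semantic tiers. This is an illustrative sketch of the arbitration logic, not Annie's actual node code; the names and signature are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    source: str      # which tier produced this command, e.g. "L1-hailo"
    linear: float    # m/s
    angular: float   # rad/s

def arbitrate(l1_obstacle: bool,
              l2_cmd: Optional[Command],
              l2_stale: bool) -> Command:
    """L1 (local Hailo detector, zero-WiFi) wins unconditionally.
    L2 (VLM over WiFi) drives only when fresh. A WiFi brownout
    (l2_stale=True) degrades to a safe hold instead of compounding
    the safety and semantic failure modes."""
    if l1_obstacle:
        return Command("L1-hailo", 0.0, 0.0)   # hard stop, local path
    if l2_cmd is not None and not l2_stale:
        return l2_cmd                          # semantic path, best effort
    return Command("fallback", 0.0, 0.0)       # no fresh guidance: hold

# A brownout no longer couples the two failure modes: the stop still fires
# even when the semantic command is stale.
cmd = arbitrate(l1_obstacle=True,
                l2_cmd=Command("L2-vlm", 0.3, 0.1),
                l2_stale=True)
print(cmd.source)  # → L1-hailo
```

The design point is that the safety decision never touches the WiFi failure domain: `l1_obstacle` is produced locally, so the worst case under network loss is a stationary robot rather than an unprotected one.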