LENS 18 — Decision Tree: Cross-Lens Connections

PRIMARY CONNECTIONS

Lens 07 (Landscape Map) — The scatter plot reveals WHY the decision tree has the shape it does. Lens 07 mapped 12 systems across sensor richness vs autonomy level. The empty "edge+rich" quadrant — where Annie sits — exists because two dominant funding structures simultaneously exclude it: academic labs assume controllable pre-exploration (which excludes the single-robot, always-on constraint), and industry assumes generous sensor budgets (which excludes camera+lidar-only hardware). The decision tree's branch structure mirrors this: most branches terminate in "don't use VLM-primary" precisely because most builders are either (a) in a dynamic environment that requires industry-grade sensor suites, or (b) in a lab setting with pre-exploration assumptions.

The fleet branch maps directly to Lens 07's industry tier (Tesla, Waymo, GR00T N1). All three industry systems are characterized by fleet-scale training data. The decision tree correctly routes them off the multi-query hybrid path toward end-to-end VLA training — exactly matching Lens 07's observation that industry systems achieve autonomy through training at scale, not through hybrid integration.

The newly added local-NPU branch is a Lens 07 refinement: the "edge compute density" axis now recognizes TWO compute elements per robot (NPU + edge GPU), not one. Annie's Hailo-8 + Panda pair sits further along the density axis than a Panda-only configuration — and the dual-process pattern leverages that density to escape the VLM-only 10 Hz constraint.

---

Lens 12 (Anti-Pattern Gallery) — The decision tree's thresholds are anti-pattern boundaries disguised as binary questions. The 10 Hz branch is the quantified form of Lens 12's Anti-Pattern 4: "Switch to the 26B Titan model for better nav decisions."
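The 10 Hz branch can be sketched as a routing rule rather than a tuning knob. This is a minimal sketch: the function and model names are illustrative, not Annie's actual code, and the rates are the ones these notes report for Titan and Panda E2B.

```python
# Minimal sketch of the 10 Hz gate: pick the fastest model that can own the
# reactive loop, and fail structurally (not gracefully) if none can.
# Function and model names are illustrative, not Annie's actual code.

def route_reactive_loop(candidates: dict, min_hz: float = 10.0) -> str:
    """candidates maps model name -> sustained inference rate in Hz."""
    viable = {name: hz for name, hz in candidates.items() if hz >= min_hz}
    if not viable:
        # Below the gate the architecture is wrong, not just suboptimal.
        raise RuntimeError("no model clears the 10 Hz gate; restructure the tiers")
    return max(viable, key=viable.get)

# With the rates reported in these notes (~2 Hz Titan, 54 Hz Panda E2B):
print(route_reactive_loop({"titan_26b": 2.0, "panda_e2b": 54.0}))  # panda_e2b
```

The point of raising instead of returning the least-bad model is that the gate is architectural: a sub-10 Hz model should never win by default.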
The anti-pattern gallery describes the failure mode experimentally: session 92 routing nav to Titan at ~2 Hz produced visibly worse driving than Panda E2B at 54 Hz. The decision tree converts that experimental finding into a structural gate: below 10 Hz, the architecture is wrong, not just suboptimal. The newly added local-NPU branch is the STRUCTURAL FIX for Anti-Pattern 4: a Hailo-8 at 430 FPS moves the reactive loop off the shared GPU entirely, making the slow-vs-fast model debate irrelevant for safety.

The fleet branch directly encodes Lens 12's Anti-Pattern 2: "A custom end-to-end neural planner is more elegant." The anti-pattern gallery explains why this seduction fails in single-robot contexts — Tesla trained on millions of miles, RT-2 required millions of demonstrations, Annie has one robot. The decision tree routes anyone with fleet data toward the end-to-end path while routing single-robot builders toward the hybrid integration that Lens 12 validates as the appropriate architecture under that constraint.

The wrong-choice condition for transparent obstacles (glass doors, mirrors) is Lens 12's Anti-Pattern 3 stated as a decision boundary: "The VLM sees the world — why run lidar separately?" The lidar ESTOP chain remains mandatory at every terminal "right choice" node — including the new dual-process branch. The single-query monopoly failure (Lens 12's Anti-Pattern 1) is still implicit: the multi-query pipeline is assumed at the terminal "right choice" node regardless of whether you arrived via the VLM-only or dual-process path.

---

Lens 14 (Research Contradiction / Fiducial Markers) — The newly added fiducial branch is a direct uplift of Lens 14's core finding. Classical CV (cv2.aruco + solvePnP at ~78 µs on the Pi ARM CPU) dominates any VLM substitute for registered markers. Annie's homing path (DICT_6X6_50 id=23) is the canonical example; the decision tree now codifies that ANY fiducial task should exit the tree at level 3, not just homing.
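The fiducial early exit can be phrased as a predicate applied before any VLM question is asked. This is a sketch under one assumption: a "fiducial task" is one whose target is a fixed location where a printed, registered marker can live. The `NavTask` fields are hypothetical, not from Annie's codebase.

```python
# The fiducial early exit as a predicate: any task whose target can carry a
# printed, registered marker leaves the tree for classical CV before the VLM
# branches are ever reached. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class NavTask:
    name: str
    target_is_registered: bool  # can we place/print a marker at the target?
    target_is_static: bool      # does the target stay where the marker is?

def exits_to_classical_cv(task: NavTask) -> bool:
    """True -> solve with cv2.aruco + solvePnP; never reaches the VLM branches."""
    return task.target_is_registered and task.target_is_static

# Annie's homing task (DICT_6X6_50 id=23) exits; an open-world query does not.
print(exits_to_classical_cv(NavTask("dock homing", True, True)))      # True
print(exits_to_classical_cv(NavTask("find the cat", False, False)))   # False
```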
Lens 14 separately identified that the VLM-primary hybrid research document describes the Waymo pattern in Part 1 and then proposes the opposite for Annie. The decision tree resolves this contradiction: Waymo satisfies none of the tree's YES conditions — it operates in highly dynamic environments (branch 4: NO), has full sensor suites, and has fleet-scale training data. The decision tree shows that Waymo's architecture is correct FOR WAYMO, and that Annie's architecture is correct precisely because it operates under exactly the opposite constraint set.

This suggests a future Lens 14 addition: a meta-pattern for "when should a nav task be redesigned around a printed marker?" The decision tree's early fiducial exit means this question becomes the FIRST question a practitioner asks about any new task.

---

Lens 16 (ArUco Homing / Geometric Precision) — Empirical grounding for the fiducial branch. Annie's aruco_detect.py + homing.py (camera-pan-first, SOLVEPNP_ITERATIVE, approximate intrinsics) achieves search + acquire + approach end-to-end with classical CV, no VLM involvement. The session-83 tuning data (right-turn undershoot, achieved_deg prediction) shows that the failure modes of classical CV are GEOMETRIC and CORRECTABLE — unlike VLM failure modes (hallucination), which are STOCHASTIC and UNCORRECTABLE. The decision tree's fiducial branch gives Lens 16's homing work a structural home: not "a nice optimization for one task" but "the default correct architecture for any fiducial-based task."

---

Lens 17 (Hardware Inventory / Accelerator Utilization) — The new local-NPU branch depends on accurate accelerator accounting. Annie's Hailo-8 AI HAT+ (26 TOPS) is the canonical "idle high-leverage hardware" — surfaced by the session 119 hardware audit as the highest-impact activation available (430 FPS YOLOv8n, <10 ms latency, zero WiFi dependency). The dual-process upgrade is a configuration change, not a rewrite.
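The dual-process split described above can be illustrated with a toy two-tier loop. Everything here is a placeholder: the rates stand in for the Hailo-8 detector and the VLM tier, and the loop bodies do nothing real; only the decoupling structure — the fast tier never waits on the slow tier — is the point.

```python
# Toy sketch of the dual-process pattern: a fast reactive loop (standing in for
# the NPU detector) and a slow deliberative loop (standing in for the VLM),
# decoupled through shared state so the slow tier can never stall the fast tier.

import threading
import time

class SharedState:
    def __init__(self):
        self.lock = threading.Lock()
        self.goal = "explore"
        self.fast_ticks = 0
        self.slow_ticks = 0

def fast_loop(state, stop, hz=100.0):
    # Stand-in for the NPU tier: obstacle check + reflex would run each tick.
    while not stop.is_set():
        with state.lock:
            state.fast_ticks += 1
        time.sleep(1.0 / hz)

def slow_loop(state, stop, hz=2.0):
    # Stand-in for the VLM tier: scene query + goal update, safe to be slow.
    while not stop.is_set():
        with state.lock:
            state.slow_ticks += 1
            state.goal = "updated-goal"
        time.sleep(1.0 / hz)

def run(duration=0.5):
    state, stop = SharedState(), threading.Event()
    tiers = [threading.Thread(target=fast_loop, args=(state, stop)),
             threading.Thread(target=slow_loop, args=(state, stop))]
    for t in tiers:
        t.start()
    time.sleep(duration)
    stop.set()
    for t in tiers:
        t.join()
    return state
```

Running `run()` shows the asymmetry directly: the fast tier accumulates far more ticks than the slow tier over the same interval, yet both make progress.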
This suggests Lens 17 should add an "idle accelerator audit" pattern: for each device in the bill of materials, verify that it is exercised by the current code path. If not, either activate it (dual-process) or remove it from the BOM (cost). Annie's Hailo-8 has been on the BOM but off the critical path since day one; the decision tree's level-4 branch makes the cost of that idle state explicit.

---

SECONDARY CONNECTIONS

Lens 04 (Rate/Latency Analysis) — The 10 Hz threshold is the operational codification of Lens 04's finding that VLM rate insensitivity exists above 15 Hz. The newly added local-NPU branch sidesteps this entirely: an NPU at 430 FPS is so far above threshold that rate becomes a non-issue. IROS 2601.21506's 66% latency reduction quantifies the gain.

Lens 08 (Neuroscience Analogy) — The decision tree's minimum viable context maps to "hippocampal replay" mechanisms: a persistent semantic map built incrementally over multiple sessions. Phase 2c (semantic map annotation) is only valuable in the static indoor environment (branch 4: YES) because it assumes labels accumulate meaningfully across sessions.

Lens 10 (Post-Mortem) — Lens 10's "we built the fast path, forgot the slow path" maps to tier separation. The newly added local-NPU branch makes the fast path EVEN FASTER (L1 at 430 FPS, local) and lets the slow path (L2 VLM) go even slower without safety consequence.

Lens 11 (Adversarial Perspectives) — The fleet branch addresses Lens 11's open-source race-to-zero concern. If open-source VLA models trained on crowd-sourced demonstrations become available, the fleet threshold drops to 2-3 robots. If Hailo-style NPUs become standard on hobby robots, the local-NPU branch becomes the default path, not an upgrade.
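Lens 17's proposed idle-accelerator audit, suggested above, reduces to a table walk over the BOM. This is a sketch: the device names and exercised flags are illustrative, echoing the Hailo-8 example, not an actual audit of Annie.

```python
# Sketch of an "idle accelerator audit": walk the bill of materials and flag
# every device that no active code path exercises. Entries are illustrative.

def audit_bom(devices: dict) -> list:
    """devices maps device name -> True if some current code path exercises it.

    Returns the names that are paid for but idle: each one is either a
    dual-process activation candidate or a BOM-removal candidate."""
    return [name for name, exercised in devices.items() if not exercised]

# Pre-dual-process Annie, per the hardware audit described above:
print(audit_bom({"pi5_cpu": True, "lidar": True,
                 "edge_gpu_panda": True, "hailo8_npu": False}))
# -> ['hailo8_npu']
```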
---

CONVERGENCE FINDING

The primary connections — Lens 07, 12, 14, 16, 17 — converge on one finding: the VLM-primary hybrid architecture is correct for a NARROW constraint set, and two new early exits (fiducial target, local NPU) route many cases to better-specialized solutions. The decision tree makes this operational: a practitioner can answer, in six yes/no questions, whether their specific project falls inside or outside that constraint set — and which SPECIFIC alternative architecture applies if they fall outside.

The two new branches represent two distinct "escape hatches":

- Level 2 (fiducial): escape to classical CV (cv2.aruco + solvePnP)
- Level 4 (local NPU): upgrade to dual-process (IROS 2601.21506 pattern)

The decision tree is what Lens 07's landscape map, Lens 12's anti-pattern gallery, Lens 14's contradiction resolution, Lens 16's homing work, and Lens 17's hardware inventory all point toward: a concrete, branch-by-branch disambiguation of when the architecture applies — and when it doesn't.
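The six-question walk can be sketched as straight-line code. The ordering, question names, and return strings below are a reconstruction assembled from the branches these notes name (the fiducial exit, the fleet branch, branch 4's static-indoor condition, the level-4 local-NPU branch, the 10 Hz gate, and the multi-query assumption), not the tree's literal text.

```python
# Reconstruction of the decision tree as code. Branch order and wording are
# illustrative, assembled from the branches named in these notes; the real
# tree's text may differ.

def choose_architecture(fiducial_target: bool,
                        fleet_scale_data: bool,
                        static_indoor: bool,
                        local_npu: bool,
                        reactive_hz: float,
                        multi_query: bool = True) -> str:
    if fiducial_target:                 # early exit: registered marker
        return "classical CV (cv2.aruco + solvePnP)"
    if fleet_scale_data:                # fleet branch (Tesla/Waymo/GR00T N1 tier)
        return "end-to-end VLA training"
    if not static_indoor:               # branch 4: dynamic environment
        return "not VLM-primary: needs industry-grade sensor suite"
    if local_npu:                       # level-4 branch
        return "dual-process: NPU reactive loop + VLM deliberative loop"
    if reactive_hz < 10.0:              # the 10 Hz structural gate
        return "not VLM-primary: below the 10 Hz gate"
    if not multi_query:                 # Anti-Pattern 1 guard
        return "not VLM-primary: single-query monopoly"
    return "VLM-primary multi-query hybrid (lidar ESTOP chain mandatory)"

# Waymo: no fiducial, fleet-scale data -> end-to-end, correct FOR WAYMO.
print(choose_architecture(False, True, False, False, 2.0))
# Annie with the Hailo-8 active -> dual-process.
print(choose_architecture(False, False, True, True, 54.0))
```

Note how the two escape hatches sit before the VLM questions: a fiducial task or fleet-scale data exits the tree without the 10 Hz gate ever being evaluated.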