LENS 03: DEPENDENCY TELESCOPE
"What's upstream and downstream?"

The dependency telescope reveals a system that is far more fragile at its upstream joints than its engineering confidence suggests. The four-tier hierarchical fusion architecture reads as robust modularity, but each tier is tethered to an upstream it does not control.

The most consequential upstream is not the obvious WiFi dependency. It is llama-server's inability to expose intermediate multimodal embeddings. This single API gap in an open-source inference server blocks Phase 2d entirely (embedding extraction and place memory) and forces the deployment of a separate SigLIP 2 model that consumes 800 megabytes of Panda's already-constrained 8 gigabytes of VRAM. A limitation in one upstream layer manufactured a hardware budget problem in another.

The WiFi dependency is the system's hidden single point of failure: not because it is unknown, but because it has no engineering mitigation. Every other dependency has a documented workaround or fallback. But if household WiFi degrades, the Pi-to-Panda camera link drops from 54 hertz to below 10 hertz, and the system runs degraded silently. Lens 04 identified this as the WiFi cliff edge at 100 milliseconds. What the Dependency Telescope adds is the cascade: degraded VLM throughput degrades scene classification, which degrades semantic map annotation quality, which degrades Phase 2c room-labeling accuracy. A single uncontrolled radio-frequency environment poisons three downstream phases.

The Session 119 hardware audit surfaced a mitigation hiding in plain sight. The Pi 5's Hailo-8 AI HAT Plus is already on the robot and idle. Running YOLOv8-nano at 430 frames per second locally, with zero WiFi traffic, it can serve as an L1 safety layer underneath the VLM. The cascade rewrites: WiFi degrades, semantic features degrade, safety stays local.
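The rewritten cascade can be sketched as a perception step in which only the local detector feeds the safety decision, so WiFi loss degrades semantics but never safety. This is a minimal sketch under stated assumptions: `detect` and `scene_label` are injected callables standing in for the real Hailo-8 and llama-server calls, and the 0.5-meter stop distance is an illustrative placeholder, not a value from the source.

```python
from typing import Callable, Optional


def perception_step(
    frame,
    detect: Callable[[object], list[float]],         # L1: local Hailo-8 detector,
                                                     # returns obstacle distances (m)
    scene_label: Callable[[object], Optional[str]],  # L2: WiFi-attached VLM query
    stop_m: float = 0.5,                             # illustrative stop distance
) -> tuple[bool, Optional[str]]:
    """Safety is decided only from the local detector; the VLM result is
    best-effort semantics that may vanish when WiFi degrades."""
    must_stop = any(d < stop_m for d in detect(frame))  # never touches WiFi
    try:
        scene = scene_label(frame)
    except TimeoutError:
        scene = None  # WiFi degraded: semantics lost, safety unaffected
    return must_stop, scene
```

The point of the shape is the asymmetry: a `TimeoutError` from the VLM path is absorbed, while the local detection path has no network failure mode to absorb.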
The dependency on WiFi doesn't disappear; it gets demoted from safety-critical to semantic-only, which is exactly where an uncontrolled RF medium belongs.

The Phase 1 SLAM prerequisite chain is the upstream that gates the most downstream value. Phases 2c, 2d, and 2e are all in a single-file queue behind one deployment. If Phase 1 SLAM suffers a persistent failure (Zenoh crash, lidar dropout, IMU brownout), the downstream timeline does not slip by one phase; it slips by three simultaneously.

The downstream surprises are equally instructive. The semantic map, framed as a navigation primitive, becomes a qualitatively different capability when the voice agent consumes it: spatial memory answerable by voice. Annie can tell you where the charger is, when she last visited the kitchen, or whether the living room is currently occupied, without any additional training. Neither this downstream consumer nor the Context Engine's spatial fact integration is mentioned in the research roadmap. The most valuable accidental enablement is the one most likely to create an integration mismatch when it arrives.

KEY FINDINGS:

Highest-leverage blocker: Patching llama-server, or switching inference servers, to expose multimodal embeddings directly would unblock Phase 2d without any hardware change and reclaim 800 megabytes of Panda VRAM. Cost: one to two engineering sessions.

Hidden single point of failure: Household WiFi has no programmatic fallback. A watchdog that detects round-trip latency above 80 milliseconds and steps down the VLM query rate, with an alert to Annie, would convert a silent failure into a managed degradation.

Most likely to change in two years: The Gemma 4 E2B model. Google's release cadence makes a successor highly probable before Phase 2e is deployed. The architecture is correctly abstracted (the ask-VLM function is model-agnostic), but GGUF conversion and llama.cpp compatibility will need re-validation for each generation.
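The latency watchdog from the findings above could be as small as a single stateful class. The 80-millisecond threshold comes from the finding itself; the query-rate tiers, the one-tier-at-a-time recovery policy, and the alert callback (which would plug into the voice agent) are illustrative assumptions, not documented design.

```python
class WifiWatchdog:
    """Sketch: converts silent WiFi degradation into a managed VLM
    query-rate step-down plus an alert. Thresholds and tiers are
    assumptions except the 80 ms RTT limit, which is from the finding."""

    RATE_HZ = [10.0, 4.0, 1.0]  # assumed query-rate tiers, fastest first
    RTT_LIMIT_S = 0.080         # 80 ms round-trip threshold

    def __init__(self, alert):
        self.alert = alert      # hypothetical hook, e.g. into the voice agent
        self.tier = 0           # index into RATE_HZ

    def observe_rtt(self, rtt_s: float) -> float:
        """Feed one measured round-trip time; returns the query rate to use."""
        if rtt_s > self.RTT_LIMIT_S and self.tier < len(self.RATE_HZ) - 1:
            self.tier += 1      # degrade loudly, one tier per bad sample
            self.alert(f"WiFi degraded (rtt={rtt_s * 1000:.0f} ms); "
                       f"VLM rate stepped down to {self.RATE_HZ[self.tier]} Hz")
        elif rtt_s <= self.RTT_LIMIT_S and self.tier > 0:
            self.tier -= 1      # recover gradually, without an alert
        return self.RATE_HZ[self.tier]
```

The asymmetry (alert on every step-down, silent gradual recovery) is one reasonable choice; a production version would likely also smooth the RTT samples before acting on them.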
Accidental downstream: Voice-queryable spatial memory. This capability is unplanned and unscoped, and it will arrive before anyone has designed a consent model for "Annie, who was in my bedroom yesterday?"

Downstream dependency demotion (mitigation available): The Pi 5's Hailo-8 AI HAT Plus is on-hand hardware, currently idle, capable of YOLOv8-nano at 430 frames per second with zero WiFi traffic. Activating it as an L1 safety layer converts WiFi from a safety-critical dependency into a semantic-only dependency: the highest-leverage dependency restructuring available without a new hardware purchase.
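The model-agnostic ask-VLM boundary noted in the key findings is what makes the Gemma-generation churn survivable: callers never name a model. A sketch of that boundary, assuming llama-server's OpenAI-compatible /v1/chat/completions endpoint (the host, port, prompt, and exact payload shape are illustrative and version-dependent):

```python
import base64
import json
import urllib.request


def build_vlm_request(prompt: str, image_bytes: bytes) -> dict:
    """Model-agnostic payload: no model name appears here, so swapping
    Gemma generations (or inference servers) happens behind the endpoint."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }


def ask_vlm(prompt: str, image_bytes: bytes,
            endpoint: str = "http://panda.local:8080/v1/chat/completions") -> str:
    # The endpoint is a placeholder; only this function changes if the
    # server moves or is replaced.
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_vlm_request(prompt, image_bytes)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Under this shape, the re-validation burden the findings describe (GGUF conversion, llama.cpp compatibility) stays entirely on the server side of the boundary.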