LENS 22 — LEARNING STAIRCASE: CROSS-LENS CONNECTIONS

=== CONNECTION TO LENS 03 (Dependency Graph / Bottleneck) ===

Lens 03 identified the llama-server embedding blocker as the highest-leverage addressable dependency in the entire system. Lens 22 sees the same structural pattern at the learning level: the SLAM infrastructure stack is the embedding blocker of the learning journey. Just as llama-server's inability to cleanly expose intermediate multimodal embeddings gates Phase 2d (place recognition, loop closure, topological mapping), the infrastructure plateau gates Levels 5 and 6 of the staircase entirely. Both are single-point dependencies with outsized consequence: one blocks a capability, the other blocks a class of learner from ever reaching that capability.

The mitigation pattern is identical in both cases — isolate the dependency, provide a clean interface over it, and let the dependent layer ignore what's underneath. For llama-server, this means a separate SigLIP 2 ViT-SO400M process. For the learning plateau, it means a pre-debugged Docker Compose that a learner can treat as a black box: sensor data goes in, (x, y, heading) comes out.

Quantitative overlap: Lens 03 found the embedding blocker affects Phases 2d and 2e (P(success) = 55% and 50%). Lens 22 finds the infrastructure plateau affects the same phases — not coincidentally, because the embedding capability (SigLIP 2 on Panda, 800 MB competing with a 1.8 GB VLM for a 4 GB GPU) is itself an infrastructure problem, not an ML problem.

=== CONNECTION TO LENS 05 (System Reliability / Runtime Floor) ===

Lens 05 established WiFi as the runtime reliability floor — the 100 ms cliff edge where VLM inference RTT causes the NavController to drop commands, and below which the entire fast path (58 Hz, 18 ms) becomes irrelevant. Lens 22's learning staircase has a structural parallel: the SLAM infrastructure plateau is the reliability floor of the learning journey.
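The 100 ms cliff can be made concrete with a small sketch. The names here (CommandGate, offer, current) are hypothetical, not the actual NavController API; the point is the shape of the logic: VLM commands arriving over the network carry a timestamp, the fast local loop serves only commands fresher than the cliff, and anything older is treated as if the network were at 0 Hz.

```python
import time

# The 100 ms cliff from Lens 05: a VLM command older than this is dropped.
STALENESS_CLIFF_S = 0.100

class CommandGate:
    """Hypothetical staleness gate between the slow (network/VLM) path
    and the fast local loop. Not the real NavController interface."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._cmd = None  # (timestamp, command) or None

    def offer(self, command):
        """Called by the slow path whenever a VLM result arrives over WiFi."""
        self._cmd = (self._clock(), command)

    def current(self):
        """Called by the fast local loop (e.g. at 58 Hz). Returns the VLM
        command only if it is fresher than the cliff; otherwise None,
        meaning the local safety layer is on its own."""
        if self._cmd is None:
            return None
        ts, cmd = self._cmd
        if self._clock() - ts > STALENESS_CLIFF_S:
            return None  # network effectively at 0 Hz; degrade gracefully
        return cmd
```

A fake clock makes the cliff behavior easy to exercise: a command offered at t=0 is served at t=0.05 but refused at t=0.25, exactly the drop behavior Lens 05 describes.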
Just as a system running on unreliable WiFi cannot deliver the 58 Hz architecture regardless of how well the VLM is designed, a learner stuck at Level 4 cannot progress to semantic mapping regardless of how well they understand the VLM architecture.

The parallel goes further: Lens 05 found the research spent zero words on "what does the system do at 0 Hz — when the network is gone?" Lens 22 finds the research spent zero words on "what does the learner do at Level 4 — when the SLAM stack won't start?" The research describes the happy path with precision (90% probability, 1 session). It does not describe the failure path or the recovery path. Both lenses converge on the same documentation gap: graceful degradation — of the runtime system and of the learning journey — was never designed.

The Hailo-8 rung (Rung 4B) is a partial answer to Lens 05 as well: because the Hailo-8 pipeline runs entirely on the Pi 5 with no WiFi in the loop, it gives Annie an obstacle-avoidance layer that keeps working when the network drops to 0 Hz. The "what does the system do when the network is gone?" question now has an answer: the L1 safety layer keeps running at 30+ Hz locally. Rung 4B is the reliability-floor mitigation that Rung 4A alone does not provide.

=== CONNECTION TO LENS 15 (Hidden Bottlenecks / Dormant Resources) ===

Lens 15 examines bottlenecks that are invisible because they are not on the dependency graph — resources that are neither blocking nor contributing, just sitting unused. Rung 4B is Lens 22's direct expression of Lens 15's framing applied to learning: the Hailo-8 AI HAT+ is a 26 TOPS accelerator that has been installed on the robot for the entire VLM build-out and has contributed zero compute cycles to navigation. It is not a bottleneck in the traditional sense — it is not holding anything back — but it is not pulling its weight either, and the absence of a rung that says "turn it on" is itself a bottleneck in the learning roadmap.
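Lens 15's "find the idle transistors" audit is mechanical enough to sketch. The inventory schema below (name, owned, assigned_to fields) is an assumption for illustration, not the actual registry format; the component names come from this document. A dormant resource is simply one that is owned but appears in no workload assignment.

```python
# Sketch of a dormant-resource audit in the spirit of Lens 15.
# Schema is hypothetical; the rule is: owned + unassigned = dormant.

def find_dormant(inventory):
    """Return the names of owned components with no assigned workload."""
    return [item["name"] for item in inventory
            if item.get("owned") and not item.get("assigned_to")]

# Example inventory drawn from the components this document names.
inventory = [
    {"name": "Hailo-8 AI HAT+",      "owned": True, "assigned_to": None},
    {"name": "Titan (DGX Spark)",    "owned": True, "assigned_to": "phone + WebRTC"},
    {"name": "The Beast (DGX Spark)","owned": True, "assigned_to": None},
    {"name": "Orin NX 16GB",         "owned": True, "assigned_to": None},
]
```

Run against this example inventory, the audit flags the Hailo-8, the Beast, and the Orin NX — the same three dormant components Lens 15 identifies — while Titan, which is serving workloads, is not flagged.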
Lens 15's pattern ("find the idle transistors") maps directly onto Lens 22's new rung ("the staircase has invisible rungs"). The Beast (a second DGX Spark, sitting idle while Titan serves both the phone and WebRTC) and the Orin NX 16 GB (owned, earmarked for a future robot, unpowered) are further Lens 15 examples that would each justify their own learning-staircase rungs in a production-deployment lens.

=== CONNECTION TO LENS 16 (Resource Inventory / Audit Cadence) ===

Lens 16 argues that a periodic hardware + software inventory is worth more than a periodic feature-roadmap review, because the inventory surfaces capacity that the roadmap silently assumes away. The invisible-rung principle is Lens 16 applied to the learning staircase: audit your hardware inventory every time you feel plateaued. Practitioners often stall at Level 4 because they never look at the physical device — the Hailo-8 is visible as a board attached to the Pi, but invisible in the software roadmap, because no line in RESOURCE-REGISTRY.md currently says "Hailo-8 assigned to: idle." Lens 16 recommends an inventory row per dormant component with a "planned activation sprint" field; Lens 22 recommends that each such row become a candidate rung on the staircase. The two lenses are complementary: Lens 16 catalogues the silicon, Lens 22 sequences the skills required to activate it.

=== CONNECTION TO LENS 18 (Reversibility / Rollback Safety) ===

Lens 18 examined which architectural decisions are reversible and which are load-bearing commitments. Lens 22 applies the same question to the learning path: which levels of the staircase are reversible, and which create irreversible technical commitments? Levels 1–3 are fully reversible: you can run the VLM pipeline on a laptop, decide it doesn't suit your use case, and walk away with no infrastructure debt. Level 4 (the plateau) is where irreversibility begins.
Rung 4A creates significant irreversibility: a specific Linux distribution locked to ROS2 Jazzy's apt sources, a Zenoh source build pinned to a specific commit, sensor calibration tuned to a physical hardware configuration. Rung 4B, by contrast, is largely reversible — HailoRT and TAPPAS are self-contained, the .hef model file is portable, and the pipeline can be disabled by not starting the service. The reversibility delta between the two rungs is itself a reason to prefer Rung 4B as the first Level 4 step: it is cheaper to enter and cheaper to exit.

=== CONNECTION TO LENS 24 (Composability of Owned Parts) ===

Lens 24 examines whether the parts you already own can be composed into a configuration that exceeds what any single part delivers. Rung 4B is a Lens 24 realization: the Pi 5, the Hailo-8 AI HAT+, and the USB camera are three owned components that, composed via the HailoRT runtime, deliver 430 FPS YOLOv8n locally — a capability none of them delivers alone and that was never planned as an explicit capability of the system. Lens 24's broader claim is that composability lives in the seams between components, and the seams are exactly what a new rung teaches. The dual-process architecture at Level 5 (Hailo L1 + VLM L2) is a second-order Lens 24 composition: the Hailo-8 pipeline and the llama-server pipeline were designed in isolation, but composed they deliver the IROS-validated dual-process architecture (66% latency reduction, 67.5% task success vs 5.83% VLM-only).

=== CONNECTION TO LENS 25 (Procurement-vs-Activation Framing) ===

Lens 25 distinguishes two classes of capability gap: procurement gaps (need to buy something) and activation gaps (own something, haven't turned it on). The framing is load-bearing because procurement gaps have a vendor lead time and a purchase approval cycle, while activation gaps have neither — they are entirely engineering-bound.
Rung 4B is the canonical activation-gap rung in this codebase: the Hailo-8 is owned, installed, and powered on. The only thing separating "idle" from "430 FPS YOLOv8n" is engineering time (~1–2 sessions per the research doc). Lens 25's practical recommendation — run the activation-gap audit before the procurement review — is exactly what the invisible-rung principle asks a practitioner to do. The corollary from Lens 25: every new piece of hardware added to the inventory should be accompanied by a Level 4 rung definition before it is installed, so that the activation path is planned rather than discovered months later.

=== SYNTHESIS: THE TWO-STAIRCASE ARCHITECTURE + THE INVISIBLE-RUNG PRINCIPLE ===

All cross-lens connections point to the same structural insight: this system — and the learning path through it — has a two-staircase architecture. The first staircase (Levels 1–3) is an ML skills domain with iteration cycles measured in seconds. The second staircase (Levels 5–6) is an infrastructure + ML integration domain with iteration cycles measured in hours to days. The cliff between them (Level 4) is the highest-risk, highest-cost, highest-stakes point in the entire learning journey.

The new contribution from this update: Level 4 has at least two rungs, and one of them (Rung 4B) is effectively invisible in published roadmaps, because roadmaps describe models and algorithms, not dormant silicon. The invisible-rung principle generalizes: whenever a practitioner feels plateaued, the next move may not be "learn harder ML" or "buy more compute" — it may be "activate the hardware already in your hand." Lenses 15, 16, 24, and 25 all support this framing from different angles (hidden bottlenecks, inventory audit, composability, procurement-vs-activation). Lenses 03 and 05 identify the consequences (dependency density at Level 4, reliability-floor mitigation via a local-only L1).

The actionable recommendation: build a Level 4 shortcut that includes BOTH rungs.
That shortcut pairs the SLAM Docker Compose + sensor health script (Rung 4A infrastructure) with a HailoRT/TAPPAS template repo cloned from hailo-rpi5-examples, carrying a pre-compiled YOLOv8n .hef and a ROS2-free Python wrapper that publishes obstacle boxes (Rung 4B activation). A skilled ML practitioner could traverse both rungs in under a week with this scaffolding in place. Neither artifact exists yet in the codebase; writing them is the highest-leverage learning investment available.
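To pin down what "ROS2-free Python wrapper that publishes obstacle boxes" means structurally, here is a hedged sketch. The actual HailoRT/TAPPAS calls are deliberately not shown: `detect` is an injected callable standing in for the compiled YOLOv8n .hef pipeline, `publish` stands in for whatever transport the learner picks instead of ROS2 (UDP, ZMQ, a file), and all field names are illustrative assumptions.

```python
# Hedged sketch of the Rung 4B wrapper's shape. HailoRT specifics are
# abstracted behind `detect`; transport is abstracted behind `publish`.

def obstacle_boxes(detections, min_conf=0.5):
    """Convert raw (label, conf, x1, y1, x2, y2) detection tuples into
    obstacle-box dicts, keeping only confident detections."""
    return [
        {"label": label, "conf": conf, "box": (x1, y1, x2, y2)}
        for (label, conf, x1, y1, x2, y2) in detections
        if conf >= min_conf
    ]

def run_loop(frames, detect, publish, min_conf=0.5):
    """Per-frame loop: run accelerator inference, publish obstacle boxes.
    `frames` is any iterable of camera frames; `detect` wraps the .hef
    pipeline; `publish` is the ROS2-free output side."""
    for frame in frames:
        publish(obstacle_boxes(detect(frame), min_conf))
```

The value of this shape for a learner is testability: the whole wrapper can be exercised with a fake detector before the Hailo-8 is ever in the loop, which keeps the Rung 4B entry cost low and its exit (deleting the service) trivial, consistent with the reversibility argument above.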