LENS 25: BLIND SPOT SCAN

"What's invisible because of where you're standing?"

Session 119 validated this lens in the most literal way possible. The single highest-impact architectural finding of the session was a blind spot that became visible only because a targeted hardware-audit pass forced a full inventory of powered devices in the house. The Hailo-8 AI HAT+, a 26 TOPS neural processing unit capable of running YOLOv8n at 430 frames per second with under 10 milliseconds of latency and zero WiFi dependency, had been installed on the Pi 5 months ago, two inches from the camera ribbon cable. And it was idle for navigation. Every latency budget, every WiFi cliff-edge diagnosis, was drawn on a canvas that did not include it. That is the exact structure this lens predicts: a blind spot is not ignorance, it is position.

Eight blind spots are now identified in this research, and they share a common cause: the research was written by an engineer, in English, in a WiFi-saturated daytime environment, using a camera-primary paradigm inherited from the Western robotics literature, with an architecture-of-record that listed drawn components but not owned ones. Each assumption is so embedded in the researcher's position that it was never articulated as an assumption at all.

BLIND SPOT ONE: THE VLM SPEAKS ENGLISH. The entire semantic navigation layer (room labels, goal phrases, obstacle tokens) is in English. The household it will operate in speaks Hindi. The scene classifier asks "What room is this?" and expects answers like "kitchen," "bedroom," or "bathroom." But the house also contains a pooja room, a space with no Western equivalent, no entry in the VLM's training distribution, and no bucket in the scene classifier's vocabulary. When Mom says "pooja ghar mein jao" ("go to the pooja room"), the request flows through an English-primary STT pipeline, arrives at a semantic layer that has no such category, and silently fails. The SLAM map will never correctly annotate that room. Navigation to it is permanently impossible. This is not a missing feature; it is a missing category. The research never identifies this because the engineer never navigates using Hindi.
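One concrete repair is a bilingual goal vocabulary sitting in front of the semantic layer, so that Hindi goal phrases and VLM-unknown categories resolve to canonical room labels instead of failing silently. The sketch below is illustrative only, not part of the existing stack: ROOM_VOCAB, resolve_goal, and the alias lists are hypothetical names invented here.

```python
# Hypothetical sketch: a bilingual room vocabulary in front of the semantic
# layer. Canonical labels carry Hindi aliases, plus a flag for categories
# (like the pooja room) that the English-trained VLM cannot name at all.

ROOM_VOCAB = {
    "kitchen":    {"aliases": ["rasoi", "rasoi ghar"],     "vlm_native": True},
    "bedroom":    {"aliases": ["sone ka kamra", "kamra"],  "vlm_native": True},
    "bathroom":   {"aliases": ["gusal khana", "bathroom"], "vlm_native": True},
    "pooja_room": {"aliases": ["pooja ghar", "mandir"],    "vlm_native": False},
}

def resolve_goal(utterance: str) -> str | None:
    """Map a (possibly Hindi) goal phrase onto a canonical room label."""
    text = utterance.lower()
    for label, entry in ROOM_VOCAB.items():
        if label.replace("_", " ") in text:
            return label
        if any(alias in text for alias in entry["aliases"]):
            return label
    return None  # unknown category: fail loudly, never silently

assert resolve_goal("pooja ghar mein jao") == "pooja_room"
```

The vlm_native flag is the structural point: a category the VLM cannot name has to be grounded some other way, for example by a one-time teaching interaction that pins "pooja ghar" to a region of the SLAM map.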
BLIND SPOT TWO: WESTERN FLOOR PLANS. Every system the research references (Waymo, Tesla, VLMaps, OK-Robot, AnyLoc) was developed in wide-corridor, Western-layout environments. Indian homes are structurally different: narrow 60-to-70 centimeter passages between furniture, floor-level seating such as the gadda (floor mattress) and the charpai (woven cot), rangoli patterns on floors that confuse texture segmentation, shoes piled at every threshold, and a pooja room that is a fundamental spatial anchor in tens of millions of households. The robot's sonar and lidar profiles were tuned for the hallways in the papers, not the hallways in this house. The VLM's visual training distribution almost certainly has no examples of these spatial features. The mismatch is invisible from the engineer's desk.

BLIND SPOT THREE: MOM IS NOT IN THE EVALUATION FRAMEWORK. Mom appears in the research only as a delivery destination: "bring tea to Mom" as a goal phrase. She is a waypoint, not a person. The evaluation metrics in Part 7 are ATE, VLM obstacle accuracy, scene consistency, place recognition precision and recall, and navigation success rate. All are defined from the engineer's perspective. None of them ask: Was Mom comfortable? Did she know the robot was coming? Was she able to stop it? Did she understand why it behaved as it did? A system that scores perfectly on all five metrics could still be unusable, or alarming, to its actual primary user. This is the deepest human blind spot. The engineer's frame has no instrumentation for it.

BLIND SPOT FOUR: WIFI AS GIVEN INFRASTRUCTURE. The four-tier architecture routes every VLM inference call from the robot's Raspberry Pi to the Panda server at 192.168.68.57 over WiFi, a channel that Lens 04 already identified as the single cliff-edge parameter: below 100 milliseconds of round-trip latency the system is stable; above it, the system collapses. But Indian households face regular load-shedding, scheduled power cuts that take down not just the WiFi access point but the Panda inference server itself. The robot becomes a brick at exactly the moments when an intelligent home assistant would be most valuable. The research has no offline degradation path, no cached last-known map, no simple sonar-only avoidance mode for when the network is down. This is invisible because the engineer tests when the power is on. The Hailo-8 discovery reshapes the remedy: with an L1 NPU running local obstacle detection, loss of WiFi becomes graceful degradation to "safe local wander," not a brick.

BLIND SPOT FIVE: LIGHTING CONDITIONS. All session logs, SLAM maps, and VLM evaluations occurred under normal daytime ambient light. Indian households face tube-light flicker on 50 hertz mains (the light itself pulses at 100 hertz, twice the mains frequency), which produces banding artifacts in camera frames. They face transition states, such as one room lit by a single incandescent bulb while adjacent rooms are completely dark, that do not appear in any cited VLM evaluation benchmark. Room classification accuracy at 11 p.m. under load-shedding lighting is completely unknown. The VLM scene classifier has never been evaluated under these conditions because testing happens on the engineer's daytime schedule.

BLIND SPOT SIX: CAMERA AS THE ONLY EYE. The research inherited camera-first from the research corpus. Waymo uses cameras. Tesla uses cameras. VLMaps uses cameras. Therefore Annie uses a camera. But an outside observer, say someone designing assistive technology for people with visual impairments, would immediately ask: what other signals does this environment produce? The kitchen emits exhaust fan noise, heat, and the sound of cooking. The bathroom emits humidity and reverb. The living room emits television audio. A robot that listens for two seconds before navigating could classify rooms with high reliability using two dollars of microphone hardware, no GPU inference, and no WiFi connection (a sketch of such a classifier follows blind spot seven). The camera solves a hard problem when easier signals are available. The choice was never made; it was inherited.

BLIND SPOT SEVEN: WE OWN A 26 TOPS NPU WE AREN'T USING. The Hailo-8 described in the opening (26 TOPS, two inches from the camera ribbon cable, YOLOv8n at 430 frames per second with under 10 milliseconds of latency and zero WiFi dependency) is the canonical "missed what we owned" blind spot. The research spent dozens of sessions routing every obstacle-detection frame over WiFi to Panda's RTX 5070 Ti, at 18 to 40 milliseconds plus the jitter cliff identified in Lens 04, while the 26 TOPS NPU on the same board as the camera stayed idle. The architecture diagrams never listed the Hailo in the inventory, so it was never in the design space. The IROS dual-process paper, arXiv 2601.21506, describes exactly the L1-reactive plus L2-semantic split that Hailo-on-Pi plus VLM-on-Panda would make free: a 66 percent latency reduction, and 67.5 percent task success versus 5.83 percent for VLM-only control.
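Blind spots four and seven share one remedy, and it can be stated as a control loop: the L1 detector on the Hailo runs unconditionally, the L2 semantic layer is consumed only when fresh, and losing WiFi degrades to a safe local behavior instead of bricking the robot. The sketch below is a minimal illustration under stated assumptions; hailo_detect, vlm_advice, drive, and the freshness window are all hypothetical placeholders, not the HailoRT API or the existing nav stack.

```python
import time

# Minimal sketch of the L1-reactive / L2-semantic split. All three callables
# are hypothetical: hailo_detect() stands in for local YOLOv8n inference on
# the Hailo-8 (sub-10 ms, no network), vlm_advice() for the WiFi round trip
# to the Panda server, drive() for the motion layer.

L2_STALE_AFTER_S = 2.0  # assumed freshness window for semantic guidance

def control_loop(hailo_detect, vlm_advice, drive):
    last_advice, last_advice_ts = None, 0.0
    while True:
        obstacles = hailo_detect()  # L1: always runs, always local
        try:
            # L2: best-effort; the timeout models the 100 ms latency cliff
            last_advice = vlm_advice(timeout=0.1)
            last_advice_ts = time.monotonic()
        except TimeoutError:
            pass  # WiFi jitter, load-shedding, or Panda offline
        fresh = (time.monotonic() - last_advice_ts) < L2_STALE_AFTER_S
        if fresh and last_advice is not None:
            drive(goal=last_advice, avoid=obstacles)  # semantic navigation
        else:
            drive(goal="safe_local_wander", avoid=obstacles)  # degraded mode
```

The important property is that the else branch needs nothing beyond the Pi and the HAT: the robot keeps avoiding obstacles through a power cut and simply stops pursuing semantic goals.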
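Blind spot six's acoustic alternative is cheap enough to prototype in an afternoon. The sketch below (promised under blind spot six) is illustrative only: the features, thresholds, and room signatures are invented here and would have to be calibrated against recordings from this particular house.

```python
import numpy as np

# Illustrative acoustic room classifier: two seconds of mono audio in, crude
# spectral features out, matched against per-room signatures. Every feature
# and threshold here is an invented example, not a tuned model.

SAMPLE_RATE = 16_000  # assumed microphone sample rate

def features(audio: np.ndarray) -> dict:
    """audio: ~2 s of mono float samples at SAMPLE_RATE."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum() + 1e-9
    return {
        "low_hum": spectrum[freqs < 200].sum() / total,  # exhaust fan, fridge
        "speechy": spectrum[(freqs > 300) & (freqs < 3_000)].sum() / total,  # TV
        "rms": float(np.sqrt(np.mean(audio ** 2))),      # overall loudness
    }

def classify_room(audio: np.ndarray) -> str:
    f = features(audio)
    if f["low_hum"] > 0.5:
        return "kitchen"      # dominated by appliance hum
    if f["speechy"] > 0.5 and f["rms"] > 0.01:
        return "living_room"  # television audio
    return "unknown"          # defer to the other sensors
```

None of this touches the GPU, the network, or the lights, which is exactly why key finding five calls camera-first inherited rather than chosen.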
BLIND SPOT EIGHT: THE AUDIT PATTERN NEVER ASKED "WHAT DO WE OWN?" Across 26 lenses of self-critique, not one asked what hardware the user already owns that does not appear in the architecture diagrams. Asked once, that question surfaces the Hailo-8 NPU at 26 TOPS, the Beast (a second DGX Spark with 128 GB of unified memory, always on and idle), and the Orin NX 16GB at 100 TOPS. Three pieces of compute capable of transforming the nav stack were invisible because the review started from the drawn system, not the owned system. This is a meta blind spot: the research checklist reviewed everything on the diagram and nothing off it. The fix is one line added to every future audit: list every powered device in the house; explain why each is or isn't in the diagram.

KEY FINDING ONE: Session 119 is the canonical Blind Spot Scan success story. The Hailo-8 (26 TOPS, on the Pi, idle) was the highest-impact discovery of the session. Once listed, it becomes the obvious L1 safety layer, turning Lens 04's WiFi cliff from a "brick" failure into graceful degradation to "safe local wander."

KEY FINDING TWO: Language is structural, not cosmetic. "Pooja ghar" is not a translation problem. It is a category that does not exist in the VLM's world model, and the semantic navigation layer will silently fail for an entire class of destination that this household uses every day.

KEY FINDING THREE: Mom is a stakeholder who does not appear in the evaluation framework. A system can score well on all five Part 7 metrics while remaining unusable by its actual primary user.

KEY FINDING FOUR: Audit the owned system, not just the drawn system. A one-line addition to every architecture review ("list every powered device in the house; explain why each is or isn't in the diagram") would have surfaced the Hailo, the always-on idle Beast, and the dormant Orin NX three months earlier than twenty-six lenses of critique did. A sketch of that audit line as a checkable artifact closes this lens.

KEY FINDING FIVE: Camera-first is inherited, not chosen. An acoustic room classifier costs two dollars of hardware, requires no GPU, and works in the dark during a power cut.
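Key finding four's audit line can be made mechanical. The sketch below is hypothetical: INVENTORY mirrors the devices named in this lens, and audit() simply returns every powered device that is off the diagram with no stated reason.

```python
# Hypothetical sketch of the audit line from key finding four: list every
# powered device, record whether it appears in the architecture diagram,
# and force an explanation for every omission. Entries mirror the devices
# named in this lens; the structure itself is illustrative, not an existing
# artifact of the research.

INVENTORY = [
    {"device": "Raspberry Pi (robot) + camera", "in_diagram": True,  "why_absent": None},
    {"device": "Hailo-8 AI HAT+ (26 TOPS)",     "in_diagram": False, "why_absent": None},
    {"device": "Panda server (RTX 5070 Ti)",    "in_diagram": True,  "why_absent": None},
    {"device": "Beast (DGX Spark, 128 GB)",     "in_diagram": False, "why_absent": None},
    {"device": "Orin NX 16GB (100 TOPS)",       "in_diagram": False, "why_absent": None},
]

def audit(inventory: list[dict]) -> list[str]:
    """Every powered device that is off the diagram with no stated reason."""
    return [d["device"] for d in inventory
            if not d["in_diagram"] and d["why_absent"] is None]

# On the session-119 inventory this prints exactly the three blind-spot
# devices: the Hailo-8, the Beast, and the Orin NX.
print(audit(INVENTORY))
```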