LENS 23: Energy Landscape

"What resists change — and what would lower the barrier?"

---

The adoption barrier chart for VLM-primary navigation reveals a stark asymmetry: the multi-query pipeline sits at 15% activation energy, SLAM deployment at 85%. Both appear in the research document as sequential phases, but they are not remotely comparable undertakings. Multi-query is a one-line change inside NavController's run loop: a cycle-count modulo dispatch that alternates goal-tracking, scene classification, obstacle awareness, and place recognition across frames. The research assigns it a 90% probability of success in one session. SLAM deployment consumed six dedicated debugging sessions, three running services, a Docker container, and a patched Zenoh RMW build, and it still exhibits residual queue drops caused by a hardcoded constant in slam_toolbox that cannot be changed without patching the C++ source. The six-times gap in activation energy between these two items is the key finding of this lens.

The "good enough" competitor that VLM-primary navigation must displace is not Roomba. It is the existing VLM-only pipeline that Annie already has. A robot navigating to named goals at 54 frames per second, faster than Tesla FSD's perception rate, is a surprisingly capable incumbent. Every Phase 2 capability must justify its activation energy against that baseline, not against a dumb obstacle-avoidance product.

The switching cost for SLAM is not just technical. It is political capital, measured in trust. One dramatic failure — SLAM loses localization mid-run, Annie drives confidently into the glass door — resets the trust meter regardless of how many successful runs preceded it. Trust is asymmetric: easy to spend, expensive to rebuild. SLAM's activation energy therefore includes not just engineering hours but the trust-recovery sessions potentially required after an unpredictable failure during a Mom-witnessed demonstration.

Who has to say yes for adoption to happen?
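The cycle-count modulo dispatch described above amounts to a one-line rotation over query types. A minimal sketch, assuming hypothetical names (the actual NavController code is not shown in this document):

```python
# Minimal sketch of a cycle-count modulo dispatch.
# Class and list names are illustrative, not the real NavController.
QUERY_MODES = [
    "goal-tracking",         # where is the named goal relative to me?
    "scene classification",  # what room does this look like?
    "obstacle awareness",    # what is directly ahead of me?
    "place recognition",     # have I seen this place before?
]

class NavController:
    def __init__(self) -> None:
        self.cycle = 0

    def next_mode(self) -> str:
        # The one-line change: rotate query types by frame count
        # instead of issuing the same goal-tracking query every frame.
        mode = QUERY_MODES[self.cycle % len(QUERY_MODES)]
        self.cycle += 1
        return mode

nav = NavController()
print([nav.next_mode() for _ in range(5)])
# → ['goal-tracking', 'scene classification', 'obstacle awareness',
#    'place recognition', 'goal-tracking']
```

The point of the sketch is the shape of the change, not the code: no new service, no new process, no new failure mode, just a modulo over a frame counter that already exists.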
There is exactly one decision-maker: Mom. She does not care about loop-closure precision-recall curves or embedding dimensionality. She cares about one question: does the robot do what I asked, without drama, and stop when I tell it to stop? The adoption activation energy is therefore dominated by trust, not by technical complexity. Multi-query lowers the barrier precisely because it produces visible, audible richness without adding any new failure mode. Annie narrates: "I can see a chair on my left and this looks like the hallway." Annie knows more. Annie explains more. The robot becomes legible to its human, and legibility is the currency that buys trust.

The catalytic event is multi-query going live. Here is the mechanism: when Annie narrates scene context instead of silently driving, Mom begins to model Annie's perception as a competency rather than a mystery. A robot that explains itself is a robot that can be trusted incrementally. That trust accumulation lowers the activation energy for every downstream decision — more hardware, SLAM deployment, semantic maps — because Mom has a mental model of what Annie can see and a track record of Annie being right.

Now the literal energy landscape, measured in watts, reveals a seven-times asymmetry that nobody has priced yet. Routing the safety layer through Panda and WiFi costs about 15 watts per inference cycle: the RTX 5070 Ti burns about 10 watts on active inference, and the WiFi radios on both ends, Pi 5 transmitter and Panda receiver, add another 3 to 5 watts during the sustained frame stream. The same detection running on the already-installed, currently-idle Hailo-8 AI HAT costs about 2 watts: YOLOv8n at 430 frames per second, entirely on-robot, with zero radio traffic. That is a roughly seven-times reduction in continuous power draw for identical safety output.
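The seven-times figure and its runtime consequence can be checked with back-of-envelope arithmetic. The wattages come from the text; the 48 Wh figure is the midpoint of the battery range cited in this document, and the 20 W base drive load is purely an assumption for illustration:

```python
# Back-of-envelope check on the safety-layer power asymmetry.
# Wattages are from the text; the base drive load is assumed.
panda_path_w = 10 + 5   # RTX 5070 Ti inference + WiFi radios (upper bound)
hailo_path_w = 2        # on-robot Hailo-8
print(panda_path_w / hailo_path_w)   # → 7.5

# Illustrative autonomy impact, assuming a 48 Wh pack (midpoint of the
# 44-52 Wh range) and an assumed 20 W base load for motors and compute:
battery_wh = 48.0
base_w = 20.0
panda_runtime_min = battery_wh / (base_w + panda_path_w) * 60
hailo_runtime_min = battery_wh / (base_w + hailo_path_w) * 60
print(round(hailo_runtime_min - panda_runtime_min))   # → 49 minutes reclaimed
```

The exact minutes depend entirely on the assumed base load, but the direction is robust: on a pack this small, a 13-watt delta in continuous draw is a large fraction of total runtime, not noise.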
On a 44 to 52 watt-hour battery pack, 13 watts of avoidable inference-plus-radio overhead is not a rounding error. It is measurable minutes of missing autonomy per charge.

The inverse case is equally counterintuitive. Beast has been always-on since session 449, burning 40 to 60 watts idle regardless of workload. Any ambient observation or background reasoning scheduled onto Beast has a marginal power cost of effectively zero, because those watts are already flowing out of the wall socket. Not all always-on is equal. Always-on-idle is sunk cost, and scheduling work onto sunk cost is free energy.

Hardware cost, at $500 to $800 for the full stack, is not the binding constraint. It is a trailing indicator. Adoption does not start with hardware. It starts with: does the software convince a skeptical household member that the robot is worth having? Trust first, then complexity, then cost. The adoption energy landscape is serial, not parallel.

The three barriers that cannot be engineering-solved are SLAM complexity, WiFi reliability, and trust. SLAM complexity is an infrastructure problem — it takes time and multiple debugging sessions regardless of skill. WiFi reliability is environmental — you cannot guarantee sub-100-millisecond latency in every home. Trust is human — it accumulates through repeated demonstration, not through architecture documents. Multi-query addresses the third barrier directly and cheaply. The first two barriers matter only after the third is crossed.

KEY FINDINGS:

The six-times activation energy gap between multi-query and SLAM is the load-bearing asymmetry. Both appear as sequential phases in the research, but they belong to fundamentally different implementation classes. Executing multi-query first does not delay SLAM. It builds the trust reservoir that makes SLAM worth attempting.

The "good enough" incumbent is Annie herself, not Roomba.
Phase 2 capabilities must justify their activation energy against an already-working VLM pipeline. Multi-query justifies itself immediately. SLAM must justify itself against six debugging sessions and three new services — and that justification is earned through the trust account that multi-query builds first.

Trust is the rate-limiting reagent. Mom's "yes" lowers every other barrier. Multi-query is the cheapest trust-building instrument available. It narrates Annie's perception aloud, turning a mystery into a competency. Every adoption decision downstream becomes easier once the human has a mental model of what Annie can see.

Two literal-energy wins are sitting unclaimed. Robot battery: moving the safety layer from Panda-plus-WiFi at about 15 watts to the idle Hailo-8 at about 2 watts is a seven-times power reduction for identical output, reclaiming meaningful minutes of autonomy per charge and removing the WiFi radio from the safety path entirely. Beast cycles: the GB10 DGX Spark is already burning 40 to 60 watts idle, so any ambient observation or overnight analytics scheduled onto it has a marginal power cost of effectively zero. Always-on-idle is sunk cost, and scheduling work onto sunk cost is free energy. These two wins are the new ground floor of the energy landscape — cheaper than multi-query, more impactful than SLAM.

Cross-reference Lens 06 on hardware topology, Lens 15 on the WiFi cliff edge, Lens 19 on Hailo activation, and Lens 24 on Beast sunk-cost reasoning.
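The sunk-cost accounting behind the Beast item can be made concrete. The idle figure comes from the text; the job wattage and duration are assumptions. For ambient workloads light enough to stay within the idle power envelope the marginal cost is genuinely zero; a heavier job pays only the delta above idle, never its full draw:

```python
# Sunk-cost energy accounting for an always-on machine.
# Idle draw (40-60 W) is from the text; the job figures are assumptions.
idle_w = 50            # Beast's idle draw, midpoint of the 40-60 W range
job_w = 120            # assumed draw while running overnight analytics
job_hours = 6.0

naive_wh = job_w * job_hours                 # charging the job its full draw
marginal_wh = (job_w - idle_w) * job_hours   # charging only the delta above idle
print(naive_wh, marginal_wh)                 # → 720.0 420.0
```

The same job scheduled onto a machine that would otherwise be powered off costs the full naive figure; scheduled onto Beast, it costs only the delta, and a workload that fits inside the idle envelope costs nothing at all.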