LENS 15: CONSTRAINT RELAXATION — "What if the rules changed, or what if they were already negotiable?"

The "last 40% accuracy costs 10x the hardware" observation is the load-bearing truth of this architecture. Annie's nav stack at 60% goal-finding accuracy needs one Pi 5, one lidar, and one USB camera: under $150 total. Annie at 90% accuracy adds Panda, an Orange Pi 5 Plus with 8 gigabytes of VRAM, plus a dedicated WiFi channel and a 4-tier software stack across three machines. The marginal 30 percentage points of accuracy cost roughly 2.5 times the total hardware budget and all of the distributed-system complexity. That tradeoff is not obviously worth making for a home robot whose worst-case failure mode is "turn around and try again."

There is a relaxation pattern even cheaper than buying a smaller model. Call it DORMANT-HARDWARE ACTIVATION. Before any new purchase, Annie's owner already has three idle compute tiers the original architecture did not count. First: the Hailo-8 AI HAT+ on the Pi 5, 26 TOPS, idle for navigation today yet capable of YOLOv8n at 430 frames per second with sub-10-millisecond latency and zero WiFi dependency. Second: Beast, a second DGX Spark with 128 gigabytes of unified memory, always on but workload-idle since session 449. Third: an Orin NX 16-gigabyte module, 100 TOPS of Ampere compute, already owned and reserved for a future Orin-native robot chassis.

The VRAM ceiling that forced Gemma 4 E2B to juggle four jobs, the WiFi cliff edge that made safety feel fragile, the compute budget that capped multi-model pipelines: all become negotiable without buying anything. This is zero-capex relaxation. Unlike spending $250 on an Orin NX or $500 on a bigger GPU, activating hardware you already own costs only engineering time.

Three constraints are relaxable today for under $200 combined. First, speed: dropping from 1 meter per second to 0.3 meters per second costs nothing and eliminates turn overshoot and WiFi-induced drift.
Second, accuracy target: accepting 60% first-try accuracy with a retry loop produces roughly 85% task success at zero GPU cost, no Panda required (with one retry and independent attempts, 1 - 0.4^2 = 84% cumulative). Third, WiFi to USB tether: an $8 cable eliminates the cliff edge at the cost of tethering Annie to a 2-meter reel.

The constraint the user does not actually care about is SLAM accuracy. For fetch-the-charger and avoid-Mom, Annie does not need a globally consistent map. The VLM alone handles the real questions at 60 to 70% accuracy, recoverable with retry, while the globally consistent map costs three SLAM services and five debugging sessions to maintain.

Hardware trends will relax the VRAM constraint within 18 to 24 months, but dormant-hardware activation collapses that timeline to weeks. The Jetson Orin NX 16-gigabyte, already owned, doubles Panda's VRAM ceiling at zero incremental cost the day it is activated. Beast hosts specialist models without touching Panda's budget. The Hailo-8 carries the safety layer off-GPU entirely. The household does not have to wait for 2027; the dormant compute is already on-site.

The most architecturally disruptive relaxation is right-sizing the model to the task. Every "LEFT MEDIUM" command currently pays Gemma 4 E2B's full autoregressive cost for a job that is really detection. Open-vocabulary detectors close this gap: NanoOWL at 102 frames per second for noun goals, GroundingDINO 1.5 Edge at 75 frames per second with 36.2 AP zero-shot for richer prompts. Both fit in TensorRT on Panda at a fraction of Gemma's 3.2 gigabytes. Route goal-finding to them; keep Gemma resident for questions that actually require language. Add the Hailo-8 as L1 safety, and the architecture finally matches the dual-process result (66% latency reduction, 67.5% versus 5.83% success) without a single new hardware purchase.

Nova's note. Three idle compute tiers make zero-capex relaxation a real option, not an aspiration. Right-size the model to the task: two tools sized to their job beat one tool overpaying for generality on every frame. Speed is a free constraint.
SLAM accuracy is a constraint the user does not care about. The household already owns the answer; the work is activation, not acquisition.
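The retry arithmetic behind the 60%-to-85% claim above is worth making explicit. A minimal sketch, assuming each attempt is independent; the function name and parameters are illustrative, not part of Annie's actual stack:

```python
def cumulative_success(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries
    succeeds, given per-try success probability `p_single`."""
    return 1.0 - (1.0 - p_single) ** attempts

# 60% per try, one retry allowed (two attempts total):
print(cumulative_success(0.60, 2))  # ~0.84, close to the ~85% figure in the text
```

This is why the retry loop is such cheap leverage: a second attempt converts a mediocre per-try rate into a respectable task-level rate, and a third attempt (1 - 0.4^3, about 94%) closes most of the remaining gap without any new hardware.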
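The right-sizing route described in the note can be sketched as a tiny dispatcher. Everything here is illustrative: `is_detection_goal` is a stand-in heuristic, and the stub handlers stand in for whatever NanoOWL/GroundingDINO and Gemma endpoints the stack actually exposes:

```python
def is_detection_goal(command: str) -> bool:
    # Stand-in heuristic: short noun-phrase goals ("charger", "LEFT MEDIUM")
    # go to the detector tier; question-like or long commands go to the VLM.
    words = command.strip().split()
    return len(words) <= 3 and "?" not in command

def detector_stub(command: str) -> str:
    # Placeholder for an open-vocabulary detector call (NanoOWL / GroundingDINO).
    return f"detector: bounding box for '{command}'"

def vlm_stub(command: str) -> str:
    # Placeholder for a resident language-model call (Gemma).
    return f"vlm: reasoned answer for '{command}'"

ROUTES = {
    "detector": detector_stub,  # fast tier, pennies per frame
    "vlm": vlm_stub,            # expensive tier, reserved for real language
}

def dispatch(command: str) -> str:
    tier = "detector" if is_detection_goal(command) else "vlm"
    return ROUTES[tier](command)

print(dispatch("LEFT MEDIUM"))                        # detector path
print(dispatch("where did Mom leave the charger?"))   # vlm path
```

The design point is the routing table itself: adding the Hailo-8 as an L1 safety tier is just one more entry and one more predicate, which is how two tools sized to their job stay simpler than one tool overpaying for generality.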