HER-OS RESEARCH ANALYSIS

VLM-Primary
Hybrid Navigation

58 Hz vision meets SLAM geometry. Analyzed through 26 lenses across 8 categories. Waymo. Tesla. VLMaps. Deconstructed.

Session 86 · April 13, 2026 · 352 lines · 21 KB

Version 1 — April 13, 2026
Versions: v1 (Apr 13, this version) · v2 (Apr 14, enriched) · v3 (Apr 16, Hailo-8 / Orin NX / dual-process)

DECOMPOSE

Strip to structure

LENS 01

First Principles X-Ray

"What must be true for this to work?"

CONSTRAINT LAYERS: PHYSICS → ENGINEERING → CONVENTION
PHYSICS
Light needs line-of-sight

Camera useless around corners. Lidar compensates with 360°.

PHYSICS
Motors have inertia

5° turn at speed 30 overshoots by 30+°. Not a bug — physics.

ENGINEERING
WiFi links Pi ↔ Panda

18ms inference is meaningless if network adds 50ms. WiFi is the real bottleneck.

CONVENTION
"VLM must output text"

Could output embeddings directly (Capability 4). Text is convenient, not required.

CONVENTION
"Need encoder odometry"

TurboPi has no encoders. Using rf2o laser odometry. Convention broken successfully.

The deepest truth: at 1 m/s with 58 Hz VLM, the world moves only 1.7 cm between frames. Temporal consistency is free — consecutive answers should agree because the scene barely changed. Any disagreement is either hallucination or a genuine scene transition, detectable through variance tracking.

The most important convention-disguised-as-physics: VLM must output text. The 150M-param ViT encoder produces a 280-token feature vector in 14ms — text decoding adds 4ms. For place recognition, the embedding IS the output.

Temporal surplus is the foundation. 58 Hz gives so many redundant frames that single-frame errors are statistically insignificant with EMA filtering.
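The EMA filtering and variance tracking described above can be sketched in a few lines. The alpha=0.3 value appears elsewhere in this analysis; the class shape, threshold, and method names are illustrative assumptions, not the actual implementation.

```python
class EmaTracker:
    """Exponentially weighted mean + variance over per-frame VLM estimates.

    At 58 Hz and 1 m/s the scene moves ~1.7 cm per frame, so consecutive
    answers should agree. A variance spike therefore flags either a
    hallucination or a genuine scene transition.
    """

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.mean = None   # smoothed estimate
        self.var = 0.0     # exponentially weighted variance

    def update(self, x: float) -> float:
        if self.mean is None:
            self.mean = x
            return x
        delta = x - self.mean
        # Standard EWMA update for mean and variance.
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return self.mean

    def is_transition(self, threshold: float = 4.0) -> bool:
        # Sustained disagreement (high variance) marks a scene change;
        # a single-frame outlier decays quickly and stays below threshold.
        return self.var > threshold
```

A single hallucinated frame shifts the mean by only 30% of the error, while a real scene change keeps pushing variance up until `is_transition()` fires.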

WiFi, not VLM speed, is the binding constraint. Network jitter turns 18ms inference into 70ms round-trip.

What constraint does everyone treat as fundamental but is actually a choice?

"One question per frame." At 58 Hz, alternating queries across frames gives 4 parallel perception tasks at 15 Hz each. The single-query assumption comes from slower systems.
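The frame-alternating dispatch is just `cycle_count % N` over a query list, as the ground-level view later notes. A minimal sketch, in which the prompt strings and function name are placeholders rather than the real pipeline API:

```python
# Round-robin multi-query dispatch: rotating four prompts at 58 Hz
# yields four parallel perception streams at ~14.5 Hz each.
QUERIES = [
    "Where is the goal? Answer LEFT/RIGHT/AHEAD + NEAR/MEDIUM/FAR.",
    "What room is this? One word.",
    "Obstacle ahead? YES/NO.",
    "Person visible? YES/NO.",
]

def select_query(cycle_count: int) -> str:
    """Pick this frame's question via cycle_count % N."""
    return QUERIES[cycle_count % len(QUERIES)]
```

Each answer stream then gets its own smoothing filter, since consecutive samples of one query arrive every fourth frame.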


LENS 02

Abstraction Elevator

"What do you see at each altitude?"

SAME SYSTEM, FOUR ALTITUDES
30,000 FT
Robot that navigates to named goals

"Go to the kitchen" — understands rooms, avoids obstacles, reports what it sees.

10,000 FT
4-tier fusion hierarchy

Strategic (Titan, 1 Hz) → Tactical (Panda VLM, 29 Hz) → Reactive (Pi lidar, 10 Hz) → Kinematic (IMU, 100 Hz).

GROUND
cycle_count % N in _run_loop()

EMA alpha=0.3, scene labels in SLAM grid cells, sonar ESTOP at 250mm.

BYTE LVL
18ms/frame, 280-token ViT, 150M params

llama-server → Gemma 4 E2B → 1-2 token response. USB serial IMU at 100 Hz.

The abstraction leak between 10,000 ft and ground level is WiFi. The clean 4-tier hierarchy assumes instant inter-tier communication, but tiers run on different hardware (Titan, Panda, Pi) connected by household WiFi. Another leak: the 30,000 ft pitch says "named goals" but ground-level outputs "LEFT MEDIUM" — qualitative directions, not coordinates. Semantic mapping (Phase 2c) bridges this gap.

WiFi leaks across all abstractions. Clean tier diagrams hide that Tier 2 and Tier 3 talk over household WiFi.

"LEFT MEDIUM" is the glass ceiling until Phase 2c maps VLM text to grid cells.
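Until Phase 2c, a qualitative label like "LEFT MEDIUM" has to be mapped straight onto a motion command. The label set and gain values below are assumptions for illustration; the analysis only states that outputs are qualitative directions, not how they are consumed.

```python
# Map qualitative VLM answers ("LEFT MEDIUM") onto angular-velocity
# commands. Signs and gains (rad/s) are illustrative, not from the source.
TURN_SIGN = {"LEFT": 1.0, "AHEAD": 0.0, "RIGHT": -1.0}
TURN_GAIN = {"NEAR": 0.6, "MEDIUM": 0.35, "FAR": 0.15}

def direction_to_cmd(label: str) -> float:
    """'LEFT MEDIUM' -> +0.35 rad/s; malformed labels command no turn."""
    parts = label.strip().upper().split()
    if len(parts) != 2:
        return 0.0  # fail safe on hallucinated or garbled output
    return TURN_SIGN.get(parts[0], 0.0) * TURN_GAIN.get(parts[1], 0.0)
```

Note the fail-safe default: an unparseable answer yields zero turn rather than a guess, consistent with treating lidar as the authority on geometry.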

LENS 03

Dependency Telescope

"What's upstream and downstream?"

Annie VLM Hybrid Nav
Upstream: Gemma 4 E2B — Google could retire model
Upstream: llama-server — can't expose embeddings (blocks Phase 2d)
Upstream: WiFi — uncontrollable household RF
Downstream: Annie voice agent — spatial questions
Downstream: Context Engine — spatial memories
Downstream: Home automation — room occupancy

The most dangerous dependency: llama-server can't expose intermediate embeddings. Phase 2d is blocked. The workaround — deploying separate SigLIP 2 — costs 800MB VRAM on already-constrained Panda. This upstream limitation cascades into hardware budget decisions. The downstream potential is massive: if semantic maps work, spatial memory feeds voice agent, context engine, and home automation.

Single point of failure: Panda's GPU. VLM + potential SigLIP 2 + other services all compete for 8GB VRAM. No redundancy.
LENS 04

Sensitivity Surface

"Which knob matters most?"

PARAMETER SENSITIVITY
WiFi latency
CLIFF EDGE
Robot speed
high
SLAM accuracy
medium-high
EMA alpha
medium
VLM query rate
low above 15 Hz
Model size
surprisingly low

WiFi has a cliff edge. Below 20ms, fine. At 50ms, degraded. Above 100ms, blind for multiple frames. No graceful degradation — a phase transition. The surprise: VLM query rate barely matters above 15 Hz. The multi-query pipeline's value isn't speed — it's using surplus frames for different questions.
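The cliff can be made explicit with a latency watchdog. The 20 ms / 100 ms thresholds come from the paragraph above; the mode names and fallback policy are assumed for illustration.

```python
# Latency watchdog encoding the WiFi cliff: fine below 20 ms, degraded
# up to 100 ms, effectively blind beyond that. No graceful curve exists,
# so the controller switches modes rather than scaling continuously.
def wifi_mode(p95_latency_ms: float) -> str:
    if p95_latency_ms < 20:
        return "NOMINAL"      # full 58 Hz VLM loop
    if p95_latency_ms < 100:
        return "DEGRADED"     # lower query rate, widen EMA smoothing
    return "LIDAR_ONLY"       # drop the VLM tier, keep reactive SLAM
```

This is the same idea the synthesis later calls graceful degradation: a phase transition handled by explicit mode switching instead of silent staleness.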

Robot speed is the second most sensitive parameter. At 0.5 m/s, VLM sees every obstacle twice. At 2 m/s, obstacles pass within 4-5 frames.

EVOLVE

Trace the arc

LENS 05

Evolution Timeline

"How did we get here?"

2020

Active Neural SLAM

Learned mapper + classical planner hybrid.

2022

SayCan / Inner Monologue

LLM plans, robot executes, VLM feedback loop.

2023

VLMaps + AnyLoc

CLIP embeddings on occupancy grids. Universal place recognition.

2024

OK-Robot + GR00T N1

"Clean integration > fancy models." Dual-rate: VLM 10 Hz + actions 120 Hz.

2025-26

Annie: 58 Hz multi-query + SLAM

Faster than Tesla's 36 Hz. Multi-query splits surplus frames.

2027?

Edge VLA end-to-end?

When 1B VLAs can be fine-tuned on 100 demos, the pipeline simplifies to one model.

Every 1-2 years, the "learned vs classical" boundary shifts. Annie sits at the pragmatic fusion point — off-the-shelf VLMs for perception, classical SLAM and A* for planning. The 2027 question: will 1B VLAs become trainable on small datasets? If so, the 4-tier hierarchy collapses into one model. But OK-Robot's lesson persists — clean integration of replaceable components may remain more practical for low-volume robotics.

LENS 06

Second-Order Effects

"Then what?"

VLM Hybrid Nav succeeds → Annie knows room layout
Conversational spatial recall: "What's in the kitchen?"
Context Engine enrichment: spatial + conversation memory
Home automation: room occupancy → smart lighting
Privacy risk: camera-based robot maps every room
VRAM pressure: success drives more VLM services onto Panda

The killer second-order effect: spatial memory meets conversational memory. "Mom mentioned needing her glasses" (Context Engine) + "glasses on bedroom nightstand" (semantic map) + "Mom sounded tired" (SER) = proactive care without being asked. The concerning effect: a camera robot mapping rooms creates comprehensive surveillance, even unintentionally. Needs explicit consent architecture.

POSITION

Map the landscape

LENS 07

Landscape Map

"Where does this sit among alternatives?"

TRAINING EFFORT vs SPATIAL UNDERSTANDING
Training Effort →
↑ Spatial Understanding
Pure SLAM
OK-Robot
VLMaps
Annie
Active NSLAM
Tesla FSD

Annie occupies the sweet spot: zero training with medium-high spatial understanding. The only system that's better without training is VLMaps — Annie's Phase 2c target. The empty quadrant (top-left: high understanding + zero training) suggests an opportunity: a plug-and-play semantic SLAM using foundation model embeddings.

LENS 08

Analogy Bridge

"What is this really?"

SUBMARINE SONAR + PERISCOPE

SUBMARINE

Sonar: 360° geometry. Can't identify.

Periscope: Narrow FOV. Can identify.

Rule: Sonar never trusts periscope over sonar for safety.

ANNIE

Lidar: 360° geometry. Can't identify.

VLM: Narrow camera FOV. Can identify.

Rule: "VLM proposes, lidar disposes."

The analogy predicts something new: submarines use sound signatures to classify contacts through sonar alone. Annie could similarly use lidar scan patterns — chair legs have distinctive shapes in lidar returns — for obstacle classification without VLM. "Lidar fingerprinting" isn't in the research but the analogy suggests it.
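The "VLM proposes, lidar disposes" rule reduces to a veto gate. The 250 mm ESTOP figure appears elsewhere in this analysis; the function shape is a sketch, not the actual safety layer.

```python
# "VLM proposes, lidar disposes": the VLM may suggest a heading, but
# lidar clearance has the final word, like sonar over periscope.
ESTOP_MM = 250  # sonar/lidar emergency-stop threshold from the analysis

def dispose(vlm_cmd: float, min_clearance_mm: float) -> float:
    """Return the VLM's commanded turn rate, or 0.0 if lidar vetoes."""
    if min_clearance_mm <= ESTOP_MM:
        return 0.0  # geometry overrides semantics, unconditionally
    return vlm_cmd
```

The pre-mortem's glass-door scenario is exactly the case this gate cannot catch: when both sensors report clearance, there is nothing left to veto with.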

LENS 09

Tradeoff Radar

"What are you sacrificing?"

VLM+SLAM vs PURE SLAM
SPEED ACCURACY SEMANTICS ROBUST COST SCALABLE
VLM+SLAM Pure SLAM

The tradeoff is clear: VLM+SLAM trades robustness for semantics. Pure SLAM is more robust (no WiFi, no hallucinations, no GPU) but understands nothing. The hidden tradeoff is deployment complexity: setting up the VLM pipeline is far harder than SLAM-only, but this doesn't show on the radar.

STRESS-TEST

Break & challenge

LENS 10

Failure Pre-mortem

"It's October 2026 and this failed. Why?"

APR

Multi-query deployed

Phase 2a works. Team optimistic.

MAY

WiFi degrades during monsoon

Humidity + congestion. 5% of frames timeout.

JUN

Glass door collision

VLM says "CLEAR" through glass. Lidar beam passes at angle. Both sensors wrong.

AUG

VRAM exhaustion

SigLIP 2 + VLM exceed Panda GPU. Phase 2d abandoned.

OCT

Reverted to lidar-only

"Too many moving parts."

Most likely failure: WiFi degrades under real conditions (monsoon, family streaming). The glass door scenario is the unknown unknown — both VLM and lidar agree on the wrong answer. No temporal smoothing fixes systematic errors.

The "VLM proposes, lidar disposes" rule assumes at least ONE sensor is correct. Glass and mirrors fool BOTH simultaneously.
LENS 11

Red Team Brief

"How would an adversary respond?"

COMPETITOR

$200 Robot Vacuum

$5 depth sensor does obstacle avoidance without any VLM. Why spend $200 on GPU for 2 tokens?

SKEPTIC

1B Params / Token

2B-param model outputs 2 tokens. That's 1 billion parameters per output token.

ADVERSARY

Mirror Attack

Place mirror in hallway. VLM sees open path. Lidar confused by reflective surface.

REGULATOR

Camera Always On

Camera images over WiFi to GPU. Even local, what if network is compromised?

The CTO's challenge is hardest: why 2B params for 2 tokens? The value isn't in those tokens — it's in the 150M-param vision encoder's scene understanding. Text output is lossy compression. Phase 2d (embeddings) makes this explicit.

LENS 12

Anti-Pattern Gallery

"What looks right but leads nowhere?"

DON'T

VLM for metric distance. "1.2m away" — monocular depth is ambiguous.

Trust every frame. 2% hallucination = 1 wrong answer per 0.86s.

Fine-tune on Annie's home. Overfits. Breaks when furniture moves.

vs

DO

VLM for qualitative direction. "LEFT MEDIUM" — play to VLM strengths.

EMA smoothing (alpha=0.3). Filters hallucinations, tracks real changes.

Off-the-shelf models. OK-Robot principle: clean integration wins.

Most seductive mistake: asking VLM for metric estimation. "How far is that chair?" feels natural but monocular depth is fundamentally ambiguous. The research wisely keeps VLM outputs qualitative and uses lidar for all geometry.

LENS 13

Constraint Analysis

"What must hold?"

Constraint · Fragility · If Broken
WiFi < 20ms P95 · HIGH · Real-time loop breaks
Panda GPU available · MEDIUM · All VLM stops
llama-server compat · MEDIUM · Inference breaks on update
Single camera · ARTIFICIAL · Could add $15 rear camera
2D nav only · ARTIFICIAL · Could add depth sensor

Two constraints are artificially imposed: single camera and 2D-only. Both relaxable with $15-50 hardware. The constraint most likely relaxed by technology: VLM model size. In 2 years, 1B params will match today's 2B capability, freeing VRAM.

GENERATE

Create new ideas

LENS 14

The Inversion

"What if you did the opposite?"

CURRENT

58 Hz VLM (many approximate answers)

VLM proposes, lidar disposes

Explicit SLAM map

invert

INVERTED

1 Hz VLM with deep analysis per frame

Lidar proposes (A*), VLM confirms "path clear"

No map — implicit memory via embeddings (PRISM-TopoMap)

Most productive inversion: "lidar proposes, VLM confirms." Let SLAM compute A* path, then ask VLM "is this clear?" once per second. This IS Phase 2c, reframed. The radical inversion: no explicit map. PRISM-TopoMap uses embeddings as spatial memory. Eliminates SLAM drift but can't do metric path planning.

LENS 15

Constraint Relaxation

"What if the rules changed?"

IF COMPUTE FREE

Run 26B Gemma 4 every frame. Eliminate Tier 1/Tier 2 split — one model does everything.

&

IF ACCURACY HALVED

400M model on Pi 5 CPU at 100+ Hz. No Panda, no WiFi. Entire complexity exists to push that last 40%.

Most revealing: if accuracy dropped by half, you could run on Pi 5 CPU directly. No GPU, no WiFi, no Panda. The entire system complexity exists to push accuracy from "barely useful" to "reliable." That last 40% costs 10x the hardware. Constraint to be relaxed soonest: VRAM per model, as vision encoders get more efficient.

LENS 16

Composition Lab

"What if you combined them?"

Combination · Emerges · Feasibility
Semantic map + Voice agent · "Annie, what's in the kitchen?" — conversational spatial recall · HIGH
VLM embeddings + Context Engine · Multi-modal memory: what was said WHERE · MEDIUM
Multi-query + SER emotion · Annie navigates gently when user sounds stressed · MEDIUM
VLM + Audio SLAM · Room acoustics fused with scene labels. Untried. · SPECULATIVE

Killer combination: semantic map + voice agent. "Last time I was in the bedroom, I saw your glasses on the nightstand." This bridges Context Engine conversation memory with navigation spatial memory. Neither alone is this powerful. Untried: VLM + audio SLAM — room acoustics differ between rooms and nobody uses this for indoor robots.

LENS 17

Transfer Matrix

"Where else would this thrive?"

SECURITY

Patrol Robots

VLM checks doors/people. SLAM maps building. Multi-query: access + person + anomaly.

AGRICULTURE

Greenhouse Nav

VLM identifies plant health. SLAM maps rows. Multi-query: health + nav + species ID.

RETAIL

Shelf Inspection

VLM reads labels. SLAM navigates aisles. Multi-query: count + price + compliance.

STARTUP

Open-Source VLM Nav

Extract multi-query dispatch + smoothing + semantic map as a ROS2 package.

The multi-query VLM pipeline transfers to any edge robot with a camera and VLM. Security patrols, greenhouses, retail all benefit. Startup opportunity: package this as open-source ROS2 middleware before the space gets crowded.

APPLY

Decide & build

LENS 18

Decision Tree

"When should you choose this?"

Has camera?
NO
Lidar-only SLAM
YES
Need semantics?
NO
Lidar-only
YES
Edge VLM >5 Hz?
NO
Cloud VLM or lidar-only
YES
VLM+SLAM hybrid

Three binary questions: camera, semantics needed, edge VLM speed. All yes = use VLM+SLAM hybrid. This is wrong for pure obstacle avoidance (lidar cheaper) and wrong for cloud robots (latency kills fusion).
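The tree collapses to one small function. Question order and return strings paraphrase the tree above; the 5 Hz cutoff is the figure the tree itself uses.

```python
# The three-question decision tree as code: camera, semantics, edge speed.
def choose_stack(has_camera: bool, needs_semantics: bool,
                 edge_vlm_hz: float) -> str:
    if not has_camera or not needs_semantics:
        return "lidar-only SLAM"
    if edge_vlm_hz < 5:
        return "cloud VLM or lidar-only"
    return "VLM+SLAM hybrid"
```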

LENS 19

Scale Microscope

"What changes at 10x?"

1 robot, 1 home
Works well
10 robots, building
Shared GPU needed
100 robots, campus
Inference cluster
1000 robots, city
Wrong architecture

Phase transition at ~10 robots: "dedicated GPU per robot" breaks. Need shared inference service. At 1000, need fleet learning (Tesla's approach). Annie's architecture is explicitly artisanal, not industrial.

LENS 20

Day-in-the-Life

"Walk through a real scenario."

T+0s

Tier 1: Titan parses "check stove"

Path computed on SLAM map to kitchen.

T+2s

Tier 2: Multi-query VLM starts

"Where is kitchen?" → "RIGHT MEDIUM." "What room?" → "hallway."

T+5s

Tier 3: Lidar clears doorframe

270mm clearance. ESTOP not triggered.

T+35s

Scene change: "kitchen" detected

VLM switches to stove query.

T+42s

"Stove appears off"

Annie reports back by voice. Total: 42 seconds.

The 42-second journey reveals: the value isn't speed (walking is faster). It's not having to get up. The architecture serves a lifestyle use case, not efficiency. Debugging spans 4 tiers on 3 machines — the hidden cost of distributed hybrid systems.

HUMAN

People & adoption

LENS 21

Stakeholder Kaleidoscope

"Who sees what?"

DEVELOPER

"Elegant architecture"

4-tier hierarchy, each tier independently testable. Architecturally satisfying.

MOM (USER)

"Just don't bump things"

Doesn't care about architecture. Judges by outcomes only.

VISITOR

"Is it recording me?"

Camera always on. Even local-only, feels like surveillance.

COMPETITOR

"Over-engineered"

Rule-based + depth sensor does 80% at 1/10th complexity.

Gap between Rajesh, the developer (sees architecture), and Mom (sees behavior). The 4-tier hierarchy is invisible to users. Phase 2a has zero user-visible benefit unless paired with a capability like "Annie, what room are you in?" The visitor's perspective is most underrepresented — needs a privacy mode.

LENS 22

Learning Staircase

"Path from novice to expert?"

15 MIN
Motor commands

/drive/forward, /drive/turn

1 HOUR
VLM goal tracking

"Go to X" with camera.

1 DAY
Multi-query pipeline

Scene + obstacle awareness.

1 WEEK
SLAM integration (PLATEAU)

ROS2 + Docker + Zenoh. Where people get stuck.

1 MONTH
Semantic map + place memory

"Where is the kitchen?" with accumulated spatial knowledge.

The plateau is SLAM integration. Sessions 86-92 were spent on ROS2, Zenoh, MessageFilter bugs. VLM side (Levels 1-3) is straightforward prompting. SLAM requires deep expertise. If packaged for others, SLAM deployment must be radically simplified.

LENS 23

Energy Landscape

"What resists change?"

SLAM deployment
6+ sessions debugging
WiFi reliability
uncontrollable
"Good enough" lidar
incumbent inertia
Multi-query code
1 session, low barrier

Biggest barrier isn't technical — it's the "good enough" incumbent. Lidar-only works for basic avoidance. The VLM hybrid's activation energy is high until you need "go to the kitchen." Catalytic event: a pre-built Docker Compose that works in one command.

DISCOVER

Find the gaps

LENS 24

Gap Finder

"What's not being said?"

VLM inference pipeline
SLAM fusion architecture
Temporal consistency
Evaluation framework
Battery management during exploration

No power-aware planning discussed.

Multi-floor navigation

Stairs/elevator not mentioned.

Night / low-light performance

VLM needs light. Indian homes have intermittent lighting.

Voice + nav interaction

What happens when Rajesh talks to Annie mid-navigation?

SLAM map corruption recovery

No recovery protocol for inconsistent maps.

Most significant gap: night/low-light. Indian homes have intermittent hallway lighting. Solutions: IR illumination, lidar-only fallback, ambient light sensor adjusting VLM trust. The voice + nav interaction gap is critical for her-os — "what do you see?" mid-navigation requires VLM to switch from nav to descriptive queries.

LENS 25

Blind Spot Scan

"What's invisible from your vantage point?"

WIFI

Always Connected

Developed in WiFi-saturated environment. Many homes have dead spots.

COMPARISON

"Faster Than Tesla"

58 Hz vs 36 Hz ignores Tesla has 8 cameras. Information-per-frame matters.

CULTURAL

Western Layout

"Kitchen, bedroom, bathroom" — but what about pooja room, terrace, servant quarters?

CV VIEW

"Just Use Depth"

$50 OAK-D Lite gives real depth at 30 FPS on CPU. 1/100th the compute.

Cultural blind spot: Annie navigates an Indian home with room types the VLM won't recognize — "pooja room" is unlikely in Gemma 4's training data. The CV researcher's point is valid: a $50 depth camera eliminates the glass-door problem. VLM value is specifically semantic understanding, not geometry.

LENS 26

Question Horizon

"What new questions become askable?"

VLM + SLAM + Conversational AI = ?
Conversation maps? Not just spatial — emotional map: "Mom was happy in kitchen this morning"
Audio + VLM nav? "I hear water running → probably bathroom"
Minimum VLM? Could 400M model at 100+ Hz on Pi 5 CPU suffice?
Shared maps? "Annie visited Grandma's and already knows the layout"
Self-improving? Fine-tune on 6 months of Annie's own labeled frames

Most exciting: conversation maps fusing spatial + emotional context. "Mom mentioned glasses" (Context Engine) + "glasses on nightstand" (semantic map) + "Mom sounded tired" (SER) = proactive care. Most practical: what's the minimum VLM? If 400M works on Pi 5 CPU, the entire Panda GPU dependency disappears.

Synthesis: Cross-Lens Convergence

Five innovation signals where multiple lenses independently converged:

1. WiFi Is the Achilles' Heel

Four lenses flag WiFi as critical fragility. Innovation: On-Pi fallback VLM (400M, 2 Hz CPU) that activates when WiFi drops. Catastrophic failure becomes graceful degradation.

Lenses: 04, 10, 13, 25

2. Multi-Query Pipeline = Highest-Value, Lowest-Risk

Temporal surplus enables it, decision tree confirms fit, energy landscape shows lowest barrier. Innovation: Build as open-source ROS2 package. Transferable to any camera-equipped robot.

Lenses: 01, 17, 18, 19, 23

3. Semantic Map + Voice = Killer App

"Annie, what's in the kitchen?" Combining spatial memory with conversational memory creates personal spatial-conversational AI. Innovation: No current product offers this combination.

Lenses: 06, 16, 20, 21, 26

4. Glass Door Problem Has No Current Solution

Both VLM and lidar fail on transparent surfaces. Innovation: Add $50 depth camera (OAK-D Lite). Structured light bounces off glass, filling the gap where both primary sensors fail.

Lenses: 10, 11, 12, 13

5. Transfer Potential Is Massive

Multi-query VLM pipeline works for security, agriculture, retail. Innovation: Extract and publish as standalone framework before the space gets crowded.

Lenses: 05, 17, 19, 23
