# Next Session: Assemble v2 Perspectives HTML

## Context
The `/research_perspectives` skill has been run against `docs/RESEARCH-VLM-PRIMARY-HYBRID-NAV.md`. All 26 deep-dive agents completed successfully in the previous session, with high-quality output. However, those outputs lived only in conversation context and are now lost. The scaffold HTML exists with bug fixes and versioning. This session needs to **re-run all 26 agents and assemble the final v2 HTML**.

## What Already Exists
- **Scaffold**: `docs/perspectives-vlm-primary-hybrid-nav-v2-scaffold.html` (v1 + click-to-reveal fix + version nav + source hash `f26269eb`)
- **v1**: `docs/perspectives-vlm-primary-hybrid-nav-v1.html` (shallow first draft, needs forward link to v2)
- **Skill file**: `.claude/commands/research_perspectives.md` (755 lines, hybrid architecture, proven agent prompts)
- **Catalog**: `docs/research-perspectives-catalog.html` (26 lenses, 8 categories)
- **Research doc**: `docs/RESEARCH-VLM-PRIMARY-HYBRID-NAV.md` (351 lines, hash `f26269eb`)

## What This Session Must Do

### Step 1: Re-run all 26 deep-dive agents
Use the EXACT same agent prompt pattern from the skill file (Phase 3). Launch in 5 batches of ~6 agents. **CRITICAL DIFFERENCE from last session**: each agent must WRITE its output to a file instead of just returning it in context.

Each agent should write its output to: `docs/.perspectives-build/lens-{NN}.html`

Modify the agent prompt to include:
```
After generating your output, WRITE the improved HTML section to:
docs/.perspectives-build/lens-{NN}.html

the TTS text to:
docs/.perspectives-build/lens-{NN}-tts.txt

and the cross-lens notes to:
docs/.perspectives-build/lens-{NN}-crosslens.txt
```

This way, outputs persist on disk across context resets and can be assembled later.
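Before moving to assembly, it is worth checking that every agent actually wrote its files. A minimal sketch of that check, assuming the build-directory layout and file suffixes from the agent prompt template (the `missing_outputs` helper name is illustrative):

```python
from pathlib import Path

# Expected per-lens outputs, matching the paths in the agent prompt template.
SUFFIXES = (".html", "-tts.txt", "-crosslens.txt")

def missing_outputs(build_dir: Path) -> list[str]:
    """Return the names of expected lens files that do not exist yet."""
    missing = []
    for n in range(1, 27):  # lenses 01..26
        for suffix in SUFFIXES:
            f = build_dir / f"lens-{n:02d}{suffix}"
            if not f.exists():
                missing.append(f.name)
    return missing

if __name__ == "__main__":
    gaps = missing_outputs(Path("docs/.perspectives-build"))
    if gaps:
        print(f"{len(gaps)} files missing, e.g. {gaps[:6]}")
    else:
        print("all 26 lenses complete")
```

Running this between batches shows exactly which lenses need a re-run instead of discovering gaps mid-assembly.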

### Step 2: Assembly
Once all 26 files exist in `docs/.perspectives-build/`:
1. Read the scaffold HTML
2. For each lens (01-26), read `docs/.perspectives-build/lens-{NN}.html` and replace the corresponding `<section>` in the scaffold
3. Update the synthesis section with cross-lens convergence (read all cross-lens notes)
4. Write the final HTML to `docs/perspectives-vlm-primary-hybrid-nav-v2.html`
5. Generate `docs/perspectives-vlm-primary-hybrid-nav-v2-sections.json` from TTS text files
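Steps 1, 2, 4, and 5 above can be sketched as a small script. This assumes each lens section in the scaffold carries an `id="lens-NN"` attribute and that sections are not nested; both are assumptions to verify against the actual scaffold, as is the flat `{"lens-NN": tts_text}` shape for the sections JSON. The synthesis update (step 3) stays a separate, judgment-driven step:

```python
import json
import re
from pathlib import Path

def replace_section(html: str, nn: str, new_section: str) -> str:
    """Swap the scaffold's <section id="lens-NN" ...>...</section> for the
    agent-written section. Assumes lens sections are not nested."""
    pattern = re.compile(rf'<section id="lens-{nn}".*?</section>', re.DOTALL)
    # A callable replacement avoids backslash-escape surprises if the
    # agent HTML happens to contain sequences like \1.
    html, count = pattern.subn(lambda m: new_section, html, count=1)
    if count != 1:
        raise RuntimeError(f"lens-{nn}: section not found in scaffold")
    return html

def assemble(scaffold: Path, build_dir: Path,
             final: Path, sections_json: Path) -> None:
    html = scaffold.read_text(encoding="utf-8")
    tts: dict[str, str] = {}
    for n in range(1, 27):
        nn = f"{n:02d}"
        section = (build_dir / f"lens-{nn}.html").read_text(encoding="utf-8")
        html = replace_section(html, nn, section)
        tts_file = build_dir / f"lens-{nn}-tts.txt"
        tts[f"lens-{nn}"] = tts_file.read_text(encoding="utf-8").strip()
    final.write_text(html, encoding="utf-8")
    sections_json.write_text(json.dumps(tts, indent=2), encoding="utf-8")
```

Failing loudly when a section is missing (rather than silently skipping it) matters here: a v2 shipped with a leftover scaffold placeholder is worse than an assembly error.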

### Step 3: Version linking
1. Update v1 HTML to add forward link to v2
2. Ensure v2 has backward link to v1
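The v1 forward link can be a one-line, idempotent insertion. The markup below is hypothetical (match the `class` and placement to the scaffold's actual version-nav styling), and the `<body>`-anchored insert assumes v1's body tag carries no attributes:

```python
from pathlib import Path

# Hypothetical link markup; align with the scaffold's version-nav styling.
FORWARD_LINK = ('<nav class="version-nav">'
                '<a href="perspectives-vlm-primary-hybrid-nav-v2.html">'
                'v2 &rarr;</a></nav>')

def add_forward_link(html: str, link: str = FORWARD_LINK) -> str:
    """Insert the forward link right after <body>, skipping if one exists."""
    if "-v2.html" in html:
        return html  # already linked; keep the edit idempotent
    return html.replace("<body>", "<body>\n" + link, 1)

if __name__ == "__main__":
    v1 = Path("docs/perspectives-vlm-primary-hybrid-nav-v1.html")
    v1.write_text(add_forward_link(v1.read_text(encoding="utf-8")),
                  encoding="utf-8")
```

The idempotence guard means re-running this session's steps cannot stack duplicate links into v1.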

### Step 4: QA + Open
1. Open v2 in browser
2. Test click-to-reveal, theme toggle, TTS buttons, scroll animations
3. Fix any bugs

## Key Findings from Previous Session's 26 Agents (for Synthesis)
These convergence points emerged across multiple lenses:

1. **WiFi is the Achilles' heel** (Lenses 04, 10, 13, 25) — cliff edge at 100ms, no graceful degradation
2. **Multi-query pipeline is highest-value lowest-risk** (Lenses 01, 17, 18, 23) — one-line code change, zero hardware
3. **Semantic map + voice = killer app** (Lenses 06, 16, 20, 21) — "Annie, what's in the kitchen?"
4. **Glass door problem unsolved** (Lenses 10, 11, 12, 13) — both sensors fail on transparent surfaces
5. **Transfer potential is massive** (Lenses 05, 17, 19) — NavCore open-source middleware opportunity
6. **Research describes Waymo then does opposite** (Lens 14) — VLM-primary vs lidar-primary inconsistency
7. **Last 40% accuracy costs 10x hardware** (Lens 15) — 3 constraints relaxable now for <$200
8. **Build the map to remember, not to navigate** (Lens 16) — Annie's value is persistent spatial witness
9. **Voice-to-ESTOP gap** (Lens 21) — Mom can't say "Stop!" with <5s latency
10. **Bypass text-language layer** (Lens 26) — 3 independent question-types converge on this
11. **3 neuroscience mechanisms untried** (Lens 08) — saccadic suppression, predictive coding, hippocampal replay
12. **18-gap Ghost Inventory** (Lens 24) — camera-lidar calibration is hidden prerequisite

## Agent Prompt Template (proven, use as-is)
For each lens, the prompt is:
```
You are a deep-analysis specialist for Lens {N}: {LENS_NAME}.

Read these files:
1. docs/research-perspectives-catalog.html — find lens {N}, extract core question, all driving questions, "Why this works"
2. docs/RESEARCH-VLM-PRIMARY-HYBRID-NAV.md — the research
3. docs/perspectives-vlm-primary-hybrid-nav-v2-scaffold.html — read the lens-{NN} section for context

This lens uses the {PATTERN} visual pattern.

YOUR TASK: [lens-specific instructions from skill file]

After generating, WRITE outputs to:
- docs/.perspectives-build/lens-{NN}.html (the complete <section> HTML)
- docs/.perspectives-build/lens-{NN}-tts.txt (plain text for TTS)
- docs/.perspectives-build/lens-{NN}-crosslens.txt (cross-lens notes)
```

## Lens → Visual Pattern Mapping (for agent prompts)
| Lens | Name | Pattern |
|------|------|---------|
| 01 | First Principles X-Ray | LAYERS |
| 02 | Abstraction Elevator | LAYERS |
| 03 | Dependency Telescope | TREE |
| 04 | Sensitivity Surface | BARS |
| 05 | Evolution Timeline | TIMELINE |
| 06 | Second-Order Effects | TREE |
| 07 | Landscape Map | SCATTER |
| 08 | Analogy Bridge | COMPARE |
| 09 | Tradeoff Radar | RADAR (SVG) |
| 10 | Failure Pre-mortem | TIMELINE |
| 11 | Red Team Brief | CARDS |
| 12 | Anti-Pattern Gallery | COMPARE |
| 13 | Constraint Analysis | MATRIX |
| 14 | The Inversion | COMPARE |
| 15 | Constraint Relaxation | COMPARE |
| 16 | Composition Lab | MATRIX |
| 17 | Transfer Matrix | CARDS |
| 18 | Decision Tree | FLOW |
| 19 | Scale Microscope | BARS |
| 20 | Day-in-the-Life | TIMELINE |
| 21 | Stakeholder Kaleidoscope | CARDS |
| 22 | Learning Staircase | LAYERS |
| 23 | Energy Landscape | BARS |
| 24 | Gap Finder | CHECKLIST |
| 25 | Blind Spot Scan | CARDS |
| 26 | Question Horizon | TREE |
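The mapping table and the prompt template above combine mechanically. A sketch of that combination, with the table transcribed verbatim and the template mirroring the proven prompt (the `build_prompt` helper is illustrative; the `task` argument carries the lens-specific instructions from the skill file):

```python
# Lens -> (name, visual pattern), transcribed from the mapping table.
LENS_PATTERNS = {
    1: ("First Principles X-Ray", "LAYERS"),   2: ("Abstraction Elevator", "LAYERS"),
    3: ("Dependency Telescope", "TREE"),       4: ("Sensitivity Surface", "BARS"),
    5: ("Evolution Timeline", "TIMELINE"),     6: ("Second-Order Effects", "TREE"),
    7: ("Landscape Map", "SCATTER"),           8: ("Analogy Bridge", "COMPARE"),
    9: ("Tradeoff Radar", "RADAR (SVG)"),      10: ("Failure Pre-mortem", "TIMELINE"),
    11: ("Red Team Brief", "CARDS"),           12: ("Anti-Pattern Gallery", "COMPARE"),
    13: ("Constraint Analysis", "MATRIX"),     14: ("The Inversion", "COMPARE"),
    15: ("Constraint Relaxation", "COMPARE"),  16: ("Composition Lab", "MATRIX"),
    17: ("Transfer Matrix", "CARDS"),          18: ("Decision Tree", "FLOW"),
    19: ("Scale Microscope", "BARS"),          20: ("Day-in-the-Life", "TIMELINE"),
    21: ("Stakeholder Kaleidoscope", "CARDS"), 22: ("Learning Staircase", "LAYERS"),
    23: ("Energy Landscape", "BARS"),          24: ("Gap Finder", "CHECKLIST"),
    25: ("Blind Spot Scan", "CARDS"),          26: ("Question Horizon", "TREE"),
}

PROMPT_TEMPLATE = """You are a deep-analysis specialist for Lens {n}: {name}.

Read these files:
1. docs/research-perspectives-catalog.html \u2014 find lens {n}, extract core question, all driving questions, "Why this works"
2. docs/RESEARCH-VLM-PRIMARY-HYBRID-NAV.md \u2014 the research
3. docs/perspectives-vlm-primary-hybrid-nav-v2-scaffold.html \u2014 read the lens-{nn} section for context

This lens uses the {pattern} visual pattern.

YOUR TASK: {task}

After generating, WRITE outputs to:
- docs/.perspectives-build/lens-{nn}.html (the complete <section> HTML)
- docs/.perspectives-build/lens-{nn}-tts.txt (plain text for TTS)
- docs/.perspectives-build/lens-{nn}-crosslens.txt (cross-lens notes)
"""

def build_prompt(n: int, task: str) -> str:
    """Fill the template for lens n with its name, pattern, and task text."""
    name, pattern = LENS_PATTERNS[n]
    return PROMPT_TEMPLATE.format(n=n, nn=f"{n:02d}", name=name,
                                  pattern=pattern, task=task)
```

Generating all 26 prompts from one table keeps the batches consistent and removes the per-lens copy-paste step where pattern mix-ups tend to creep in.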
