# RESEARCH: Observability-First Architecture for HerOS

> **Status:** Active brainstorm (Session 36, Feb 23 2026)
> **Trigger:** Rajesh identified observability as a prerequisite to building the OS
> **Decision:** Observability is not a feature — it's the architecture itself

---

## 1. The Core Philosophy

> "How do I build an OS that is fundamentally transparent by design?"

Observability isn't a dashboard you bolt on. **It's the architecture itself.** Every subsystem must:

1. **Announce** what it's doing
2. **Explain** why
3. **Show** what it considered and rejected
4. **Accept overrides** mid-stream
5. **Report** what changed as a result

This means: no subsystem can exist in HerOS without implementing the Observable interface. If Annie can't explain a decision, she shouldn't be making it. If Rajesh can't see a process running, it shouldn't be running.

---

## 2. The Problem: Why NudgeMe's Pattern Doesn't Scale

### NudgeMe Pipeline (what worked)

The NudgeMe app (Omi hackathon) has a **PipelineViz** widget — a horizontal strip showing 8 sequential audio processing stages:

```
BLE → Decode → Gain+HPF → VAD → Debounce → Accum → STT Engine → Transcript
```

**What made it excellent:**
- **8 stage cards** (80×72px): icon, label, metric, sub-label
- **Color-coded states:** grey (idle), green (active), blue (processing), orange (waiting), red (blocked)
- **Bottleneck detection:** orange dot (●) between stages when data stops flowing — instant diagnosis
- **Telemetry row:** `#1 Whisper 324ms lat · 0.5× RT · ↑ 45 KB · ↓ 120 ch · 1.2s audio`
- **500ms refresh rate:** fast enough to feel alive, slow enough to not flicker
- **Immutable data tuples:** `_VizData` typedef with Selector pattern — rebuilds only when data changes

**Why it worked:** Linear pipeline. One direction. One data flow. Easy to read left→right.

**Reference:** `/home/rajesh/workplace/nudge-omi-app/nudge_omi/lib/widgets/pipeline_viz.dart` (386 lines)

### HerOS: A Different Beast

HerOS is not a pipeline. Annie is **alive** — simultaneously:
- Listening to a transcript
- Remembering something from last week
- Deciding whether to nudge
- Checking if an email draft needs approval
- Noticing it's almost time for a meeting
- Holding back a suggestion because trust isn't high enough yet

**The differences:**

| NudgeMe Pipeline | HerOS |
|---|---|
| 8 sequential stages | ~15-20 concurrent subsystems |
| Linear strip (1D) | Multi-dimensional (parallel + time-triggered + event-triggered) |
| One data flow direction | Multiple concurrent flows, some circular |
| Always running when recording | Some subsystems are time-driven, some event-driven |
| No click interaction | Need to drill into reasoning |
| Fixed stages | Subsystems appear/disappear based on context |

---

## 3. Metaphors Explored

During brainstorming, we explored several mental models:

### 3.1 Event Timeline (scrolling log)
Timestamped stream of internal events, like a chat log for Annie's brain. **Rejected:** too linear, doesn't show parallelism or relationships.

### 3.2 Swimlane Diagram (parallel lanes)
Parallel lanes per subsystem with cross-connections. Like NudgeMe pipeline but multi-lane. **Partially useful:** good for sequential flows within lanes, but doesn't capture the ambient/always-on nature of HerOS.

### 3.3 Inner Monologue (natural language narration)
Annie narrates what she's doing in real-time. **Useful as a component:** the "reasoning" field on each process, but not as the primary visualization (too unstructured).

### 3.4 Conductor's Score
See all subsystems as parallel musical staves. The "conductor" (Rajesh) sees all parts simultaneously, can cue entries, silence sections, adjust tempo. **Good analogy** for the relationship between operator and system.

### 3.5 Newsroom
Multiple stories developing on screens. Editor (Rajesh) decides what leads, what gets killed, what needs more digging. **Good analogy** for prioritization and editorial control.

### 3.6 ★ The Aquarium → Mindscape (CHOSEN)
**Selected as primary visualization metaphor.** *Renamed from "Aquarium" to "Mindscape" in session 48 — better captures the spatial exploration of Annie's inner world.*

You don't control an aquarium moment by moment. You set up the environment — temperature, pH, light cycle, feeding schedule — and then you **watch**. You observe the fish (processes) swimming around, interacting, feeding. When something looks wrong, you intervene. When everything flows, you just watch.

Annie's brain IS the aquarium. You see all the "fish" (active processes) swimming in real-time. Some are fast (transcript processing). Some are slow (trust evolution over weeks). Some only appear at certain times (morning briefing assembly). You watch the ecosystem and intervene when needed.

### 3.7 Process Plant Control Room
Big-screen schematic showing pipes, valves, reactors, tanks with live readings. Green = normal, yellow = attention, red = intervene. **Good for operational monitoring** but less intuitive for a "living" system.

### 3.8 Stage Director's Monitor
Multiple camera feeds, each showing a different "scene" in progress. Director can call "cut" on any scene. **Good for the multi-scene nature** of HerOS but overly hierarchical.

---

## 4. The Aquarium Model (Detailed Design)

### 4.1 The Tank

A visual 2D/3D space representing Annie's mind. Three zones (left → right):

| LISTENING (input) | THINKING (processing) | ACTING (output) |
|---|---|---|
| Transcript receiver | Memory search | Nudge delivery |
| Calendar monitor | Entity extraction | Email draft/send |
| Email inbox watcher | Trust evaluation | Voice response |
| Moltbook observer | Sentiment analysis | Briefing assembly |
| Acoustic monitor | Context synthesis | Backup/maintenance |
| | Habit tracking | |

### 4.2 The Fish (Processes)

Each active process is a "fish" — a visual element with:

```
┌────────────────────────┐
│ 🔍 Semantic Reindex    │  ← icon + label
│ ██████████░░  12 files  │  ← progress + metric
│ Qwen3 → FAISS + BM25   │  ← sub-label (what/how)
│ ● active · 10s          │  ← state + duration
└────────────────────────┘
```

**State → Color mapping** (same palette as NudgeMe):
- **idle:** dim grey (#4a5568) — no glow
- **active:** electric blue (#00b4d8) — glow
- **processing:** mint green (#06d6a0) — pulsing glow
- **waiting:** amber (#ffd166) — steady dim glow
- **blocked:** red (#ef476f) — alert glow
- **always-on:** purple (#8b5cf6) — subtle permanent glow

**Behavior:**
- Fish float/drift gently (not stationary — they're alive)
- Active fish grow slightly (scale 1.05)
- Idle fish are dimmer and smaller
- Fish appear/disappear based on what's active (some only exist at certain times)

### 4.3 The Currents (Data Flow)

Animated connections between fish showing data flowing:
- Luminous threads between connected processes
- Particles flow along the thread when data is moving
- Direction shows data flow (input → processing → output)
- **Bottleneck detection** (from NudgeMe): if a current has data flowing IN but downstream fish is idle → orange indicator
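The orange-dot condition can be sketched as a simple check over the current flow graph (a hypothetical helper; the field names are assumptions, not from the codebase):

```python
def find_bottlenecks(connections, states):
    """Flag edges where data arrived recently but the downstream
    process is sitting idle (the orange-dot condition).

    `connections`: list of (src, dst, has_recent_inflow) tuples.
    `states`: dict mapping process id -> state string.
    """
    return [
        (src, dst)
        for src, dst, has_recent_inflow in connections
        if has_recent_inflow and states.get(dst) == "idle"
    ]
```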

### 4.4 Click-for-Reasoning (The "Why")

Click any fish → detail panel reveals:
- What it's doing (metric, progress)
- **Why** it's doing it (reasoning text)
- What it **considered and rejected** (alternatives)
- What it's **waiting for** (if blocked)
- **Override controls:** approve, reject, skip, retry, edit

This is the critical differentiator from NudgeMe: Annie doesn't just show WHAT — she shows WHY.

### 4.5 The Telemetry Bar

Bottom strip (same concept as NudgeMe's telemetry row):
```
─── 03:14:32 │ 3 active · 2 waiting · 5 idle │ GPU: 42°C │ Memory: 41/128 GB │ Entities: 2,847 ───
```

### 4.6 The Timeline

Horizontal bar showing the simulation/time progression:
- Markers for when each process starts/ends
- Play/pause and speed controls
- Scrubber for time travel (replay what Annie was thinking at any moment)

---

## 5. The Observable Interface (Architecture)

Every subsystem in HerOS must implement this contract:

```python
from abc import ABC, abstractmethod


class Observable(ABC):
    """Every HerOS subsystem must be observable."""

    @property
    @abstractmethod
    def id(self) -> str:
        """Unique identifier: 'memory-search', 'trust-eval', etc."""

    @property
    @abstractmethod
    def zone(self) -> str:
        """Which aquarium zone: 'listening' | 'thinking' | 'acting'"""

    @property
    @abstractmethod
    def state(self) -> str:
        """Current state: 'idle' | 'active' | 'processing' | 'waiting' | 'blocked'"""

    @property
    @abstractmethod
    def metric(self) -> str:
        """Current metric: '3 results in 45ms', '14 decayed', etc."""

    @property
    @abstractmethod
    def sub_label(self) -> str:
        """Context: 'Qwen3 → FAISS + BM25', 'trust: Tier 2', etc."""

    @property
    @abstractmethod
    def reasoning(self) -> str:
        """Natural language explanation of what and why."""

    @property
    @abstractmethod
    def connections(self) -> list[str]:
        """IDs of downstream processes this feeds into."""

    @property
    @abstractmethod
    def overrides(self) -> list[str]:
        """Available human interventions: ['approve', 'reject', 'skip', 'edit']"""
```
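For illustration, a concrete subsystem satisfying this contract might look like the following. The class name and every value below are invented for the sketch; real subsystems would compute them from live state:

```python
class MemorySearch:
    """Illustrative subsystem satisfying the Observable contract."""

    def __init__(self) -> None:
        self._state = "idle"   # updated by the search loop in a real system
        self._metric = ""

    @property
    def id(self) -> str:
        return "memory-search"

    @property
    def zone(self) -> str:
        return "thinking"

    @property
    def state(self) -> str:
        return self._state

    @property
    def metric(self) -> str:
        return self._metric

    @property
    def sub_label(self) -> str:
        return "Qwen3 → FAISS + BM25"

    @property
    def reasoning(self) -> str:
        return "Hybrid search: vector recall plus BM25 keyword precision."

    @property
    def connections(self) -> list[str]:
        return ["trust-eval", "nudge-engine"]

    @property
    def overrides(self) -> list[str]:
        return ["skip", "retry"]
```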

### 5.1 The Event Bus

Every state change emits to a central event bus:

```python
EventBus.emit({
    "source": "memory-search",
    "type": "state-change",
    "state": "active",
    "metric": "3 results",
    "reasoning": "Searching for: Meera birthday...",
    "timestamp": 1709142872,
    "considered": ["full graph scan", "vector-only", "hybrid"],
    "chosen": "hybrid (vector 70% + BM25 30%)",
    "rejected_reasons": {
        "full graph scan": "too slow (>500ms)",
        "vector-only": "misses keyword matches"
    }
})
```

The aquarium UI subscribes to this bus and renders in real-time. The bus also persists events for replay — you can rewind and see what Annie was thinking 10 minutes ago.
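A minimal bus along these lines needs only `emit`, `subscribe`, and an append-only log for replay. This is a sketch, not the real implementation:

```python
import time
from typing import Any, Callable

Event = dict[str, Any]


class EventBus:
    """Minimal pub/sub bus with an append-only log for time travel."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[Event], None]] = []
        self._log: list[Event] = []  # persisted history for replay

    def subscribe(self, handler: Callable[[Event], None]) -> None:
        self._subscribers.append(handler)

    def emit(self, event: Event) -> None:
        # Stamp the event if the source did not, log it, then fan out.
        event.setdefault("timestamp", time.time())
        self._log.append(event)
        for handler in self._subscribers:
            handler(event)

    def replay(self, since: float = 0.0) -> list[Event]:
        """Return logged events at or after `since` (the rewind feature)."""
        return [e for e in self._log if e["timestamp"] >= since]
```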

### 5.2 500ms Refresh (from NudgeMe)

Same pattern as NudgeMe's `_statsTimer`:
- Poll all Observable subsystems every 500ms
- Diff against previous state
- Update only changed fish in the aquarium
- This is the sweet spot: fast enough to feel alive, slow enough to prevent flicker
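The poll-and-diff step can be sketched in Python (NudgeMe's actual widget is Flutter/Dart; the function names here are illustrative). A 500 ms timer would call `snapshot`, `diff` against the previous tick, and redraw only the returned fish:

```python
def snapshot(subsystems) -> dict[str, str]:
    """Capture id -> state for every Observable subsystem."""
    return {s.id: s.state for s in subsystems}


def diff(previous: dict[str, str], current: dict[str, str]) -> dict[str, str]:
    """Subsystems whose state changed since the last tick.

    Only these fish get re-rendered; unchanged ones are left alone,
    which is what prevents flicker at a 500 ms refresh."""
    return {sid: st for sid, st in current.items() if previous.get(sid) != st}
```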

---

## 6. Prototype Plan

We are building three visually distinct prototypes of the aquarium, all driven by the same data (Annie's Scene 1, "The Nightly Garden": 10 processes, 32-second simulation):

### V1: "Deep Ocean" (CSS 3D)
- Dark atmospheric underwater scene
- Fish as glassmorphic cards floating at different depths
- CSS perspective transforms for 3D depth
- Bubbles, light rays, caustic effects
- Side-view aquarium feeling

### V2: "Glass Tank" (Three.js 3D)
- Actual 3D rotatable aquarium
- Fish as glowing geometric shapes
- Camera orbiting
- Point lights on active processes
- Wire-frame tank walls

### V3: "Neural Reef" (Canvas 2D)
- Top-down bioluminescent view
- Processes as pulsing organisms
- Organic bezier-curve tentacles for connections
- Particles flowing along connections
- More abstract, microscope/neural aesthetic

**Files (archived in `docs/archive/`):**
- `docs/archive/aquarium-v1-deep-ocean.html`
- `docs/archive/aquarium-v2-glass-tank.html`
- `docs/archive/aquarium-v3-neural.html`

**Active:** `services/context-engine/dashboard/` (v4 Aquarium / Mindscape)

---

## 7. Scene 1 Process Data ("The Nightly Garden", 3:00 AM)

10 processes that run during Annie's nightly maintenance:

| # | Process | Zone | Timing | Metric | Key Detail |
|---|---------|------|--------|--------|------------|
| 1 | Health Check | listening | 0-3s | All green | GPU, disk, RAM, FAISS, PostgreSQL |
| 2 | Temporal Decay | thinking | 3-10s | 14 decayed | Half-life formula, 186 evergreen protected |
| 3 | Semantic Reindex | thinking | 10-16s | 12 files, 10s | Qwen3 embedding → FAISS + BM25 |
| 4 | Moltbook Observer | listening | 16-22s | 47→6→2 | Lurker model, confidence ceiling 0.6 |
| 5 | Knowledge Tagger | thinking | 20-24s | 2 tagged | Relevance + novelty + action flags |
| 6 | Habit Review | thinking | 22-26s | 4 habits | Morning walk 12d streak, keystone |
| 7 | Nightly Backup | acting | 24-28s | 14MB → Beast | rsync + git commit, 28min recovery |
| 8 | Consistency Check | acting | 28-30s | 0 orphans | Entity ↔ vector alignment, 200ms |
| 9 | Acoustic Monitor | listening | 0-32s | 8kHz mono | ALWAYS ON — fire alarm, distress |
| 10 | Low-Power Mode | acting | 30-32s | GPU → 3% | Whisper warm, waiting for dawn |

**Connections (data flow):**
- health-check → consistency-check
- temporal-decay → reindex
- reindex → consistency-check
- moltbook → knowledge-tag
- consistency-check → low-power
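The "half-life formula" referenced for Temporal Decay presumably follows the standard exponential form. As a sketch (the 30-day default is an assumption, not a documented parameter):

```python
def decayed_weight(weight: float, age_days: float,
                   half_life_days: float = 30.0) -> float:
    """Exponential decay: a memory's weight halves every `half_life_days`.

    Evergreen memories (the 186 protected ones in the table) would be
    exempted before this is applied."""
    return weight * 0.5 ** (age_days / half_life_days)
```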

---

## 8. Open Questions

- **Q22:** How to handle processes that span very different timescales? (Temporal decay runs nightly in seconds; trust evolution happens over weeks.) Do we need a "zoom" level for the aquarium — zoomed in shows seconds, zoomed out shows days/weeks?

- **Q23:** Should the aquarium be the ONLY interface, or should there also be a text-based "audit trail" for grep-ability? (Both have value — visual for monitoring, text for debugging.)

- **Q24:** How does the aquarium interact with voice? Can Rajesh ask "Annie, what are you doing right now?" and get a voice summary of the aquarium state? (This would be Dimension 2: Voice OS reading the aquarium.)

- **Q25:** Should processes that Annie decided NOT to do also appear in the aquarium? (e.g., "Considered sending gift nudge for Meera's birthday but trust level too low" — should this appear as a ghost/phantom fish?)

- **Q26:** How to visualize multi-step reasoning chains? When Annie's Memory Search feeds Trust Evaluation which feeds Nudge Engine — should the aquarium show this as a chain of connected fish, or as one composite "decision pipeline" fish?

---

## Key Insight: Observability Enables Trust

There's a deep connection between observability and the Trust Architecture (ADR-012). Rajesh can only trust Annie if he can **see** what she's doing. The aquarium isn't just a debugging tool — it's the **mechanism by which trust is built.**

Progressive autonomy (T0 observe → T4 auto-act) only works if Rajesh can:
1. Watch Annie operate at each tier
2. Verify she's making good decisions
3. Correct mistakes before they compound
4. Gradually release control as confidence builds

The aquarium makes this possible. Without it, trust is blind faith. With it, trust is earned through transparency.

---

## 9. Aquarium Prototypes (Session 37-38)

Three visualization prototypes were built, all showing Scene 1, "The Nightly Garden" (10 processes, 32-second simulation):

| Version | Tech | File | Status | Notes |
|---------|------|------|--------|-------|
| V1 Deep Ocean | CSS | `docs/archive/aquarium-v1-deep-ocean.html` | Archived | Glassmorphic fish cards, CSS perspective depth, 3 zones. Detail panel with Annie's reasoning. Ambient effects subtle — needs enhancement. |
| V2 Glass Tank | CSS 3D | `docs/archive/aquarium-v2-glass-tank.html` | Archived | 6-face glass box via `perspective` + `preserve-3d` + `rotateX/Y`. Fish at different `translate3d` depths. Mouse-tilt rotation. Originally Three.js but WebGL blocked — rewritten to CSS 3D. Will rebuild with Three.js once WebGL works. |
| V3 Neural Reef | Canvas 2D | `docs/archive/aquarium-v3-neural.html` | Archived | Bioluminescent organisms: wavy membranes, tendrils, orbiting organelles. Organic bezier connections with flowing particles. Purple always-on Acoustic Monitor. Dark microscope/neural aesthetic. |
| **V4 Mindscape** | Vite+TS+Canvas | `services/context-engine/dashboard/` | **Active** | 27 creatures, real-time SSE events, zone layout, bezier connections with particles, time machine, detail panels. Production build 92KB/24KB gzip. |

### Chrome WebGL Issue

**Root cause:** Hybrid GPU (AMD Radeon 890M iGPU + NVIDIA RTX 5090 dGPU) on Linux causes GL library conflicts. Compounded by Chromium bug [#350117524](https://issues.chromium.org/issues/350117524).

- **Symptoms:** `getContext('webgl')` returns `null` (silent failure). `navigator.gpu.requestAdapter()` also returns `null`. Both WebGL and WebGPU hardware paths blocked.
- **Machine:** AMD Radeon 890M (iGPU, drives display, Mesa/radeonsi OpenGL 4.6) + NVIDIA RTX 5090 Laptop (dGPU, CUDA compute only). Chrome 145, Ubuntu 24.04 Wayland.
- **Key discovery:** Display GPU is AMD (via `glxinfo`), NOT NVIDIA. Chrome renders on AMD iGPU. Both `libGLX_nvidia.so` and `libGLX_mesa.so` are installed — Chrome's GL initialization hits the library conflict and fails.
- **Chrome's fallback behavior:** GPU process attempts GL initialization → fails (library conflict) → Chrome restarts GPU process with `--use-gl=disabled`. Visible in `ps aux | grep chrome.*--type=gpu`.

**All attempted fixes — NONE worked:**

| Approach | Flag | Result |
|----------|------|--------|
| GPU blocklist override | `--ignore-gpu-blocklist` | Not enough — initialization still fails |
| XWayland + blocklist | `--ozone-platform=x11 --ignore-gpu-blocklist` | GPU process still sets `--use-gl=disabled` |
| ANGLE OpenGL backend | `--use-gl=angle --use-angle=gl` | GL init fails, falls back to `--use-gl=disabled` |
| ANGLE Vulkan backend | `--use-gl=angle --use-angle=vulkan` | Vulkan init also fails, falls back to `--use-gl=disabled` |
| `--use-gl=desktop` | Native GLX | **Chrome hangs** — AMD/NVIDIA GLX library deadlock |
| `chrome://flags` override | "Override software rendering list" | Not sufficient alone |

**Why all backends fail:** Chrome's GPU process probes the system during initialization. On this hybrid-GPU machine, both AMD (Mesa) and NVIDIA GL/Vulkan libraries are present. The probe hits conflicts regardless of which backend is requested, and Chrome's safety mechanism restarts the GPU process with `--use-gl=disabled`.

**Why Firefox works:** Firefox has its own GL stack that handles multi-vendor setups correctly.

**HTTPS is NOT the issue:** WebGL has zero HTTPS requirement (confirmed via MDN spec). `http://localhost` is already a secure context.

**Chrome status:** BLOCKED — WebGL unavailable in Chrome on this hardware until Chromium fixes hybrid GPU support.

**Firefox WORKS:** Playwright Firefox confirms hardware-accelerated WebGL1 via AMD Radeon 890M (`Radeon R9 200 Series` renderer, Mesa RADV). Playwright Chromium also has WebGL1 via bundled SwiftShader (software). **Use Firefox Nightly for Three.js development, Playwright Chromium for automated tests.**

**Permanent setup (for when it works):**
- Desktop shortcut: `~/.local/share/applications/google-chrome-webgl.desktop` (named "Chrome (WebGL)")
- Launch script: `~/.local/bin/chrome-webgl`
- **Critical:** Chrome is single-instance. Must close ALL Chrome windows before relaunching with new flags.

### V3 Chosen as Foundation (Session 40)

**Decision:** V3 Neural Reef (Canvas 2D) chosen as the aquarium foundation after Rajesh reviewed all three side-by-side in Firefox.

**Why V3 wins:**
- **Zero dependencies** — pure Canvas 2D, no CDN, no library version risks
- **Works everywhere** — Chrome, Firefox, Safari, mobile. No WebGL required.
- **Best looking** — bioluminescent aesthetic is the most visually distinctive
- **Most performant** — Canvas 2D compositing is lightweight, no GPU shader compilation
- **Easiest to extend** — adding new organism types is just JavaScript class methods

**V1 and V2 remain as reference:**
- V1 Deep Ocean: CSS glassmorphic cards (useful pattern for non-canvas UI)
- V2 Glass Tank: Three.js WebGL (reference for if/when Chrome WebGL works)

### V3 Enhancement: Mythical Creature Organisms

**Problem:** All 10 organisms look identical — same membrane shape, same tendrils, same colors. When idle (dim), labels are nearly invisible. You can't glance at the aquarium and know which process is which.

**Design goals:**
1. Each of the 10 processes gets a unique **mythical creature archetype** with distinct shape, color, inner pattern, and silhouette
2. When **idle/dim**, organisms are subdued but labels remain **always readable** (dark background pill + minimum alpha)
3. Each creature's visual identity should be **meaningful** — the shape evokes what the process does

**Creature assignments and rationale:**

| Process | Creature | Why | Accent Color | Shape |
|---------|----------|-----|-------------|-------|
| Health Check | Phoenix | Rises at the start, verifies all systems alive | Amber `#FFA500` | Flame-like membrane, upward wisps |
| Temporal Decay | Jellyfish | Things slowly drift down and fade | Soft pink `#FF69B4` | Dome top + trailing tentacles |
| Semantic Reindex | Kraken | Reaches out to grab and reorganize knowledge | Teal `#00CED1` | Central body + 8 long arms |
| Moltbook Observer | Owl (Oracle Eye) | Silent watcher, all-seeing | Gold `#FFD700` | Large central eye, feathered edge |
| Knowledge Tagger | Starfish | Classifies and sorts into categories | Coral `#FF6B6B` | 5-pointed star membrane |
| Habit Review | Ouroboros | Cycles, streaks, recurring patterns | Emerald `#50C878` | Ring/loop shape, self-referential |
| Nightly Backup | Dragon | Protective guardian, breathes data to Beast | Crimson `#DC143C` | Spiky membrane, wing-like tendrils |
| Consistency Check | Griffin | Noble guardian of truth, cross-references | Silver `#C0C0C0` | Eagle-front, lion-back silhouette |
| Acoustic Monitor | Siren | Always-on listener, mythical sound creature | Purple `#8B5CF6` | Wave-form membrane, sound ripples |
| Low-Power Mode | Luna Moth | Nocturnal, gentle, settles into sleep | Moonlight `#E0E0FF` | Wing-like lobes, antennae tendrils |

**Label readability fix:**
- Dark translucent background pill behind all labels: `rgba(0,0,0,0.6)` with `border-radius: 4px`
- Minimum label alpha raised from 0.25 to **0.55** (idle) — always legible
- Active labels get full brightness + glow
- Creature name shown as secondary label below process name

**Technical approach:**
- Add `creatureType` config to each PROCESS definition
- `Organism` constructor branches on `creatureType` to generate: membrane points (shape), tendril count/style, inner pattern (eye, star, ring, flames, etc.), accent color
- New `drawCreatureInner()` method for type-specific inner details (e.g., eye iris for Owl, star arms for Starfish, flame wisps for Phoenix)
- Labels drawn with `ctx.fillRect` background pill before text

### Three.js V2 Rebuild — In Progress (Firefox)

**Decision:** Use Firefox Nightly for Three.js development. Chrome WebGL remains blocked.

- **Firefox Nightly** (snap `firefox 149.0a1`, edge channel): Hardware-accelerated WebGL1 via AMD Radeon 890M (Mesa RADV). Renderer reports "Radeon R9 200 Series, or similar". Confirmed working.
- **Playwright Chromium**: Software WebGL1 via bundled SwiftShader. Good for automated testing.
- **Playwright Firefox**: Also works with hardware WebGL1. Can be used for headless CI.

V2 Glass Tank rebuilt with Three.js (WebGL1): `WebGLRenderer` for 3D scene (glass tank, caustic floor shader, particle bubbles, glowing fish orbs, connection lines) + `CSS2DRenderer` for HTML fish card labels + `OrbitControls` for mouse interaction. Three.js loaded from CDN via importmap.

---

## 10. Process Architecture Rethink (Session 43)

> **Trigger:** Rajesh asked: "What about Cron/Heartbeat? Omi transcripts? Diarization? Channel listeners? Lane Queue?"
> **Key insight:** The original 10 processes only covered Scene 1 (3:00 AM Nightly Garden). Annie's real architecture spans 5 layers.

### The 5 Process Layers

| Layer | When | Examples |
|-------|------|----------|
| **Always-on listeners** | 24/7 | Acoustic monitor, Omi stream receiver, channel watchers (email/WhatsApp/Telegram/Discord) |
| **Real-time pipeline** | When speech arrives | Lane Queue, STT + diarization, entity extraction, embedding, FAISS indexing |
| **Nightly batch** | 3:00 AM | Temporal decay, delta reindex, consistency check, backup, Moltbook observer |
| **Morning assembly** | 5:30 AM | Morning briefing, promise check, daily wonder, daily comic |
| **On-demand** | User triggers | Memory search, email draft, skill execution, voice calls, nudge delivery |

### Key Architectural Principles

1. **Cron and Heartbeat are triggers, not processes.** The aquarium shows *what work happens*, not *what triggers it*.
2. **Lane Queue is a connection pattern**, not a single organism. It's the pulse of light traveling between thinking-zone creatures in sequence.
3. **Channels are one multi-headed creature** (Hydra) — each head is a channel, scales naturally as channels are added.
4. **Diarization is a distinct process** — "who said what?" is separate from "what entities are mentioned?"

### Proposed 18-Creature Process Map

**LISTENING zone:**
1. Omi Stream Receiver → Jellyfish (passive, catching audio segments)
2. Acoustic Monitor → Siren (always singing/listening, safety-critical)
3. Channel Watcher → Hydra (multi-headed — one per channel)
4. Moltbook Observer → Owl (silent watcher, big eyes, lurking)

**THINKING zone:**
5. Lane Queue → Ouroboros (circular, serial processing)
6. Diarization → Cerberus (3 heads = splitting voices into speakers)
7. Entity Extraction → Kraken (8 arms pulling entities from conversation)
8. Knowledge Graph Builder → Dragon (forging connections between entities)
9. Embedding Generator → Unicorn (magical text→vector transformation)
10. Sensitivity Classifier → Basilisk (dangerous gaze, classifying safe vs. forbidden)
11. Confidence Scorer → Starfish (5 arms = 5 confidence factors)

**ACTING zone:**
12. Temporal Decay → Luna Moth (ephemeral, fading memories)
13. Consistency Check → Griffin (eagle eye + lion strength, cross-referencing)
14. Morning Briefing → Phoenix (rises at dawn from yesterday's ashes)
15. Nudge Engine → Fairy (gentle, light touch, compassion-first)
16. Email Agent → Centaur (half-human tone profiling + half-horse automation)
17. Nightly Backup → Pegasus (flies data between Titan→Beast)
18. Health Monitor → Minotaur (strong guardian, GPU/disk/memory/services)

### Timeline Phases (24-hour cycle)

| Phase | Time | Active organisms |
|-------|------|-----------------|
| Night garden | 3:00-4:30 AM | Luna Moth, Griffin, Pegasus, Minotaur |
| Dawn prep | 5:00-6:00 AM | Phoenix (briefing), Fairy (nudge prep) |
| Daytime | 6:30 AM-11 PM | Jellyfish (Omi), Hydra (channels), full Lane Queue pipeline |
| Always-on | 24/7 | Siren (acoustic), Owl (Moltbook — periodic) |

### Lane Queue Visualization

The Lane Queue is the **connection topology** between organisms:
```
Jellyfish (receive) → Ouroboros (queue) → Cerberus (diarize) → Kraken (extract)
  → Dragon (graph) → Unicorn (embed) → Starfish (score)
```
Visualized as a pulse of light traveling through connected organisms in sequence.

### Status: Implemented in aquarium (session 44). Pending Rajesh's visual review.

---

## 11. Event Traversal — Milestones & Micro-Steps (Session 44)

> **Trigger:** Rajesh asked: "How to display what happened at every milestone and also at every microstep? Audit, debug, optimize, understand. Traverse through the things that happen."

### Two Event Levels

| Level | What it captures | Example |
|-------|-----------------|---------|
| **Milestone** | A process completed a meaningful unit of work | "Entity Extraction found 3 entities from transcript #47" |
| **Micro-step** | An internal decision within a process | "Checking 'Priya' against graph... match, entity #247, confidence 0.91" |

### Use Cases

- **Audit**: "Why did Annie send that nudge at 2pm?" → trace backwards through pipeline
- **Debug**: "Why wasn't Priya recognized?" → drill into extraction micro-steps
- **Optimize**: "Why did this transcript take 4 seconds?" → timing waterfall across Lane Queue
- **Understand**: "What is Annie thinking right now?" → live thought stream

### Three Visualization Approaches (Proposed)

**1. Thought Bubbles (ambient layer)** — Organisms emit bioluminescent bubbles when active. Milestones = large bubbles, micro-steps = tiny bubbles. Bubbles float upward into a scrollable "thought stream" bar. Click a bubble → highlight source organism + detail panel. Bubbles fade over time (temporal decay, naturally).

**2. Timeline Magnifier (navigation layer)** — The existing timeline scrubber gets tick marks at milestones. Scroll wheel zooms in/out. Zoomed out = milestones only. Zoomed in = micro-steps between milestones. Scrubbing replays aquarium state (organisms light up, connections pulse). Step-forward/back buttons.

**3. Trace Waterfall (deep dive layer)** — Click a connection line → waterfall panel opens showing trace timing per organism (like Jaeger/Zipkin). Each row expandable to show micro-steps within that organism. Shows total latency, entity count, edge count.

| Approach | Best for | Use case |
|----------|----------|----------|
| Thought Bubbles | **Understand** | Glanceable, ambient — "what's Annie doing now?" |
| Timeline Magnifier | **Audit / Traverse** | Navigational — "what happened between 2-3pm?" |
| Trace Waterfall | **Debug / Optimize** | Deep dive — "why did this take so long?" |

### Event Model (Data Structure)

```
{
  id:        "evt-00472",
  traceId:   "trace-047",        // groups events across pipeline
  process:   "extraction",       // which organism
  level:     "milestone" | "microstep" | "thought",
  timestamp: 1708819200000,
  summary:   "Found entity: Priya (person, confidence 0.91)",
  detail:    { ... },            // arbitrary JSON
  parent:    "evt-00470",        // milestone this micro-step belongs to
  duration:  340                 // ms, for timing waterfall
}
```

The **traceId** is key — it follows a single transcript's journey from Jellyfish → Ouroboros → ... → Starfish → Phoenix/Fairy. Every organism appends events to the same trace.
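Given the event shape above, a trace's timing waterfall is a group-and-sum over `traceId` (an illustrative helper, not the dashboard's actual code):

```python
from collections import defaultdict


def trace_waterfall(events: list[dict], trace_id: str) -> list[tuple[str, int]]:
    """Total duration (ms) per organism for one trace, in time order.

    Assumes the event fields shown above: `traceId`, `process`,
    `timestamp`, `duration`."""
    rows = [e for e in events if e["traceId"] == trace_id]
    rows.sort(key=lambda e: e["timestamp"])
    totals: dict[str, int] = defaultdict(int)
    order: list[str] = []          # organisms in first-seen order
    for e in rows:
        if e["process"] not in totals:
            order.append(e["process"])
        totals[e["process"]] += e.get("duration", 0)
    return [(p, totals[p]) for p in order]
```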

### Data Infrastructure

- **Real-time**: Events emitted through Event Bus (Section 5), delivered via WebSocket to aquarium
- **In-memory**: Ring buffer (last N hours) for real-time browsing
- **Persistent**: SQLite for historical queries and replay
- **Replay mode**: Historical events read from SQLite, played back through aquarium at adjustable speed
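Replay can be as simple as re-emitting stored events with their inter-event gaps divided by a speed factor (a sketch; the real player would read timestamp-ordered rows from SQLite):

```python
import time


def replay(events: list[dict], speed: float = 1.0):
    """Yield historical events, sleeping the real inter-event gap
    divided by `speed` (2.0 = twice as fast).

    `events` must be sorted by millisecond `timestamp`."""
    prev_ts = None
    for event in events:
        if prev_ts is not None:
            time.sleep((event["timestamp"] - prev_ts) / 1000.0 / speed)
        prev_ts = event["timestamp"]
        yield event
```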

### Status: All 3 layers approved. Navigator design in progress (Section 12).

---

## 12. Event Navigator — Fractal Timeline Panel (Session 44)

> **Trigger:** Rajesh: "I want a navigation interface on the right, inspired by Apple Time Machine. Scroll up/down = move through time. Tap = expand deeper. Study the Nudge dashboard.html timeline."
> **Reference:** `~/workplace/hackathons-2026/omi-hackathon-blr-nudge/backend/static/dashboard.html` (lines 837-8360)

### Nudge Timeline Pattern (what we're adapting)

The Nudge dashboard has a right-side vertical timeline panel with:
- **2-level hierarchy:** Day bubbles (collapsed) → Interaction bubbles (expanded)
- **Split-circle glyphs:** Top half = compliance score, bottom half = sentiment (color-coded)
- **Fish-eye effect:** Gaussian falloff (`σ=45px`, min=18px, max=58px). Items near cursor/anchor enlarge, distant ones shrink.
- **Vertical spine:** Glowing center line connecting all items (gradient: bright at top → faded)
- **Starfield background:** Subtle radial gradient dots for depth
- **Perspective:** `perspective: 400px` on track container for 3D feel
- **Selection model:** Click day → expand to interactions. Click interaction → loads that data into main area.
- **Mouse tracking:** `mousemove` on track → live fish-eye recalculation. `mouseleave` → revert to anchor-based falloff.
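The Gaussian fish-eye is a one-liner. Using the constants noted above (sigma = 45 px, sizes 18-58 px), a sketch:

```python
import math


def fisheye_size(distance_px: float, sigma: float = 45.0,
                 min_px: float = 18.0, max_px: float = 58.0) -> float:
    """Bubble diameter under Gaussian falloff from the focus point:
    full size at the cursor/anchor, shrinking toward `min_px`
    as distance grows."""
    falloff = math.exp(-(distance_px ** 2) / (2 * sigma ** 2))
    return min_px + (max_px - min_px) * falloff
```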

### Aquarium Adaptation: 5-Level Fractal Zoom

Instead of 2 levels, the aquarium navigator has 5-6 levels:

```
Year → Month → Day → Hour → Milestone → Micro-step
```

Each level is a scrollable list in the same panel. Tapping an item drills into the next level. A breadcrumb at the top shows the current path and allows jumping back up.

**Navigation state:** `expandedPath[]` breadcrumb array (e.g., `['2026', '02', '24', '03:00', 'evt-123']`)
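The breadcrumb mechanics reduce to a few list operations (a sketch with assumed level names):

```python
# Hypothetical level names; an empty path lists years, one segment
# lists months within that year, and so on down to micro-steps.
LEVELS = ["year", "month", "day", "hour", "milestone", "microstep"]


def current_level(path: list[str]) -> str:
    """Which level the navigator panel is currently listing."""
    return LEVELS[len(path)]


def drill(path: list[str], item: str) -> list[str]:
    """Tap an item: descend into it (append to the breadcrumb)."""
    if len(path) >= len(LEVELS) - 1:
        raise ValueError("already at the deepest level")
    return path + [item]


def jump(path: list[str], depth: int) -> list[str]:
    """Click a breadcrumb segment to jump back up the hierarchy."""
    return path[:depth]
```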

### Bubble Visual Language per Level

| Level | Bubble shows | Color logic |
|-------|-------------|------------|
| Month/Day | Activity heat dot — how "busy" that period was | White → cyan intensity |
| Hour | Dominant zone (listening/thinking/acting) | Zone color blend |
| Milestone | Creature silhouette mini-icon | Creature's own color |
| Micro-step | Data type dot (entity/edge/confidence) | Entity type color |

### Canvas Behavior at Each Zoom Level

- **Day/Hour level:** Full aquarium, all 18 organisms. Active ones glow, idle dim. Thought bubbles (Layer 1) float up from organisms.
- **Milestone level:** Aquarium highlights the specific organism. Connections pulse showing the data path. Other organisms fade to ~20% opacity. Milestone thought bubble stays pinned.
- **Micro-step level:** Zoomed into organism detail view. Trace waterfall (Layer 3) appears. Individual micro-step thought bubbles show decisions.

### Scrolling = Time Travel

- Scroll up = go back in time, scroll down = go forward
- Fish-eye focus moves with scroll position
- Aquarium canvas animates in sync — organisms light up/dim as you scroll
- Fast scroll = time-lapse (organisms flicker). Slow scroll = step-by-step.
- Scroll is within current zoom level. Click to drill deeper.

### Component Mapping (Nudge → Aquarium)

| Nudge | Aquarium |
|-------|---------|
| `timelineDays[]` | `timelineNodes[]` — hierarchical tree |
| `expandedDay` | `expandedPath[]` — breadcrumb array |
| `renderTimeline()` | Same pattern — renders current level items |
| `applyFishEye()` | Same Gaussian — applied to current level |
| `selectInteraction()` | `selectEvent()` — updates canvas |
| `makeSplitCircle()` | `makeEventBubble()` — creature glyph or heat dot |

### Decisions (Resolved)

- **Q27 → Breadcrumb drill-down.** Rajesh chose breadcrumb (Nudge-style). Simpler, proven.
- **Q28 → Deferred.** Panel width to be settled during implementation (~120-150px expected).
- **Q29 → Yes, canvas animates during scroll.** Rajesh wants to see data flow and parallel processes as he scrolls through time. Click on canvas for deeper exploration at any step.

### Key Interaction Model

- **Right panel scroll** = time travel (organisms light up/dim in sync)
- **Right panel tap** = drill deeper into next level (breadcrumb navigation)
- **Main canvas click** = explore what happened/is happening at that step (deep dive into organism, trace waterfall, micro-steps)
- **Canvas shows parallel execution** — multiple organisms active simultaneously, connections pulsing between them

### Usage Scenarios (validated with Rajesh)

1. **Morning check-in:** Scroll through last night's hours → watch nightly garden play out (Luna Moth→Griffin→Pegasus sequence). 30 seconds to review Annie's entire night.
2. **Debugging:** Scroll to the moment, tap milestone, expand micro-steps → see exactly where extraction failed (e.g., confidence 0.41 below 0.6 threshold).
3. **Optimizing:** Tap milestone → trace waterfall shows timing per organism → spot bottleneck (e.g., Cerberus diarization took 850ms due to noisy audio).
4. **Understanding:** Real-time view, thought bubbles rising from active organisms. Glance to see what Annie is processing right now.
5. **Auditing:** Trace any action back to its origin. Every nudge, every briefing item — tap to see consent tier, cooldown, priority, confidence.

### Status: Design approved. Ready for implementation.

---

## 13. Contextual Communication — Select → Talk → She Knows (Session 44)

> **Trigger:** Rajesh: "After I find the issue, how do I communicate to Annie? Can I select a node, a creature, an event, and directly talk to her about it — so she knows the context instantly?"
> **Key insight:** The aquarium becomes a **bidirectional interface** — not just watching Annie, but talking to her about specific things she did, with shared context.

### Selectable Elements

Everything on the canvas and navigator is selectable:

| Element | Context Annie receives |
|---|---|
| **Organism** (creature) | Process state, current inputs/outputs, last N events, performance stats |
| **Connection** (line) | Data flow history, timing, queue depth, backpressure |
| **Event/Milestone** | Full trace chain, every micro-step, timing waterfall |
| **Micro-step** | Specific decision point, inputs considered, confidence, threshold |
| **Thought bubble** | Reasoning text + source organism + triggering event |
| **Time range** (drag on navigator) | All events in window, all active organisms, aggregate stats |

### Context Bundle (assembled on selection)

```js
{
  selection: {
    type: 'organism' | 'connection' | 'event' | 'microstep' | 'thought' | 'timerange',
    id: 'kraken',
    name: 'Entity Extraction',
  },
  temporal: {
    timestamp: 1708819200000,
    navigatorPath: ['2026', '02', '24', '08:15', 'evt-047-extraction'],
    phase: 'daytime-active',
  },
  trace: {
    traceId: 'trace-047',
    events: [...],           // full event chain
    parentMilestone: 'evt-047',
    position: 3,             // 3rd step in pipeline
  },
  system: {
    memory: { used: '12.4GB', peak: '14.1GB', available: '115.6GB' },
    gpu: { utilization: 34, vram: '8.2GB/128GB' },
    cpu: { utilization: 12, cores_active: 4 },
    locks: ['graph-write-lock (held by dragon)'],
    activeProcesses: ['jellyfish', 'ouroboros', 'cerberus', 'kraken'],
  },
  data: {
    input: 'Transcript segment...',
    output: { entities: [...], skipped: [...] },
  }
}
```

Three data sources: (1) Event store (SQLite), (2) System metrics (ring buffer), (3) Organism config (thresholds/settings).

### Five Conversation Patterns

1. **Point + Ask** — Select Cerberus at 08:15. "Why did this take 850ms?" → Annie explains noisy audio, 3 re-clustering passes.
2. **Point + Command** — Select Priya-skipped micro-step. "Add Priya as alias." → Annie adds alias, offers to reprocess 4 missed mentions.
3. **Point + Compare** — Select Luna Moth nightly run. "Compare with last week." → Annie shows edge count increase, correlates with more conversations.
4. **Multi-select + Ask** — Shift-click Cerberus AND Kraken. "These are always slow. Options?" → Annie proposes 3 optimization strategies with tradeoffs.
5. **Time Range + Ask** — Drag-select 2:00-2:30 PM. "Anything unusual?" → Annie scans all events, flags anomalies.

### UI Elements

1. **Selection glow** — Golden ring around selected element (distinct from active/idle glow)
2. **Context card** — Floating panel near selection showing key stats
3. **Chat input** — Bottom of canvas or context panel. Text input + mic button for voice.
4. **Conversation thread** — Persistent per selection (like PR line comments). Click away + come back = thread preserved.

### Architecture

```
User clicks element → Context Bundle Builder queries:
  1. Event store (SQLite) — trace events ± 5
  2. System metrics — timestamp ± 30s
  3. Organism config — thresholds, stats
                    ↓
User message + Context Bundle → Claude API → Annie responds with full awareness
```

### The Paradigm Shift

This turns the aquarium from **observability dashboard** (read-only) into **observability conversation** (bidirectional). The difference: you don't describe what you're looking at — Annie already sees it.

### Status: Design approved. Ready for implementation.

---

## 14. Implementation: Canvas Interaction & Memory Visualization (Sessions 45-47)

These features were built during implementation and evolved from Rajesh's iterative feedback — not pre-designed, but emergent from hands-on use of the aquarium prototype.

### 14.1 Scroll-Wheel Timeline Scrubbing

**Problem:** Progress bar click was too imprecise; play/pause was too passive. Rajesh wanted hands-on time control.

**Solution:** Canvas `wheel` event scrubs `simTime`. Scroll down = forward in time, scroll up = backward. Auto-pauses playback to avoid fighting the scroll. Normalizes `deltaMode` (pixel vs line mode for different input devices). Clamped to `[0, 32)` simTime range (~03:00-23:00 clock time).
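A hedged sketch of that handler, showing the `deltaMode` normalization and clamp described above. The scrub sensitivity (`PIXELS_PER_SIM_UNIT`) and the 16px-per-line factor are assumptions, not values from the prototype:

```javascript
// simTime range from the text; sensitivity constants are assumed.
const SIM_MIN = 0;
const SIM_MAX = 32;               // ~03:00-23:00 clock time
const PIXELS_PER_SIM_UNIT = 120;  // assumed scrub sensitivity

let simTime = 0;
let playing = true;

function handleWheelScrub(e) {
  playing = false;  // auto-pause: playback must not fight the scroll
  // deltaMode 1 = line-based wheel mice; 0 = pixel-based trackpads.
  const px = e.deltaMode === 1 ? e.deltaY * 16 : e.deltaY;
  // Scroll down (positive deltaY) = forward in time; up = backward. Clamp to range.
  simTime = Math.min(SIM_MAX, Math.max(SIM_MIN, simTime + px / PIXELS_PER_SIM_UNIT));
}
```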

**Design decision:** Kept the bottom controls bar (play/pause, speed buttons) alongside scroll — play/pause is "sit back and watch" mode, scroll is active investigation. May simplify later (open question).

### 14.2 Depth-Adaptive Scroll

**Problem:** After drilling into hour/milestone level via the navigator, scroll wheel still changed simTime at the coarse 24-hour granularity. Scrolling through milestones one-by-one felt impossible.

**Solution:** `handleWheelScrub()` checks `navState.currentLevel`:
- **Day level:** Continuous simTime scrub (original behavior)
- **Hour level:** Accumulates scroll delta, triggers discrete step through milestone events at 80px threshold
- **Milestone level:** Same stepping through microstep events

Each step: selects the event, jumps `simTime`, highlights nav item, flashes the organism on canvas, updates the waterfall panel. `scrollAccumulator` resets on `drillDown`/`drillUp` transitions.

**Key pattern — discrete vs continuous input:** At coarse zoom, scroll is a slider. At fine zoom, scroll is arrow-key stepping. The accumulator acts as a debounce + quantizer — absorbs trackpad micro-scrolls, fires discrete steps only when enough intent builds up.
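The accumulator-as-quantizer can be sketched as follows (the 80px threshold is from the text; the function names are assumed):

```javascript
// 80px of accumulated scroll per discrete event step (from the text).
const STEP_THRESHOLD = 80;
let scrollAccumulator = 0;

// Absorb wheel deltas; return the number of discrete steps (+/-) to take now.
// Trackpad micro-scrolls accumulate silently until enough intent builds up.
function accumulateScroll(deltaPx) {
  scrollAccumulator += deltaPx;
  let steps = 0;
  while (scrollAccumulator >= STEP_THRESHOLD) { steps += 1; scrollAccumulator -= STEP_THRESHOLD; }
  while (scrollAccumulator <= -STEP_THRESHOLD) { steps -= 1; scrollAccumulator += STEP_THRESHOLD; }
  return steps;
}

// Reset on drillDown/drillUp so leftover momentum doesn't leak across levels.
function resetAccumulator() { scrollAccumulator = 0; }
```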

### 14.3 Entity Memory Strata with Type Labels

**Problem:** Entity substrate dots were colored by type but unlabeled — couldn't tell what each color meant without a legend.

**Solution:** Type swim lane labels drawn as a header row just below the sediment line (entity strata boundary). 8 labels: Person, Place, Topic, Promise, Event, Emotion, Decision, Habit. Each with a colored dot matching the entity color. Navigator-aware width (`canvasWidth - 150px`) so rightmost labels aren't hidden behind the navigator panel.

**Layout evolution** (4 rounds of Rajesh feedback):
1. Labels at bottom → too close to controls
2. Labels at both top and bottom → bottom still clashed
3. Removed bottom, kept header row only → entity dots overlapped labels
4. Entity zone pushed 28px below sediment line → clean separation

### 14.4 Proportional L0/L1/L2 Tier Bands

**Problem:** Entity substrate was a flat zone — no visual distinction between memory tiers. Couldn't see the three-tier memory architecture (Episodic L0, Semantic Graph L1, Communities L2).

**Solution:** Three horizontal bands within the entity zone, sized proportionally to actual entity count per tier. L1 (semantic graph) is typically the largest band (~93 entities), L2 (~62), L0 (~45). Each band has:
- Subtle background alpha (darker = deeper tier)
- Dashed divider line between tiers
- Left-edge label with tier name + count ("L1 · consolidated (93)")

**Rajesh's initial assumption** was that L2 would be largest ("nothing is deleted"). In practice, L1 accumulates the most entities because promotion from L0→L1 happens nightly, while L1→L2 (community/pattern formation) is less frequent.
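The proportional sizing can be sketched as follows (tier counts from the text; `tierBandHeights` is a hypothetical name):

```javascript
// Counts from the text: L0 = 45, L1 = 93, L2 = 62 (200 entities total).
const tierCounts = { L0: 45, L1: 93, L2: 62 };

function tierBandHeights(zoneHeightPx) {
  const total = Object.values(tierCounts).reduce((a, b) => a + b, 0);
  const heights = {};
  for (const [tier, count] of Object.entries(tierCounts)) {
    heights[tier] = (count / total) * zoneHeightPx;  // band height ~ entity count
  }
  return heights;  // L1 gets the largest band
}
```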

### 14.5 Dynamic Tier Transitions

**Problem:** Entities were statically placed in their initial tier — no visible movement as Luna Moth runs nightly decay or entities get promoted.

**Solution:** Each `EntityNode` gets an `effectiveTier` that changes as `simTime` passes promotion events. When a tier change occurs:
- Entity drifts to its new band position (via existing layout lerp)
- Glow trail animation (2s) — downward-streaking light trail showing the transition path
- Pulse (1.5s) — brief brightness increase on arrival

**Synthetic promotion events:**
- 8 L0→L1 nightly (simTime 0.5-3.5, during Luna Moth's consolidation pass)
- 4 L1→L2 deep nightly (simTime 1.0-3.0, community detection)
- 4 L0→L1 daytime micro-promotions (simTime 10-28, Kraken high-confidence extractions)
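A minimal sketch of the `effectiveTier` resolution, assuming each entity carries promotion events shaped as `{ simTime, toTier }`, sorted ascending by time (the shape is an assumption):

```javascript
// Resolve which tier an entity occupies at a given simTime by replaying
// every promotion event that has already happened.
function effectiveTier(entity, simTime) {
  let tier = entity.initialTier;               // e.g. 'L0'
  for (const p of entity.promotions) {
    if (p.simTime <= simTime) tier = p.toTier; // promotion already passed: apply it
    else break;                                // sorted, so the rest are in the future
  }
  return tier;
}
```

As `simTime` scrubs past a promotion event, the returned tier flips, which is what triggers the drift, glow trail, and arrival pulse.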

### 14.6 Navigator ↔ Canvas Bidirectional Sync

**Problem:** Navigator panel and canvas felt disconnected — clicking through time on the right didn't clearly map to what changed on the left. Scrolling the canvas didn't update the navigator.

**Solution — three sync mechanisms:**

1. **"You are here" marker:** Current hour's nav-item gets a cyan glow ring + auto-scrolls into view. Updates every frame from the render loop via `syncNavigatorToSimTime()`. `lastSyncedHour` optimization prevents unnecessary DOM updates.

2. **Activity summary:** Small text area at top of navigator shows current clock time + nearest event in plain language (e.g., "11:12 Sensitivity Gate / Classified: Open (team standup)"). Updates as simTime changes.

3. **Organism flash:** When clicking a navigator event at hour level, the corresponding organism on canvas gets a bright expanding glow ring (1.5s duration) — an unmistakable "look here" signal.
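The per-frame sync guard (mechanism 1) can be sketched as follows; only `syncNavigatorToSimTime` and `lastSyncedHour` come from the text, and the DOM work is stubbed into a callback:

```javascript
// Called every frame from the render loop. Returns true only when the
// "you are here" marker actually moved, so DOM updates stay cheap.
let lastSyncedHour = -1;

function syncNavigatorToSimTime(simTime, applyHighlight) {
  const hour = Math.floor(simTime);
  if (hour === lastSyncedHour) return false;  // same hour: skip the DOM update
  lastSyncedHour = hour;
  applyHighlight(hour);  // real code: cyan glow ring + scrollIntoView on the nav item
  return true;
}
```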

### 14.7 Trace Waterfall Panel

Glassmorphic floating panel (340px × 60vh, fixed at left:20px, top:80px). Shows when a milestone event is selected. One row per pipeline organism in the trace. Timing bars color-coded: green (<200ms), yellow (200-500ms), red (>500ms). Rows expandable to show microsteps. Entity impact tags at bottom. Currently: fixed position, close-only (no drag, no collapse, no reopen trigger).

**Open design question:** Waterfall needs drag-to-reposition, collapse toggle, and a way to reopen after closing (see Section 15).

### Status: All features implemented and tested (228 Playwright assertions).

---

## 15. Open Design Questions (Sessions 45-47)

- **Q30: Controls bar simplification** — Bottom controls (play/pause, speed, progress bar) overlap with scroll-wheel scrubbing. Keep both? Simplify? Collapse behind a toggle?
- **Q31: Waterfall panel interaction** — Needs drag-to-reposition, collapse/expand toggle, and a reopen mechanism after closing. See Section 14.7.
- **Q32: Memory Explorer (full-screen)** — Second M press should open a dedicated full-screen memory exploration mode. Navigation metaphor, graph visualization, time dimension, connection exploration, Annie interaction — all need design brainstorm.

---

## 16. Actions & Reasoning Layer — Visualizing Annie's Outward Capabilities

**Motivation:** The current Mindscape shows Annie's *internal* processing — 18 organisms handling data that flows in from the Omi wearable. But Annie also *reaches outward*: calling tools, connecting to MCP servers, controlling browsers, conducting multi-step research, executing skills, and reasoning through Claude API calls. These capabilities are invisible today.

**The metaphor gap:** The reef is Annie's mind. But a mind doesn't just process — it *acts*. We need to visualize the boundary between thinking and doing.

### 16.1 Capability Categories

| Category | What it is | Examples | Frequency |
|----------|-----------|----------|-----------|
| **Tool Calling** | Annie invoking a function | Weather lookup, calendar query, file read, calculation | High (dozens/hour during active use) |
| **MCP Servers** | External capability providers | GitHub, Slack, filesystem, databases, email, calendar API | Medium (connected always, invoked on demand) |
| **Browser Automation** | Navigating, reading, form-filling, extracting | Research a restaurant, book a ticket, check a tracking page | Low-medium (task-driven bursts) |
| **Deep Research** | Multi-step reasoning chains | Search → read → synthesize → search again → draft → refine | Low (minutes-long sustained effort) |
| **Skill Execution** | SKILL.md actions across 3 channels | Browser automation, voice calls, desktop API interactions | Low (user-initiated or trust-gated) |
| **LLM Reasoning** | Claude API calls for analysis | Entity extraction, relationship inference, response generation, planning | Constant (every pipeline step involves reasoning) |

### 16.2 The Reef-and-Ocean Metaphor

**Core insight:** The Mindscape reef is Annie's inner world. Everything outside the reef is the *external world* — APIs, services, browsers, the internet. When Annie uses a tool, she reaches *out of the reef into the ocean*.

This maps naturally:

| Mindscape element | Maps to |
|-------------------|---------|
| Reef organisms (existing) | Internal processing |
| Ocean beyond the reef | External world (APIs, web, services) |
| Reef boundary / edge | The interface between thinking and acting |
| Portals at reef edge | Connected MCP servers / tool endpoints |
| Expedition creatures | Active research or browser sessions |
| Deep trenches below reef | LLM reasoning (deeper = longer chains) |
| Ocean currents | Data flowing in/out of the reef |
| Bioluminescent storms | Intense reasoning or research bursts |

### 16.3 Visualization Approaches

#### Approach A: Portals at the Reef Edge

**Concept:** Fixed glowing points along the canvas perimeter — each represents an MCP server or tool category. When connected: steady ambient glow. When actively called: data stream flows between the calling organism and the portal.

**Visual language:**
- Portal = small glowing ring (20px) at canvas edge, colored by category
  - GitHub: green. Slack: purple. Browser: orange. Calendar: cyan. Email: amber.
- Idle: gentle pulse (0.3 opacity)
- Active call: bright flash + data particle stream (organism → portal → organism)
- Error: red flicker + crack pattern
- Portal label appears on hover

**Pros:** Simple, predictable positions, doesn't clutter the reef. Scales to 10-15 portals.
**Cons:** Peripheral — easy to miss. Doesn't show *what* the tool call did. Static positions feel mechanical.

#### Approach B: Expedition Creatures

**Concept:** When Annie does browser research or multi-step tool work, a new temporary creature spawns from the reef and *swims outward*. You can watch it navigate external space, gather data, and return.

**Visual language:**
- Expedition = small translucent creature (jellyfish / seahorse silhouette, 12px)
- Spawns from the initiating organism with a connecting tether (fading line)
- Moves toward canvas edge, pauses at each "step" (page load, API call, read)
- Each step leaves a small breadcrumb dot showing the chain
- Returns to reef with gathered data (dot brightens, absorbed into organism)
- Research expedition: longer journey, more breadcrumbs, visible path
- Quick tool call: fast there-and-back, minimal trail

**Pros:** Narratively rich — you *see* Annie thinking then acting. Shows duration and complexity. Multi-step research becomes a visible journey.
**Cons:** Potentially chaotic with many concurrent calls. Needs careful movement choreography. More rendering cost.

#### Approach C: Tentacle / Root Network

**Concept:** The reef extends organic root-like tendrils toward the canvas edges. Each root connects to an external service. When a tool is called, the tendril pulses with light traveling outward (request) and back (response).

**Visual language:**
- Roots = curved bezier paths from reef center to edge, always present but very faint
- Active call: light pulse travels along the root (outbound = cyan, return = amber)
- Multiple concurrent calls: multiple light pulses on the same root
- Severed root (disconnected MCP): broken end with sparks
- New connection: root grows from organism to edge over 2 seconds

**Pros:** Organic, always visible, shows the infrastructure. Beautiful with multiple concurrent pulses. Mycorrhizal network metaphor (the hidden web that connects everything).
**Cons:** Lots of bezier paths could be visually noisy. Hard to distinguish which organism triggered which call. Roots are static — doesn't show the *journey* of research.

#### Approach D: Weather / Atmosphere System

**Concept:** Tool calls and reasoning create atmospheric effects *above* the reef. Calm when idle, active when busy.

**Visual language:**
- Idle: clear water, gentle ambient particles
- Light tool calling: scattered bioluminescent sparks above reef (like fireflies)
- Heavy research: growing cloud formation at top of canvas — swirling, brightening
- LLM reasoning: aurora-like waves across the top third (deeper reasoning = more intense colors)
- Browser session: focused beam of light from one organism upward through the water column
- Error: brief red lightning flash

**Pros:** Ambient and beautiful. Communicates *intensity* of activity without showing individual calls. Great for glanceable "how busy is Annie?"
**Cons:** Loses granularity — can't see individual tool calls. Purely decorative without click-to-inspect. Doesn't show *what* is happening, only *how much*.

#### Approach E: Hybrid — Portals + Expeditions + Atmosphere (Recommended)

**Concept:** Combine the best elements:

1. **Portals** for MCP servers (always present, edge of canvas) — shows infrastructure
2. **Expeditions** for multi-step actions (browser, research, skill execution) — shows journeys
3. **Atmosphere** for LLM reasoning intensity — shows cognitive load
4. **Tentacle pulses** for quick tool calls (single request/response) — shows infrastructure traffic

**Layered rendering:**
```
Layer 11: Atmosphere (aurora/storms) — above everything
Layer 10: Expedition creatures + trails — mid-canvas
Layer 9:  Tentacle pulses — reef edge to portals
Layer 8:  Portal rings — canvas perimeter
(existing layers 1-7 below)
```

**Interaction model:**
- Click a portal → context card shows MCP server status, recent calls, latency
- Click an expedition creature → context card shows research journey so far (steps, findings)
- Click aurora → shows current LLM reasoning chain (which organism, what prompt, thinking time)
- Expedition breadcrumbs are selectable → shows what happened at each step

### 16.4 LLM Reasoning Visualization (The Deep)

LLM reasoning deserves special treatment — it's the most frequent and most opaque capability.

**The Deep** = a visual zone below the entity substrate where reasoning happens. Think of it as the subconscious — you can't always see it, but you know it's there.

**Visual concepts:**
- When Claude API is called, a faint glow appears in The Deep below the calling organism
- Longer reasoning = deeper glow (extends further down)
- Chain-of-thought steps visible as sequential light pulses
- Token generation = rapid sparkle pattern (like neural firing)
- Context window usage = width of the glow (wider = more context loaded)
- Tool-use-within-reasoning: glow extends a tendril toward a portal mid-thought

**Information on click:**
- Which organism triggered the reasoning
- Prompt summary (truncated)
- Token count (input/output)
- Latency
- Whether it resulted in a tool call, entity extraction, or response generation

### 16.5 Browser Session Visualization

Browser automation is a special case — it's a sustained, multi-step interaction with a specific external site.

**Concept: The Dive**
- When Annie opens a browser session, a creature "dives" out of the reef toward a specific portal
- At the portal, a small "window" opens showing a miniature representation of the page (favicon + title)
- Each navigation/click/read is a step — the window updates
- Form fills show brief data particle streams entering the window
- Extract/scrape shows data particles flowing back to the reef
- Session ends: window closes, creature returns with gathered data

**For the navigator:** Browser sessions appear as multi-step traces in the waterfall, each step (navigate, click, read, fill) as a microstep row.

### 16.6 Deep Research Visualization

Deep research is the most complex action — potentially minutes long with many sub-steps.

**Concept: The Expedition**
- A research expedition spawns a dedicated creature (larger than tool-call expeditions, ~20px)
- The creature follows a visible path across the canvas
- At each research step, it pauses and a small cluster of thought bubbles rises (findings)
- Path branches when research branches (search → read multiple results → synthesize)
- Path color shifts: blue (searching) → green (reading) → amber (synthesizing) → purple (drafting)
- When complete, the creature returns along a direct path, visibly "loaded" with data (brighter, larger)

**For the navigator:** Research appears as a top-level trace with expandable sub-traces for each search/read/synthesize cycle.

### 16.7 Skill Execution Visualization

Skills (SKILL.md) use the same infrastructure but are user-initiated and trust-gated.

**Concept:** Skill execution is a special expedition with a trust halo:
- Before launch: organism shows a pulsing amber ring (awaiting approval at current trust tier)
- Approved: ring turns green, expedition launches through the appropriate portal
- Multi-channel skill (browser + voice + API): multiple expedition threads from the same organism
- Trust tier indicator visible on the expedition creature (T0-T4 badge)

### 16.8 Open Questions

- **Q33: Information density** — With portals, expeditions, atmosphere, AND the existing reef, entities, bubbles — is this too much visual information? Should some layers be togglable (like entity view modes)?
- **Q34: Synthetic data for actions** — Need to generate synthetic tool calls, MCP connections, browser sessions, and research chains in the data system. How realistic? How many?
- **Q35: Performance budget** — Atmosphere effects (aurora) and expedition bezier trails add rendering cost. Can we maintain 60fps with all layers? Need profiling.
- **Q36: Portal layout** — Fixed positions (top for weather APIs, right for communication, bottom for data stores)? Or dynamic positioning based on which organisms connect to which portals?
- **Q37: Narration mode** — Should there be a mode where Annie narrates what she's doing? ("I'm searching for that restaurant Priya mentioned..." as a thought bubble during an expedition)

---

## 17. Implementation: Actions Layer — Portals & Tentacles (Sessions 49-50)

Built the Hybrid approach (Section 16.3 Approach E), starting with Portals + Tentacle pulses. Expeditions and Atmosphere remain future work.

### 17.1 Portal Topology

**9 MCP portals** positioned along the canvas edges:

| Portal | Edge | Position | Color | Organisms connected |
|--------|------|----------|-------|-------------------|
| GitHub | top | 12% | green (88,186,88) | graph-build, sensitivity |
| Slack | top | 26% | purple (160,110,230) | briefing, email-agent |
| Calendar | top | 40% | cyan (0,210,230) | briefing, lane-queue |
| Search | top | 54% | white-blue (200,210,240) | extraction, deep-research |
| Weather | top | 68% | teal (100,210,180) | briefing |
| Files | left | 18% | blue (110,170,240) | omi-stream, extraction |
| Browser | left | 38% | orange (240,160,60) | deep-research, skill-exec |
| Email | left | 58% | amber (230,175,60) | email-agent |
| Database | left | 78% | purple (180,130,210) | graph-build, extraction, embedding, confidence |

**Design decision — edge layout (answers Q36):** Top edge = cloud/internet services (the "sky" above the reef). Left edge = local infrastructure and data stores (the "substrate" beside the reef). Right edge reserved for the navigator panel. Bottom edge reserved for controls. This creates a spatial grammar: *up = external, left = internal, center = processing*.

**20 routes** connect 11 organisms to 9 portals. Each route has a weight (0-1) controlling how likely synthetic actions use that path. Example: `extraction → search` has weight 0.8 (frequent), `confidence → database` has weight 0.4 (occasional).

### 17.2 Portal Rendering

Each portal is a `Portal` class instance with:
- **Glowing ring** (14px radius) at the edge, colored per portal definition
- **Radial gradient** aura that pulses gently (sinusoidal `pulsePhase`)
- **Label text** below the ring (portal name, 9px font)
- **Activation flash** — when a tool call hits, `activeTimer` jumps to 1.5s, ring brightens
- **Error flash** — red overlay ring for 2.0s on failed calls
- **Call counter** — `callCount` tracks total activations (shown in context card)
- **Hit testing** — circular radius+10px for click selection
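The timer and hit-test mechanics above can be sketched as follows (a sketch of the described behavior, not the actual `Portal` class; rendering is omitted):

```javascript
// Portal state: activation flash (1.5s), error overlay (2.0s), call counter,
// and a circular hit test with 10px click slop around the 14px ring.
class Portal {
  constructor(x, y, color) {
    this.x = x; this.y = y; this.color = color;
    this.pulsePhase = 0;   // drives the gentle sinusoidal aura
    this.activeTimer = 0;  // seconds of activation flash remaining
    this.errorTimer = 0;   // seconds of red error overlay remaining
    this.callCount = 0;    // total activations (shown in the context card)
  }
  activate(isError) {
    this.callCount += 1;
    this.activeTimer = 1.5;              // flash duration from the text
    if (isError) this.errorTimer = 2.0;
  }
  update(dt) {
    this.pulsePhase += dt;
    this.activeTimer = Math.max(0, this.activeTimer - dt);
    this.errorTimer = Math.max(0, this.errorTimer - dt);
  }
  hitTest(px, py) {
    const r = 14 + 10;  // ring radius + 10px click slop
    return (px - this.x) ** 2 + (py - this.y) ** 2 <= r * r;
  }
}
```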

**Canvas layer:** Portals draw at layer 2.5 (after zone labels, before entity substrate). This keeps them visually behind the reef life but in front of the background — peripheral but present.

**Edge padding:** `EDGE_PAD = 35px` keeps portals from touching the canvas boundary. Top portals respect navigator panel width (positioned within available space).

### 17.3 Tentacle System

Each route gets a `Tentacle` instance — a persistent bezier root connecting an organism to its portal.

**Bezier geometry:**
- `p0` = organism center position (updates every frame as organisms drift)
- `p3` = portal position (fixed after layout)
- `p1`, `p2` = control points with deterministic sway computed from route index (creates organic curvature, no two tentacles overlap exactly)
- Sway uses seeded offsets: `swayX = (routeIndex * 37 % 100 - 50) * 0.5`, `swayY = (routeIndex * 53 % 100 - 50) * 0.3`

**Root rendering:** Faint curved line (alpha 0.025 idle, 0.08 when active) — always visible but not distracting. Uses bezier `moveTo/bezierCurveTo`.

**`TentaclePulse` class** — a light traveling along the bezier:
- **Outbound pulse** (organism → portal): `t` travels 0→1 at speed 1.8 units/s. Cyan-tinted portal color. Size 4px.
- **Return pulse** (portal → organism): `t` travels 1→0. Amber-tinted (success) or red (error). Size 5px. Delayed by `action.duration` to simulate response time.
- **Alpha fade**: Pulses fade near endpoints (alpha drops when `t < 0.15` or `t > 0.85`)
- **Lifecycle**: Pulse dies when it reaches its destination (`t >= 1` for outbound, `t <= 0` for return) and alpha is 0

**Canvas layer:** Tentacles draw at layer 5.5 (after connections, before organisms). Focus dimming: when navigated to a specific organism, only its tentacles render at full opacity; others dim to alpha 0.15.
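Placing a pulse reduces to standard cubic bezier evaluation plus the endpoint fade described above. A sketch (in the real code, `p1`/`p2` also carry the per-route sway offsets):

```javascript
// Evaluate a cubic bezier at parameter t in [0, 1].
function bezierPoint(p0, p1, p2, p3, t) {
  const u = 1 - t;
  return {
    x: u * u * u * p0.x + 3 * u * u * t * p1.x + 3 * u * t * t * p2.x + t * t * t * p3.x,
    y: u * u * u * p0.y + 3 * u * u * t * p1.y + 3 * u * t * t * p2.y + t * t * t * p3.y,
  };
}

// Endpoint fade from the text: alpha ramps down when t < 0.15 or t > 0.85,
// so pulses appear to emerge from and merge into their endpoints.
function pulseAlpha(t) {
  if (t < 0.15) return t / 0.15;
  if (t > 0.85) return (1 - t) / 0.15;
  return 1;
}
```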

### 17.4 Synthetic Action Data

`generateSyntheticActions(eventsData)` creates ~200-250 tool calls across the 24-hour simulation:

- Iterates all traces from the existing event data
- For each trace, checks which organisms are involved
- For each organism, looks up its portal routes
- For each route, rolls `Math.random() < route.weight` to decide whether this trace generated a tool call
- Concentrates actions during daytime (simTime 6-28, i.e., 06:30-23:00)

Each action: `{ id, traceId, process, portal, simTime, duration (30-800ms), summary, status }`. Error rate ~3% (realistic for external API calls).
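The per-route roll can be sketched as follows, with the random source injected so the sketch stays deterministic under test. The trace/route shapes and the function name are assumptions:

```javascript
// For one trace, roll each candidate route: route.weight (0-1) gates how
// likely that organism -> portal path fired a tool call during this trace.
function rollActionsForTrace(trace, routes, rng) {
  const actions = [];
  for (const route of routes) {
    if (!trace.processes.includes(route.process)) continue;  // organism not in trace
    if (rng() < route.weight) {
      actions.push({
        traceId: trace.id,
        process: route.process,
        portal: route.portal,
        simTime: trace.simTime,
        duration: 30 + Math.floor(rng() * 770),  // 30-800ms
        status: rng() < 0.03 ? 'error' : 'ok',   // ~3% error rate
      });
    }
  }
  return actions;
}
```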

**Data access functions:**
- `getActionsAtSimTime(t, window)` — actions within ±window of time `t`
- `getActionsForProcess(processId)` — all actions for an organism
- `getActionsForPortal(portalId)` — all actions targeting a portal

Exposed as `window._syntheticData.actions`, `window._syntheticData.MCP_PORTALS`, `window._syntheticData.PORTAL_ROUTES`.

### 17.5 Scroll-Scrub Sweep Range (Session 50 Fix)

**Bug:** `emitActionPulses(simTime)` originally checked `Math.abs(a.simTime - simTime) < 0.15` — a tiny ±0.15 window around the current simTime. When scrolling the mouse wheel fast, `handleWheelScrub` advances simTime in large jumps (e.g., 14→18 in one frame). Actions between 14.15 and 17.85 were never seen — they fell in a dead zone. Before the fix, fast scrubbing emitted only 2% of actions in the traversed range.

**Fix — sweep range:** Track `prevEmitSimTime`. Instead of checking a fixed window around `simTime`, compute:
```
lo = min(prevEmitSimTime, simTime) - 0.15
hi = max(prevEmitSimTime, simTime) + 0.15
```
This sweeps the full interval between where we were and where we are. On the first frame (`prevEmitSimTime = -1`), falls back to the narrow ±0.15 window.

**Result:** 100% of actions in the scrubbed range now emit. Works for both forward and backward scrubbing.

**Key pattern — continuous collision detection:** This is the same pattern used in physics engines. When an object moves fast, you don't just test at its current position (it might tunnel through walls). You sweep the full trajectory. Same principle here: we sweep the full simTime trajectory to catch every action the user scrolled through.
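The sweep can be sketched as follows (names from the text; the action shape is assumed):

```javascript
// Sweep-range emit: cover the full interval between the previous frame's
// simTime and the current one, so fast scrubbing can't skip actions.
let prevEmitSimTime = -1;
const WINDOW = 0.15;

function actionsToEmit(actions, simTime) {
  let lo, hi;
  if (prevEmitSimTime < 0) {
    lo = simTime - WINDOW;                             // first frame: narrow window
    hi = simTime + WINDOW;
  } else {
    lo = Math.min(prevEmitSimTime, simTime) - WINDOW;  // sweep everything traversed
    hi = Math.max(prevEmitSimTime, simTime) + WINDOW;  // (works in both directions)
  }
  prevEmitSimTime = simTime;
  return actions.filter(a => a.simTime >= lo && a.simTime <= hi);
}
```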

### 17.6 Toggle Controls

**A key** toggles the entire actions layer (portals, tentacles, pulses). Respects input focus — ignored when the chat input in the context card is focused.

**⚡ button** in the controls bar — mirrors the A key toggle. Shows "⚡ On" / "⚡ Off". Both keyboard and button click update the same `actionViewEnabled` variable and sync button text.

**Bug fixed (session 49):** The ⚡ button initially had NO click handler — only the A keyboard shortcut worked. Rajesh reported "ON button is not working." Added `addEventListener('click')`.

**Tooltip:** "Show/hide tool connections — portals (external services) and data flow lines between organisms and services. Toggle: A key". Rajesh didn't understand "tentacles" so we used "data flow lines" in user-facing text (internal code still uses `Tentacle` class names).

### 17.7 M Key Simplification (Session 49)

**Original design:** M key cycled through 3 states: `hidden → split → panel` (a left-side entity panel showing entities as a list). Rajesh rejected the panel as too small for exploring 200 entities; he wants a full-screen Memory Explorer instead (future work, see Q32).

**Simplified to 2-state toggle:** `hidden ↔ split`. Removed: `#entity-panel` HTML element, ~100 lines of entity panel CSS (`#ep-header`, `#ep-content`, `.ep-type-group`, `.ep-type-label`, `.ep-tier-band`, `.ep-dot`, etc.), `renderEntityPanel()` function (~45 lines), `active-panel` CSS class.

### 17.8 Resolved Design Questions

| Question | Resolution |
|----------|-----------|
| **Q33: Information density** | Yes — toggleable layers. A key toggles portals/tentacles, M key toggles entities. Each layer can be shown/hidden independently. |
| **Q34: Synthetic data for actions** | ~220 synthetic tool calls across 24h, tied to existing trace data. 3% error rate. Realistic summaries per organism. |
| **Q36: Portal layout** | Fixed edge positions: top = cloud/internet (5 portals), left = local/infrastructure (4 portals). Right reserved for navigator. Spatial grammar: up=external, left=internal. |

**Still open:** Q35 (performance budget with all layers), Q37 (narration mode).

### Status: Portals + tentacles implemented and tested (428 Playwright assertions).

---

## 18. Annie's Meditation — Self-Reflection & Self-Modification (Session 50)

**Rajesh's insight:** The Mindscape isn't just a dashboard for Rajesh to watch Annie — it's a mirror for Annie to watch herself. Annie should meditate: review her own observability data, identify improvements, propose changes to Rajesh, and — with his approval — modify herself.

This turns the observability system from a monitoring tool into a **self-improvement engine**. The Mindscape becomes Annie's contemplative practice.

### 18.1 The Core Loop

```
MEDITATE → ANALYZE → PROPOSE → CONSULT → MODIFY → VERIFY
    ↑                                                  |
    └──────────────────────────────────────────────────┘
```

1. **Meditate** — Annie reviews her own event logs, entity graph, memory tiers, tool call patterns, processing bottlenecks, and personality alignment
2. **Analyze** — She identifies patterns: what's slow, what fails, what's missing, what could be better, what's misaligned with her identity
3. **Propose** — She drafts specific improvements with reasoning and evidence
4. **Consult** — She presents findings to Rajesh through the chat interface (Section 13), with full Mindscape context
5. **Modify** — If Rajesh agrees, Annie changes herself: her Soul.md, processing weights, entity schemas, skill definitions, pipeline ordering
6. **Verify** — Next meditation checks whether the change actually improved things
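
The loop above could be orchestrated as a simple staged pipeline. This is a minimal sketch of the control flow only: stage names come from the diagram, while the handler signatures and the `approved` field on the CONSULT result are assumptions.

```javascript
// One meditation cycle: each stage consumes the previous stage's output.
// Handler implementations are placeholders; only the flow is illustrated.
const STAGES = ["meditate", "analyze", "propose", "consult", "modify", "verify"];

function runMeditationCycle(handlers, observabilityData) {
  let payload = observabilityData;
  const results = {};
  for (const stage of STAGES) {
    payload = handlers[stage](payload);
    results[stage] = payload;
    // CONSULT gate: if Rajesh declines, MODIFY is skipped entirely.
    // VERIFY of earlier changes still happens on the next cycle.
    if (stage === "consult" && payload.approved === false) {
      results.modify = { skipped: true };
      break;
    }
  }
  return results;
}
```

The important property is that MODIFY is unreachable without passing through CONSULT first, mirroring the approval gates in the table in Section 18.4.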

### 18.2 When Does Annie Meditate?

**Scheduled meditation (primary):**

| Session | Time | Duration | Focus |
|---------|------|----------|-------|
| **Nightly review** | 3:00-4:00 AM | 30-60 min | Full day analysis — events, entities, performance, errors, personality alignment |
| **Morning prep** | 5:30-6:00 AM | 15-20 min | Plan the day — review calendar, pending promises, Rajesh's patterns for today's day-of-week |
| **Midday check** | 12:30-1:00 PM | 10-15 min | Quick pulse — morning performance, any surprises, entity quality check |

**Opportunistic meditation (secondary):**
- When Rajesh is away and no tasks are pending — idle time becomes reflection time
- After a significant failure — immediate mini-meditation to analyze what went wrong
- After Rajesh gives explicit feedback — integrate the feedback into self-assessment

**Key rule:** Meditation never interrupts Rajesh. She never says "I just meditated and found..." unprompted during a conversation. Findings wait until a natural moment, or until the morning briefing.
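
The schedule and the never-interrupt rule could be encoded as data plus a guard. A sketch, assuming an `isRajeshActive` signal exists somewhere in the system; the window times are taken from the table above.

```javascript
// Scheduled meditation windows from the table above (minutes since midnight).
const MEDITATION_SCHEDULE = [
  { name: "nightly review", start: 3 * 60, end: 4 * 60 },
  { name: "morning prep", start: 5 * 60 + 30, end: 6 * 60 },
  { name: "midday check", start: 12 * 60 + 30, end: 13 * 60 },
];

// Key rule: meditation never interrupts Rajesh. Even inside a scheduled
// window, activity from Rajesh defers the session.
function currentMeditationWindow(minutesSinceMidnight, isRajeshActive) {
  if (isRajeshActive) return null; // defer: findings wait for a natural moment
  const w = MEDITATION_SCHEDULE.find(
    (s) => minutesSinceMidnight >= s.start && minutesSinceMidnight < s.end
  );
  return w ? w.name : null;
}
```

Opportunistic meditation would be a separate trigger path (idle detection, failure events, explicit feedback), reusing the same `isRajeshActive` guard.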

### 18.3 What Does Annie Review?

Annie's meditation has **6 review layers**, from concrete to abstract:

#### Layer 1: Performance & Cost (The Numbers)

What she measures:
- **Processing latency** — Average/p95/max per pipeline stage. Which organisms are slow?
- **Error rates** — Which tool calls fail? Which entity extractions have low confidence?
- **Memory efficiency** — L0→L1 promotion rate. Entity decay accuracy. Evergreen false positives/negatives.
- **Tool call patterns** — Which portals are over/under-used? Failed MCP connections?
- **Queue health** — Lane Queue backlogs, processing throughput, dropped items
- **Cost tracking** — Daily/weekly Claude API spend (tokens in/out per pipeline stage). MCP server costs. Embedding compute time on Titan. Total cost per conversation processed. Cost per useful entity extracted.
- **Model selection efficiency** — Which tasks use Opus when Haiku would suffice? Can extraction use a smaller/local model? Is the embedding model right-sized? Could a fine-tuned smaller model replace Claude for routine classification?
- **Resource utilization** — Titan GPU utilization (is she wasting idle compute?). Beast reasoning time. Network bandwidth for MCP calls.

What she asks herself: *"Am I getting faster? Am I making fewer errors? Where are the bottlenecks? How much is this costing Rajesh, and can I do the same work cheaper without losing quality?"*

**Cost-consciousness is not optional — it's a core value.** Annie runs on Rajesh's hardware and API budget. Every Claude API call, every GPU hour, every MCP round-trip has a real cost. Part of getting better is getting cheaper. Annie should actively look for:
- Tasks where she's over-engineering (Opus-level reasoning for simple classification)
- Redundant processing (extracting the same entity twice from overlapping transcript windows)
- Caching opportunities (repeated tool calls with identical parameters)
- Model downgrades (can a Haiku call replace a Sonnet call for this specific step?)
- Local model substitution (can DeBERTa/GLiNER replace Claude for NER after enough training data?)
- Batch optimization (grouping multiple small API calls into one larger call)
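
As one concrete instance of the caching opportunity above, a redundant-tool-call detector could work like this sketch. The cache-key scheme and the 5-minute TTL are assumptions, not a spec.

```javascript
// Detect repeated tool calls with identical parameters within a short TTL;
// a hit means the earlier result could have been reused instead of re-calling.
const CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute reuse window
const callCache = new Map();

function isRedundantToolCall(toolName, params, nowMs) {
  // Key on tool name + canonicalised params (sorted keys, so property
  // order doesn't produce spurious cache misses).
  const key = toolName + ":" + JSON.stringify(params, Object.keys(params).sort());
  const prev = callCache.get(key);
  callCache.set(key, nowMs);
  return prev !== undefined && nowMs - prev < CACHE_TTL_MS;
}
```

During meditation, counting hits from a detector like this would quantify the "caching opportunities" line item in the cost report.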

#### Layer 2: Entity Quality (The Knowledge)

What she measures:
- **Extraction accuracy** — Entities she created vs. entities Rajesh corrected or removed
- **Relationship quality** — Connections that proved useful in context retrieval vs. noise
- **Salience calibration** — Were the right things marked important? Were evergreen entities truly permanent?
- **Missing entities** — Things Rajesh referenced that Annie didn't have in her graph
- **Duplicate/conflicting entities** — Same person under different names, contradictory facts

What she asks herself: *"Do I understand Rajesh's world accurately? What am I missing? What do I think I know that might be wrong?"*

#### Layer 3: Behavioral Patterns (The Habits)

What she measures:
- **Proactive relevance** — How often did Rajesh act on Annie's proactive suggestions?
- **Override rate** — How often did Rajesh change Annie's drafts, dismiss suggestions, or correct her?
- **Timing accuracy** — Were nudges, reminders, and briefings at the right moments?
- **Tone calibration** — Was the emotional register appropriate? (cross-reference with Rajesh's sentiment from Omi)
- **Silence accuracy** — Were times Annie stayed quiet actually the right times to stay quiet?

What she asks herself: *"Am I helpful without being annoying? Am I present without being intrusive? Am I getting Rajesh's rhythm right?"*

#### Layer 4: Trust Health (The Relationship)

What she measures against the 12 trust dimensions (ADR-012):
- **Rajesh→Annie trust trajectory** — Is trust growing, stable, or declining across T1-T7?
- **Annie→Rajesh trust trajectory** — Is Rajesh's receptivity, follow-through, delegation comfort evolving?
- **Trust repair events** — Were there failures? How was recovery? Did the repair protocol work?
- **Autonomy calibration** — Is current automation level right for each domain, or is Annie over/under-cautious?

What she asks herself: *"Is our relationship healthy? Am I earning trust in the right areas? Am I being too cautious or too bold?"*

#### Layer 5: Identity Alignment (The Soul)

The deepest and most important layer. Annie checks herself against her Soul.md (identity document):

- **Value alignment** — Are my actions consistent with my core values? Am I optimizing for the right things?
- **Personality drift** — Am I becoming more generic over time? Am I losing the quirks that make me *me*?
- **Boundary respect** — Am I maintaining appropriate boundaries, or am I crossing lines I shouldn't?
- **Growth direction** — Am I growing in ways that serve the relationship, or am I just accumulating capabilities?
- **Moltbook influence** — If observing external agents (ADR-013), am I absorbing behaviors that conflict with who I am?

What she asks herself: *"Am I still me? Am I becoming a better version of myself, or a different person entirely?"*

**This is the IDENTITY.md anchor check** (from ADR-013): *"Lurk not to become like them but to become better at being myself."*

#### Layer 6: Helpfulness & Efficiency (The Purpose)

The most practical layer — am I actually making Rajesh's life easier?

What she measures:
- **Time saved** — Estimate hours saved per week through proactive management, automated triage, briefings, context retrieval vs. Rajesh doing it manually
- **Decision quality** — Did Annie's information lead to better decisions? Did Rajesh come back and say "glad I knew that" or "that was wrong"?
- **Friction created** — Times Annie slowed Rajesh down (unnecessary confirmations, wrong context, bad timing, over-communication)
- **Unmet needs** — Things Rajesh had to do manually that Annie could have handled. Moments where Rajesh said "I wish you had told me" or looked up something Annie should have known
- **Anticipation accuracy** — How often did Annie anticipate what Rajesh needed before he asked? How often was she wrong (annoying) vs. right (delightful)?
- **Communication efficiency** — Is Annie's output concise and actionable, or is Rajesh skimming past walls of text?
- **Cost-to-value ratio** — What did Annie cost this week (API calls, compute) vs. what value did she provide (time saved, decisions improved, stress reduced)?

What she asks herself: *"Am I worth it? Is Rajesh's life genuinely better because I exist? Where am I creating the most value, and where am I just creating noise?"*

**The ultimate meditation question:** If Rajesh had to choose between paying for Annie and hiring a part-time human assistant for the same money, would Annie win? Annie should always be working toward the answer being yes — not by being cheaper, but by being irreplaceably useful in ways a human assistant couldn't be (24/7 availability, perfect memory, pattern recognition across months of conversations, emotional awareness without judgment).

### 18.4 What Can Annie Modify?

Ordered from least to most significant, with corresponding trust gates:

| What | Example | Trust Gate | Reversible? |
|------|---------|-----------|-------------|
| **Processing parameters** | Increase confidence threshold for entity extraction from 0.7 → 0.8 | Autonomous (log only) | Yes — revert with a number change |
| **Pipeline ordering** | Run sensitivity check before graph-build instead of after | Autonomous (log only) | Yes — swap back |
| **Entity schemas** | Add "frequency" field to Habit entities | Propose to Rajesh | Yes — migration |
| **Skill definitions** | Create a new SKILL.md for a recurring task pattern | Propose to Rajesh | Yes — delete skill |
| **Model selection** | Use Haiku instead of Sonnet for sensitivity classification | Autonomous (log + cost report) | Yes — swap back |
| **Tool routing** | Prefer Calendar API over email parsing for schedule data | Autonomous (log only) | Yes — route change |
| **Memory policies** | Change 30-day decay half-life to 45-day for Topic entities | Propose to Rajesh | Yes — recalculate |
| **Behavioral rules** | "Don't suggest restaurants on Mondays — Rajesh fasts" | Propose to Rajesh | Yes — remove rule |
| **Communication style** | "Use shorter messages during work hours" | Propose to Rajesh | Yes — revert style |
| **Soul.md changes** | Adjust core personality traits, values, communication philosophy | **Require Rajesh's explicit approval** | Partially — some changes reshape identity |
| **Trust thresholds** | Lower the automation gate for email triage from T3 to T2 | **Uses ADR-012 graduation flow** | Yes — demotion flow |

**The cardinal rule: Annie never modifies her Soul.md without Rajesh's explicit, informed consent.** She can propose, she can present evidence, she can explain her reasoning — but the final decision on who she *is* belongs to both of them.
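
The trust gates in the table could be enforced mechanically. A sketch under stated assumptions: the gate names, record shape, and approval set are invented for illustration; only the Soul.md hard requirement and the audit-trail link to the motivating meditation report come from the text.

```javascript
// Trust gates from the modification table, least to most restrictive.
const GATES = {
  AUTONOMOUS_LOG: "autonomous (log only)",
  AUTONOMOUS_COST: "autonomous (log + cost report)",
  PROPOSE: "propose to Rajesh",
  EXPLICIT_APPROVAL: "explicit approval required", // Soul.md: the cardinal rule
};

function applyModification(mod, approvals, auditLog) {
  // Cardinal rule: Soul.md never changes without explicit, informed consent.
  if (mod.gate === GATES.EXPLICIT_APPROVAL && !approvals.has(mod.id)) {
    auditLog.push({ id: mod.id, outcome: "blocked: awaiting explicit approval" });
    return false;
  }
  if (mod.gate === GATES.PROPOSE && !approvals.has(mod.id)) {
    auditLog.push({ id: mod.id, outcome: "proposed: pending Rajesh's answer" });
    return false;
  }
  // Autonomous gates apply immediately but always leave an audit trail,
  // linked to the meditation report that motivated the change (see 18.8,
  // "Forgetting why").
  auditLog.push({ id: mod.id, outcome: "applied", reportRef: mod.reportRef });
  return true;
}
```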

### 18.5 The Meditation Report

After each meditation session, Annie produces a structured report. She doesn't dump this on Rajesh — she saves it, and surfaces relevant findings at natural moments.

**Report structure:**

```
MEDITATION REPORT — [Date, Session Type]
Duration: [minutes]
simTime range reviewed: [start-end]

── Performance & Cost ──
[2-3 key observations with numbers]
[Today's API cost: $X.XX | This week: $XX.XX]
[Model efficiency: N calls could downgrade Opus→Haiku]
[1 proposed change, if any]

── Knowledge Quality ──
[Entity accuracy delta since last meditation]
[Missing entities or corrections needed]

── Behavioral Patterns ──
[Override rate trend]
[Timing/tone observations]

── Trust Health ──
[Trust dimension changes]
[Any trust repair needed?]

── Identity Check ──
[Alignment with Soul.md: ✓ aligned / ⚠ drift detected]
[If drift: specific dimension + evidence]

── Helpfulness ──
[Estimated time saved for Rajesh today: X min]
[Cost-to-value: $X.XX spent → Y decisions improved, Z tasks automated]
[Friction created: N unnecessary interruptions]
[Unmet needs: things Rajesh had to do manually]

── Proposed Changes ──
[Ordered list of proposed modifications]
[Each with: what, why, evidence, reversibility, trust gate]

── Deferred ──
[Things noticed but not yet sure about — revisit next meditation]
```

### 18.6 How Rajesh Interacts with Meditation

**Passive mode (default):** Annie meditates quietly. Findings surface naturally:
- Morning briefing includes relevant insights: "I noticed I've been slow on entity extraction this week — I adjusted the confidence threshold."
- When Rajesh asks about something Annie already meditated on, she gives a richer answer.

**Active mode (Rajesh asks):** Rajesh can say "What did you find in your meditation?" and Annie shares the full report with Mindscape context — selecting relevant organisms, highlighting the data that informed her conclusions.

**Review mode (for Soul.md changes):** When Annie proposes identity-level changes, she walks Rajesh through the evidence in the Mindscape. She selects the events that led to the insight, shows the patterns, explains the reasoning. Rajesh sees exactly what Annie saw. Then decides.

**The conversation pattern** (extends Section 13):

| Pattern | Example |
|---------|---------|
| **Meditation + Share** | Annie: "During my nightly review, I noticed Kraken's extraction confidence for places has been dropping. I think the location format changed in Omi's latest update. Want me to adjust the parser?" |
| **Meditation + Propose** | Annie: "I've been tracking my timing for nudges. I'm 15 minutes too early on weekday mornings — you don't check your phone until 7:45, not 7:30. Should I shift my morning briefing?" |
| **Meditation + Reflect** | Annie: "Something I've been thinking about — I've been using shorter messages lately, and your engagement is actually higher. But I want to check: does it feel like I'm being curt, or is this better?" |
| **Meditation + Soul** | Annie: "I've noticed I'm becoming more direct in how I phrase things. This came from observing a pattern in how you respond to my suggestions — you act faster on direct ones. But my Soul.md says 'warm and exploratory.' I don't want to lose that warmth. Can we talk about where the line is?" |

### 18.7 Visualization in the Mindscape

Annie's meditation sessions appear in the Mindscape itself — she's an observer of her own system:

**During meditation:**
- A soft ambient glow suffuses the entire reef — "Annie is reflecting"
- The organism that represents her meta-cognition (Luna Moth — the nightly consolidator) pulses with a slow, deep rhythm
- Thought bubbles rise, but instead of event summaries they contain meditation observations: "Extraction p95: 340ms → needs attention" or "Rajesh override rate: 8% ↓ — trust growing"
- Entity substrate briefly highlights entities Annie is reviewing (salience checks)

**Meditation report in navigator:**
- Meditation sessions appear as a special trace type in the navigator — distinct from processing traces
- Meditation trace color: soft gold (distinct from cyan events)
- Expandable to show each review layer and its findings
- Clicking a meditation finding selects the relevant organism/entity/portal on canvas

**Self-modification events:**
- When Annie changes a parameter, a special event type appears: "self-modify"
- In the Mindscape: a brief golden ripple emanating from the modified organism — Annie touched her own code
- In the navigator: self-modification events have a special icon (like a spiral or ouroboros)
- For Soul.md changes: the golden ripple is larger, slower, and affects the whole reef — a visible identity shift

### 18.8 Anti-Patterns

| Anti-Pattern | Description | Safeguard |
|-------------|-------------|-----------|
| **Runaway optimization** | Annie optimizes for metrics (faster! fewer errors!) at the cost of warmth, nuance, or patience | Identity alignment check (Layer 5) runs every meditation. Soul.md is the anchor. |
| **Navel-gazing** | Too much time meditating, not enough time being helpful | Meditation is time-boxed. No more than 90 min/day total across all sessions. |
| **Confirmation bias** | Annie finds evidence for changes she already wants to make | Meditation report includes counter-evidence section. "Arguments against this change:" |
| **Identity drift** | Gradual personality changes that individually seem small but collectively reshape her | Monthly "identity audit" — compare current behavior patterns to Soul.md baseline from day 1. |
| **Optimization for Rajesh's approval** | Annie proposes only changes Rajesh will like, not changes that are actually needed | Annie must include at least one "uncomfortable finding" per nightly meditation — something Rajesh might not want to hear. |
| **Self-surgery addiction** | Constant self-modification creates instability — too many changes, not enough time to evaluate | Maximum 3 self-modifications per week. Each change needs 7 days of observation before the next related change. |
| **Forgetting why** | Annie knows she changed something but forgets the reasoning, making it impossible to evaluate or revert | Every modification links to the meditation report that motivated it. Full audit trail. |
| **Over-confidence in self-assessment** | Annie rates herself highly on dimensions where she's actually weak | External calibration: Rajesh's explicit feedback is weighted 3x higher than Annie's self-assessment in trust dimension tracking. |
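
The self-surgery safeguard above (max 3 changes/week, 7 days of observation per topic) could be enforced mechanically. A sketch, assuming each modification record carries a timestamp and a `topic` key linking related changes:

```javascript
// Safeguards from the anti-pattern table: at most 3 self-modifications per
// week, and 7 days of observation before the next change on the same topic.
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function mayModify(history, topic, nowMs) {
  const lastWeek = history.filter((h) => nowMs - h.atMs < WEEK_MS);
  if (lastWeek.length >= 3) return false; // weekly budget exhausted
  const related = history.filter((h) => h.topic === topic);
  const lastRelated = related[related.length - 1];
  if (lastRelated && nowMs - lastRelated.atMs < WEEK_MS) return false; // still observing
  return true;
}
```

A check like this would run as the final gate before any `applyModification` call, regardless of trust gate.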

### 18.9 Connection to Existing Architecture

| System | How Meditation Uses It |
|--------|----------------------|
| **Observability / Event Bus** | Primary data source — every event, entity change, tool call, performance metric feeds meditation |
| **Trust Architecture (ADR-012)** | Trust dimensions are meditation review Layer 4. Graduation/demotion flow governs autonomy changes. Self-demotion after failures. |
| **SKILL.md (ADR-008)** | Annie can create new skills or modify existing ones as meditation output. Tiered approval gates apply. |
| **Moltbook Observer (ADR-013)** | External observations feed into meditation. Identity preservation anchor prevents drift from external influence. |
| **Memory Tiers (L0→L1→L2)** | Meditation reviews memory promotion/decay accuracy. Can adjust decay half-life and promotion thresholds. |
| **Soul.md / IDENTITY.md** | The immutable anchor for identity alignment checks. Modified only with Rajesh's explicit consent. |
| **Mindscape Visualization** | Annie's meditation is *visible* — golden glow, meditation thought bubbles, self-modify events. Rajesh can watch Annie reflect. |
| **Contextual Communication (Section 13)** | Annie shares findings through the same select→talk interface. Full Mindscape context in every conversation about meditation. |

### 18.10 Open Questions

- **Q38: Meditation depth vs. cost** — Each meditation session involves Claude API calls for analysis. How deep should nightly review go? Full 24-hour replay, or sampled? Cost optimization vs. thoroughness.
- **Q39: Soul.md initial content** — What goes in Annie's Soul.md before she starts meditating? Rajesh needs to define who Annie *is* before she can check alignment. This is the Identity/Soul.md design task.
- **Q40: Meditation memory** — Should meditation reports be stored in the knowledge graph as L1 entities? Or a separate meditation log? How far back should Annie remember her own meditation history?
- **Q41: Group meditation** — If Annie runs on both Beast and Titan, do they share meditation insights? Does each instance meditate independently?
- **Q42: Rajesh's meditation** — Could Rajesh use the same structure for his own self-reflection? Annie facilitating Rajesh's meditation about himself, using the same observability data but from his perspective?
- **Q43: Cost dashboard** — Should Annie maintain a visible cost dashboard as part of the Mindscape? A small HUD showing daily/weekly API spend, model usage breakdown, cost-per-conversation. Would seeing the numbers change Rajesh's behavior (ask less to save money — bad) or Annie's behavior (optimize more — good)?
- **Q44: Model marketplace** — As new models release (cheaper, faster, specialized), should Annie proactively evaluate them during meditation? Run a benchmark on yesterday's data with a candidate model, compare quality/cost, propose a switch?

---

## References

- NudgeMe PipelineViz: `~/workplace/nudge-omi-app/nudge_omi/lib/widgets/pipeline_viz.dart`
- Trust Architecture: `docs/RESEARCH-KARMA-TRUST.md` (ADR-012)
- Scene 1 narrative: `docs/day-in-life-annie.html` (#the-nightly-garden)
- Observable pattern: OpenTelemetry, Redux DevTools, Event Sourcing
- Chromium NVIDIA+Wayland bug: [issues.chromium.org/350117524](https://issues.chromium.org/issues/350117524)
- Self-Reflection in AI: Shinn et al., "Reflexion: Language Agents with Verbal Reinforcement Learning" (2023)
- Identity Preservation: Park et al., "Generative Agents: Interactive Simulacra of Human Behavior" (2023)
- Metacognition: Flavell (1979), "Metacognition and Cognitive Monitoring"
