Research Lens Catalog

26 perspectives for extracting hidden insight from any body of research

Every research document contains more insight than any single reading reveals.


Most of us read research linearly: introduction, method, results, conclusion. This default path captures one dimension. The other dimensions — the assumptions nobody questioned, the constraints that could be relaxed, the applications no one considered — stay invisible.


These 26 lenses make the invisible visible. Each forces you to view familiar information from a carefully chosen angle, surfacing insights that a linear read always misses.


Pick 3–5 lenses that feel non-obvious for your research. The obvious ones confirm what you know. The uncomfortable ones show you what you've been missing. Click any card to mark it for your analysis.

Decompose

Strip to structure — understand what it's made of
01 layered diagram

First Principles X-Ray

"What must be true for this to work?"


  • Which assumptions are laws of nature and which are just habits?
  • What constraint does everyone treat as fundamental but is actually a choice?
  • If you could only keep 3 constraints, which 3 make the core idea work?
  • What would change if the "obvious" assumption turned out to be wrong?
Why this works

Humans confuse "how it's always been done" with "how it must be." Separating convention from physics reveals which constraints are voluntary. Most innovations come from questioning what everyone else treats as fixed.

02 multi-level zoom

Abstraction Elevator

"What do you see at each altitude?"


  • What's the one-sentence pitch at 30,000 ft?
  • What architectural decision is invisible at ground level but load-bearing?
  • What implementation detail changes the theoretical picture entirely?
  • Where does the abstraction leak?
Why this works

Different altitudes reveal different truths. A CTO sees value propositions; an engineer sees race conditions. Moving between levels catches leaky abstractions and hidden load-bearing decisions that only appear at specific zoom levels.

03 directed graph

Dependency Telescope

"What's upstream and downstream?"


  • What single upstream change would break this?
  • What downstream use case would this accidentally enable?
  • Where is the hidden single point of failure?
  • Which dependency is most likely to change in the next 2 years?
Why this works

Systems fail at their dependencies, not at their core. The most dangerous dependencies are the ones you didn't know you had — upstream changes that cascade silently and downstream consumers you didn't design for.
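The hidden-single-point-of-failure question can be made mechanical: remove each dependency in turn and check whether the system can still reach what it needs. A minimal sketch (the graph and service names below are hypothetical, not from any specific research):

```python
def reachable(graph, start, removed=None):
    """Return all nodes reachable from `start`, skipping one removed node."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node == removed:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen

def single_points_of_failure(graph, root, target):
    """Dependencies whose removal cuts `root` off from `target`."""
    candidates = reachable(graph, root) - {root, target}
    return {n for n in candidates
            if target not in reachable(graph, root, removed=n)}

# Hypothetical service graph: app -> auth -> db, with an optional cache.
deps = {"app": ["auth", "cache"], "auth": ["db"], "cache": []}
print(single_points_of_failure(deps, "app", "db"))  # → {'auth'}
```

Here the cache looks like a second path but never reaches the database, so only `auth` is load-bearing — exactly the kind of silent dependency this lens is meant to surface.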

04 heatmap / contour

Sensitivity Surface

"Which knob matters most?"


  • If you could only tune ONE parameter, which gives the biggest improvement?
  • What's surprisingly insensitive — matters less than people think?
  • What's catastrophically sensitive — tiny change causes system failure?
  • Where are the cliff edges — where a smooth gradient suddenly drops off?
Why this works

Not all parameters matter equally. Understanding which knobs have outsized effect lets you focus energy where it counts. Cliff edges — where smooth functions suddenly become discontinuities — are where production incidents live.
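The knob-ranking question can be operationalized: nudge each parameter by a small relative amount and rank parameters by how much the output moves. A sketch against a made-up cost model (the model and parameter names are illustrative assumptions, not a real system):

```python
def sensitivity(score, params, delta=0.05):
    """Rank parameters by how much a small relative nudge moves the score."""
    base = score(params)
    impact = {}
    for name, value in params.items():
        nudged = dict(params, **{name: value * (1 + delta)})
        impact[name] = abs(score(nudged) - base)
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

# Toy cost model where batch size dominates and thread count barely matters.
model = lambda p: p["batch"] ** 2 + 0.1 * p["threads"]
ranked = sensitivity(model, {"batch": 10, "threads": 8})
print(ranked[0][0])  # → batch
```

One-at-a-time probing like this misses interactions between knobs, but it is often enough to separate the catastrophically sensitive parameters from the surprisingly insensitive ones.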

Evolve

Trace the arc — understand where it came from and where it's going
05 annotated timeline

Evolution Timeline

"How did we get here and where are we going?"


  • What problem did the previous approach solve that this one takes for granted?
  • What's the repeating pattern in the transitions?
  • What external force (hardware, regulation, scale) will trigger the next evolution?
  • What would a researcher in 2030 find laughably primitive about today's approach?
Why this works

Technology evolves in recognizable patterns: a new constraint becomes the bottleneck, a new approach removes it, the next constraint surfaces. Seeing the pattern lets you anticipate the next transition before it arrives.

06 cascade tree

Second-Order Effects

"Then what?"


  • If this succeeds wildly, what new problem does success create?
  • What adjacent system will need to adapt?
  • What behavior will users develop that you didn't intend?
  • Three steps downstream — what world are we building?
Why this works

First-order thinking asks "will this work?" Second-order asks "then what?" The most impactful consequences are usually 2–3 steps downstream, invisible to a linear read. Every solution creates new problems worth anticipating.

Position

Map the landscape — understand where it sits among alternatives
07 2D scatter / bubble

Landscape Map

"Where does this sit among all the alternatives?"


  • What two axes best separate the competing approaches? (Go deeper than obvious features)
  • Which quadrant is suspiciously empty — and what would a solution there look like?
  • Is this research at the center of the map (compromise) or at an edge (extreme bet)?
  • What would shift everything on the map simultaneously? (A paradigm shift)
Why this works

Without a map, you can't see the empty spaces. Positioning reveals over-crowded quadrants (diminishing returns) and empty ones (unexplored opportunities). The most interesting point on any landscape map is usually the gap.

08 parallel mapping

Analogy Bridge

"What is this really, in a domain I already understand?"


  • What domain solves a structurally identical problem? (Biology, economics, city planning, music)
  • Where does the analogy break? (That's often where the real insight hides)
  • What solution exists in the analogy domain that hasn't been tried here?
  • If you described this to a chef/biologist/economist, what would they immediately suggest?
Why this works

The brain builds understanding through structural analogy. Mapping research to a domain you understand deeply lets you inherit that domain's intuitions — including solutions they've already discovered for structurally similar problems.

09 spider / radar chart

Tradeoff Radar

"What are you sacrificing, and is that the right sacrifice?"


  • Which tradeoff does the research community refuse to acknowledge?
  • Can any tradeoff be shifted by a fundamentally different approach (not just moved along the frontier)?
  • What tradeoff would the actual user choose differently than the researcher assumes?
  • Where is "good enough" dramatically cheaper than "optimal"?
Why this works

Every design choice is a bet about which tradeoffs matter. Making bets explicit reveals where the researcher's assumptions about priorities might diverge from yours. The tradeoff nobody discusses is often the most important one.

Stress-Test

Break & challenge — find what fails before it fails you
10 fishbone diagram

Failure Pre-mortem

"It's 6 months later and this failed. What happened?"


  • What's the most likely failure mode — the boring, predictable one?
  • What external change would make this approach obsolete overnight?
  • What would users complain about most?
  • What would the team wish they'd built differently?
Why this works

It's easier to explain failure in hindsight than predict it in foresight. Pre-mortems exploit this asymmetry: by imagining the project already failed, you unlock failure narratives that forward-looking risk analysis systematically misses.

11 threat model

Red Team Brief

"How would an adversary respond?"


  • How would a well-funded competitor counter this within 6 months?
  • How would a malicious user exploit this for unintended purposes?
  • What would a skeptical CTO ask that has no good answer today?
  • What plausible regulatory change would kill this approach?
Why this works

Failure analysis is accidental; red-teaming is intentional. A dedicated adversary finds vulnerabilities that passive analysis misses because they're actively trying to break the system, not just cataloging what might go wrong on its own.

12 do / don't pairs

Anti-Pattern Gallery

"What looks right but leads nowhere?"


  • What's the most seductive mistake a newcomer would make?
  • What "obvious improvement" actually makes things worse?
  • What worked in the paper but consistently fails in production?
  • What cargo-cult practice has grown around this approach?
Why this works

The most efficient way to learn is from others' mistakes. Every field develops common mistakes that look like best practices to newcomers. Cataloging anti-patterns saves months of dead ends and prevents reinventing known failures.

13 risk matrix

Constraint Analysis

"What assumptions must hold — and how fragile are they?"


  • Which constraint is most fragile — likely to break first?
  • Which constraint is artificially imposed and could be removed with effort?
  • What happens when two constraints conflict with each other?
  • Which constraint will technology relax within 3 years?
Why this works

Every system operates within constraints it doesn't acknowledge. When an implicit constraint changes (hardware gets cheaper, regulation shifts, user behavior evolves), systems built around the old constraint become liabilities overnight.

Generate

Create new ideas — use the research as raw material for innovation
14 mirror diagram

The Inversion

"What if you did the exact opposite?"


  • What if you optimized for the opposite metric?
  • What if the user did the work instead of the system (or vice versa)?
  • What if you solved this offline instead of online (or vice versa)?
  • What if you made this maximally simple instead of maximally capable?
Why this works

Humans anchor on the status quo. Considering the opposite breaks the anchor. Often the inversion isn't viable — but the process of considering it reveals hidden assumptions and sometimes surfaces a genuinely better approach nobody tried.

15 architecture diff

Constraint Relaxation

"What if the rules changed?"


  • What constraint, if removed, would change the architecture the most?
  • What if you accepted 10x worse on one metric — what would that buy on others?
  • What's constrained today that hardware trends will relax in 2 years?
  • What constraint does the user not actually care about?
Why this works

Much of engineering is optimization within constraints. Relaxing constraints changes the landscape entirely — sometimes a 10x relaxation on one metric buys a 100x improvement on another. The constraint your users care about least is your biggest opportunity.

16 combination matrix

Composition Lab

"What if you combined ideas that weren't meant to go together?"


  • What two ideas from different sections would create something new together?
  • What's the minimal combination that achieves 80% of the full system's value?
  • What combinations have been tried and abandoned — and has something changed since?
  • What combination would a practitioner from a different field naturally try?
Why this works

Individual ideas are well-explored; combinations are not. Most innovations are novel combinations of existing ideas, not genuinely new ones. The combinatorial space explodes (n ideas yield n(n-1)/2 pairs and 2^n possible subsets), so most interesting combinations have never been tried.
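The size of the untried space is easy to make concrete: enumerate the pairs and subtract what the literature has already attempted. A sketch with hypothetical idea names:

```python
from itertools import combinations

# Hypothetical ideas extracted from a body of research.
ideas = ["caching", "speculation", "quantization", "distillation"]

# Pairs already tried in prior work (hypothetical).
tried = {frozenset(p) for p in [("caching", "speculation")]}

untried = [pair for pair in combinations(ideas, 2)
           if frozenset(pair) not in tried]
print(len(untried))  # → 5 of the 6 possible pairs remain unexplored
```

Even with four ideas, most of the pair space is open; with twenty ideas there are 190 pairs, far more than any single research program will have examined.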

17 domain crosswalk

Transfer Matrix

"Where else would this idea thrive?"


  • What industry would benefit from this exact approach?
  • What if you applied this to a problem 1000x smaller? 1000x bigger?
  • What adjacent problem has the same mathematical structure?
  • What startup could you build using only the insights from this research?
Why this works

The most impactful innovations are often transplants: an idea that's mundane in one field is revolutionary in another. Structural similarity between domains is usually deeper than it appears, and the solution techniques transfer surprisingly well.

Apply

Decide & build — turn understanding into action
18 flowchart

Decision Tree

"Under what specific conditions is this the best choice?"


  • What's the first branching question that determines fit?
  • Under what conditions is this approach CLEARLY the wrong choice?
  • What's the minimum viable context where this starts to pay off?
  • What single change in requirements would flip the decision?
Why this works

"Is this good?" is a useless question. "Under what specific conditions is this the best choice?" is actionable. Decision trees force specificity about context, making recommendations precise instead of vague.

19 log-scale chart

Scale Microscope

"What changes at 10x? 100x? 1000x?"


  • What's linear, what's superlinear, what's sublinear as you scale?
  • Where's the phase transition — where a small increase causes a qualitative shift?
  • What breaks first when you scale up? What breaks when you scale down?
  • What works at prototype scale but fails at production scale (and vice versa)?
Why this works

Scaling behavior is not linear. Systems that work beautifully at small scale fail at large scale (and vice versa) because of phase transitions, resource contention, and emergent properties. Knowing where the transitions are is more valuable than knowing steady-state performance.
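The linear/superlinear/sublinear question can be answered empirically from two measurements at different sizes, assuming cost roughly follows a power law t ≈ c·n^k (the timings below are toy numbers, not real benchmarks):

```python
import math

def scaling_exponent(n1, t1, n2, t2):
    """Estimate k in t ≈ c * n**k from two (size, cost) measurements."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Toy timings: cost quadruples when input size doubles → roughly quadratic.
k = scaling_exponent(1_000, 2.0, 2_000, 8.0)
print(round(k, 2))  # → 2.0
```

k near 1 is linear, above 1 superlinear, below 1 sublinear; a sudden jump in the estimated exponent between adjacent size ranges is a hint that you have crossed one of the phase transitions this lens looks for.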

20 storyboard

Day-in-the-Life

"Walk me through a real scenario, minute by minute."


  • What does the first hour of adoption look like in practice?
  • When things go wrong, what does the debugging experience feel like?
  • What's the most tedious recurring task this creates?
  • What moment makes someone think "this was worth the switch"?
Why this works

Nothing reveals gaps like concrete specificity. A vague description of "improved workflow" can hide dozens of unsolved problems. Walking through a scenario minute by minute exposes them all and forces honesty about the actual experience.

Human

People & adoption — understand who this affects and how they'll respond
21 multi-perspective cards

Stakeholder Kaleidoscope

"Who sees what — and whose view are we ignoring?"


  • How does a user experience this vs. how a developer debugs it vs. how ops monitors it?
  • Where do stakeholder needs directly conflict?
  • Whose perspective is underrepresented in the research?
  • What would change if you designed for the secondary stakeholder first?
Why this works

The same system looks completely different to a user, a developer, an ops engineer, and a business owner. Designing for only one perspective creates a product that delights one stakeholder and frustrates the rest.

22 stepped diagram

Learning Staircase

"What's the path from 'what is this?' to 'I can extend this'?"


  • What's the minimum someone needs to know to get any value?
  • What's the biggest misconception that slows learning?
  • Where do people plateau, and what unsticks them?
  • What would a 15-minute demo look like vs. a 3-hour deep dive?
Why this works

The gap between "I understand the concept" and "I can use this in production" is where most promising research dies. Mapping the learning journey reveals where adoption gets stuck and where better documentation or tooling is needed most.

23 energy surface

Energy Landscape

"What resists change — and what would lower the barrier?"


  • What's the real switching cost (time, money, political capital)?
  • What "good enough" existing solution is this competing against?
  • Who has to say yes for adoption to happen — and what do they care about?
  • What catalytic event would suddenly lower the activation energy?
Why this works

Good ideas fail not because they're wrong but because the activation energy for adoption is too high. Understanding what resists change — switching costs, organizational antibodies, "good enough" incumbents — is as important as understanding the idea itself.

Discover

Find the gaps — see what everyone else is missing
24 capability checklist

Gap Finder

"What's not being said — and why?"


  • What question does the research carefully avoid answering?
  • What use case is conspicuously absent from all the examples?
  • What would a complete solution need that this doesn't provide?
  • What follow-up work does this implicitly assume someone else will do?
Why this works

What a researcher doesn't say is often more revealing than what they do say. Gaps aren't oversights — they're usually conscious scope decisions. But each gap is both a critical limitation to understand and an opportunity for follow-on work.

25 Venn / visibility map

Blind Spot Scan

"What's invisible because of where you're standing?"


  • What field-specific jargon might be hiding a simple concept — or a real confusion?
  • What assumption is so universal in this field that nobody questions it?
  • What would a complete outsider find strange or unnecessary?
  • If this research were done in a different country, industry, or decade — what would change?
Why this works

Every field, culture, and individual has systematic blind spots. The most valuable insights come from perspectives structurally excluded from the standard discourse. An outsider's "naive" question often points directly at the field's biggest hidden assumption.

26 branching inquiry map

Question Horizon

"What new questions become askable because of this research?"


  • What question would have been meaningless before this work but is now both askable and important?
  • What does this research ALMOST answer — close enough to be tantalizing, not enough to be satisfying?
  • What question, if answered next, would make this research 10x more valuable?
  • What question connects this work to a completely different field nobody has linked it to?
  • What would a curious outsider ask that insiders have stopped asking?
Why this works

Research doesn't just provide answers — it expands the space of meaningful questions. The most valuable output of reading research is often not the answers it gives but the questions it makes possible to ask for the first time. These "horizon questions" are the seeds of the next wave of innovation, and they're invisible until you deliberately look for them.

How to Use This Catalog

  1. Scan — Read through all 26 core questions. Click the 3–5 that make you slightly uncomfortable. Comfort means the lens will confirm what you know. Discomfort means it'll show you something new.
  2. Interrogate — For each chosen lens, work through every driving question against your specific research. Write answers even when they feel speculative — especially then.
  3. Visualize — Build each lens output as a single-page HTML document using the suggested output format on each card. The constraint of visual presentation forces clarity that prose alone doesn't demand.
  4. Cross-pollinate — Look across your lens outputs. Where do two different lenses point to the same gap? Where does one lens's answer create a new question for another? Convergence across lenses is the strongest signal for innovation.
  5. Extract — The innovations live in the intersections: where Lens A's gap meets Lens B's opportunity. Document these as explicit insight cards — each one is a candidate for action.