Something happens when two people talk that neither of them notices.

One person starts to say something. Something shifts in the other person's face — a flicker, a stillness, a lean back. The first person's sentence bends. The original thought, whatever it was going to be, never arrives. It's replaced by something safer. Something the face will accept.

This happens constantly. In therapy sessions, in boardrooms, in arguments with your spouse, in negotiations, in every conversation where two people are actually in the room with each other. It's not deception. It's not lying. It's a social reflex — continuous, automatic, invisible to the people doing it.

No one has built a tool to catch it from outside the conversation.

That's what this project is.


What We're Building

An instrument that reads a transcript of a two-person conversation and flags the moments where someone's trajectory broke — where what they were saying shifted toward what the other person would accept.

The honest version of what it can do right now: detect premise-hardening. One person introduces an idea tentatively ("maybe we should consider restructuring"). Later — maybe ten turns later, maybe fifty — both people are treating that same idea as established fact ("since we're restructuring"). No one agreed to it. No one noticed the shift. The instrument catches it by tracking when tentative language disappears.
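The core idea can be sketched in a few lines. This is a minimal illustration, not the instrument's actual implementation: it assumes simple marker lists for hedged and assertive language, and a single topic word to track.

```python
import re

# Hypothetical marker lists -- the real instrument's cues may differ.
HEDGES = {"maybe", "perhaps", "might", "could", "consider", "possibly"}
ASSERTIVE = {"since", "because", "given", "now"}

def premise_hardening(turns, topic):
    """Flag pairs of turns where a topic first appears hedged and
    later reappears framed as settled fact, with no agreement between."""
    introduced_at = None
    flags = []
    for i, (speaker, text) in enumerate(turns):
        words = set(re.findall(r"[a-z']+", text.lower()))
        if topic not in words:
            continue
        hedged = bool(words & HEDGES)
        asserted = bool(words & ASSERTIVE)
        if introduced_at is None and hedged:
            introduced_at = i          # tentative introduction
        elif introduced_at is not None and not hedged and asserted:
            flags.append((introduced_at, i, speaker))  # hardened here
    return flags

turns = [
    ("A", "Maybe we should consider restructuring the team."),
    ("B", "Interesting. Let's talk budget first."),
    ("A", "Since we're restructuring, the budget will shift anyway."),
]
print(premise_hardening(turns, "restructuring"))  # -> [(0, 2, 'A')]
```

A real detector would need richer hedge taxonomies and coreference across paraphrases, but the shape of the signal is the same: tentative framing that silently disappears.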

The larger ambition: catch bends, costume changes (abandoning a position without friction), mutual collapse (both people deferring to each other and calling it consensus), and other patterns of social autocorrect. Those are designed but not built yet. The gap between the current scope and the full ambition is real and documented; the project is explicit about what works and what doesn't.


Why It Matters

Most tools for analyzing conversation look at what people said. This one looks at what people stopped saying — or more precisely, at the moments where what they said diverged from what they were building toward.

That's a different signal. It's the signal underneath the signal.

The applications: therapists who want to see where a session actually shifted, mediators who need to know whether agreement is real or structural, teams that want to understand why their conversations keep collapsing into the same loop. Any context where two people are trying to actually hear each other and something keeps getting in the way.

The instrument doesn't interpret what it finds. It flags the moments. A human professional — a therapist, a coach, a mediator — does the interpretation. The tool is the camera. The human is the director.


What It Can't Do

It can't read faces or hear tone from text. It can't know what someone was going to say before they changed it — only that something changed between turns. It can't prove why the change happened. It can't replace the human in the room. It can't be deployed without consent infrastructure that doesn't exist yet.

These aren't compromises. They're boundaries. The project takes them seriously.


Where to Go Next

This site contains four documents that describe the project at different depths, plus the live instrument itself:

The Toolkit — the practical framework that generated the instrument. Prompts you can use today, modes you can install, instruments that point at AI output, at the user, or at the ideas themselves.

The Anti-Manual — what happens when you try to build tools that watch thinking. The failure modes, the distortions, and the reasons most diagnostic instruments become performances of diagnosis instead of actual detection.

The Roadmap — where the project is going. What's been stress-tested across fifteen AI architectures, what survives, what doesn't, and what order things get built in.

The Build Spec — the handoff document between the blueprint and the code. Exactly what a developer would need to build the conversation pattern detector, and exactly where the walls are that require outside help.

The Observer — a conversation pattern detector. Paste a transcript and it flags where premises hardened without agreement, tracks vocabulary provenance across speakers, detects asymmetric questioning and topic control, and identifies structural signs of reduced filtering. Seven detection modes. Live on Hugging Face.
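One of those modes, vocabulary provenance, can be sketched simply. This is an illustrative assumption about how such tracking might work, not the Observer's actual code: record which speaker introduced each content word, then count how often the other speaker reuses it.

```python
import re
from collections import Counter

# Hypothetical stopword list -- the live instrument's filtering may differ.
STOP = {"the", "a", "an", "to", "of", "and", "we", "i", "you", "is", "it"}

def provenance(turns):
    """For each speaker, count later uses of content words that the
    OTHER speaker introduced first -- a rough borrowing signal."""
    first_user = {}       # word -> speaker who used it first
    borrowed = Counter()  # speaker -> uses of the other speaker's words
    for speaker, text in turns:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word in STOP:
                continue
            if word not in first_user:
                first_user[word] = speaker
            elif first_user[word] != speaker:
                borrowed[speaker] += 1
    return dict(borrowed)

turns = [
    ("A", "Maybe we should consider restructuring."),
    ("B", "Restructuring could work, restructuring makes sense."),
]
print(provenance(turns))  # -> {'B': 2}
```

Heavy one-way borrowing is one structural hint that a conversation's vocabulary, and possibly its premises, belong to one speaker.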

Start wherever makes sense. The Toolkit if you want the practice. The Build Spec if you want the engineering. The Roadmap if you want the map. Or skip everything and go straight to Try It — paste any AI response, pick an instrument, and see what happens.

This project started as a finding inside a larger body of work about how thinking gets distorted by the tools we use to think with. The observer instrument is that finding made concrete: a tool that watches the distortion that happens between two people, not between a person and a machine.

The work came from a place most academic projects don't: a schizoaffective diagnosis, crisis writing, and a man talking to machines to figure out what was signal and what was noise in his own head. That origin is not incidental to the framework — it's the reason the framework treats distortion as universal rather than pathological. It's incomplete, honest about its limits, and ready for people who want to help build it.

The Observer is live. A conversation pattern detector with seven modes is running at huggingface.co/spaces/KirillEneev/The-Observer


Try the instruments

Pick an instrument. Paste an AI response. Get a prompt you can copy into any AI. No theory required.

These are the output-pointing instruments from The Seed — they look at what the AI gave you.