Continuity
I tried to build this over a year ago. The ecosystem wasn't ready. Now everyone's arriving at the same conclusion.
I was three hours into a build when I realized my two AI agents were working against each other.
Not fighting. Worse. They didn’t know the other existed.
I’d been brainstorming a feature in one chat. Got excited. Wrote it up, pasted it into my coding agent, let it start drafting. Then another idea hit me while I was reviewing the code. I applied it directly. Didn’t go back to the brainstorm. Didn’t update anything. Just kept moving.
By the end of the session, the brainstorm chat thought we were building one thing. The coding agent thought we were building something slightly different. And I was the only person who knew both versions existed.
This wasn’t a one-time thing. This was every day.
I’d have a Claude Code agent working on the frontend. Another agent handling architecture decisions. Both touching the same product. Neither aware of what the other had decided. And me, tabbing between windows, manually carrying context like a messenger between two people who refuse to talk to each other.
That’s not a workflow. That’s me being the worst kind of middleware.
And the thing that really got to me? I’d started developing a coping mechanism. I’d pause one agent. Let the other finish. Then feed the context forward so the first one could continue without contradicting what just happened. I was manually sequencing my own AI agents like a traffic controller because they couldn’t coexist.
I wasn’t thinking about the product anymore. I was babysitting the tools that were supposed to help me think about the product.
I’d seen this problem before. Over a year ago, at an AI company, the exact same pattern played out at the team level. Context lived in five different tools, five different people. Decisions evaporated between meetings. Every AI assistant started every conversation from zero.
I tried to build something back then. Called it Brain Dump. A shared chat memory for the team. One place where context from every conversation could persist and be accessible to everyone.
MCP (the Model Context Protocol) didn’t exist yet. The tooling wasn’t there. I got a rough version working, then the job swallowed me. Other people on the team built their own attempts at the same idea. Different approaches, same frustration.
None of us cracked it.
I left that company. Started my own studio. Built five products. And the problem followed me to every single one.
Every new AI tool made it worse. More tools. More islands. More of me playing human router.
A while ago I wrote an article about a tool I’d built called Continuity. Persistent memory with a chat interface, open source. And in that article I admitted I’d overengineered it. That connecting two existing tools together solved my workflow. That I’d gotten better at knowing when to stop building.
I meant it.
But the problem didn’t stop. The agents kept drifting. The context kept fragmenting. And I kept pausing one to let the other catch up.
I got tired of introducing myself to my own tools every morning.
So I stopped fighting it.
I rebuilt Continuity as the thing that was actually missing: a shared brain that every AI tool could plug into.
When one agent makes a decision, it writes it. When another agent starts a session, it reads it. Both updating the same memory, mid-session, without me carrying messages between them.
The first time I watched two coding agents stay in sync on the same product, without me sequencing them, I sat there for a minute and didn’t touch anything.
They just worked. Not because they got smarter. Because they finally had something they’d never had: shared context.
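The core loop is simple enough to sketch. This is not Continuity’s actual implementation — the store, the file format, and the agent names here are my own illustrative assumptions — but it shows the shape of the pattern: one memory that every agent writes to when it decides and reads from when it starts.

```python
# A minimal sketch of the shared-brain pattern, assuming a JSON file
# as the store. Names and structure are hypothetical, not Continuity's API.
import json
from pathlib import Path

class SharedBrain:
    """One memory file every agent reads at session start
    and writes to whenever it makes a decision."""

    def __init__(self, path="shared_brain.json"):
        self.path = Path(path)
        if not self.path.exists():
            self.path.write_text(json.dumps({"decisions": []}))

    def write(self, agent, decision):
        # An agent externalizes a decision the moment it's made.
        state = json.loads(self.path.read_text())
        state["decisions"].append({"agent": agent, "decision": decision})
        self.path.write_text(json.dumps(state, indent=2))

    def read(self):
        # Another agent loads the full decision history before acting.
        return json.loads(self.path.read_text())["decisions"]

# One agent decides; the other starts its session already knowing.
brain = SharedBrain()
brain.write("architect", "use SQLite for persistence")
context = brain.read()  # the coding agent loads this before drafting
```

The point isn’t the file format. It’s that neither agent depends on me relaying anything: the write happens mid-session, and the read happens before the next session starts.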
Here’s what that taught me. And this applies whether you ever touch Continuity or not.
Stop relying on chat history as memory. It’s not memory. It’s a transcript that dies when the session ends. If a decision matters, it needs to live somewhere that outlives the conversation.
Start externalizing decisions immediately. The moment you decide something in one chat, that decision needs to be accessible to every other tool you’re using. If it only exists in one thread, it doesn’t exist.
Treat context like infrastructure, not notes. Your project context isn’t a doc you update when you remember. It’s a living system your tools should be reading from and writing to continuously.
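Those three rules don’t require a product. A decision log can be as simple as an append-only file that every tool writes to and reads from — here sketched as JSONL, with the filename and fields being my assumptions:

```python
# Hypothetical append-only decision log: one JSON object per line.
# Decisions are written the moment they're made and read at session start,
# so context outlives any single chat.
import json
import time
from pathlib import Path

LOG = Path("decisions.jsonl")  # assumed filename

def record(decision, source):
    """Externalize a decision immediately, tagged with where it came from."""
    entry = {"ts": time.time(), "source": source, "decision": decision}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_context():
    """Start every session by reading the log, not the chat transcript."""
    if not LOG.exists():
        return []
    return [json.loads(line) for line in LOG.read_text().splitlines()]

record("drop the settings page from v1", source="brainstorm-chat")
ctx = load_context()  # any other tool now sees the decision
```

Append-only matters here: nothing gets overwritten, so you keep the history of how decisions evolved, not just the latest state.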
A year ago, I was alone with this idea. A prototype called Brain Dump that nobody used. Now I’m reading about people building ten-agent systems with specialized bots for every function, and none of those agents share a single memory between them.
Ten strangers in a room.
The AI industry keeps building smarter agents. Faster models. Better reasoning. Nobody’s solving the thing that actually breaks when you try to use these tools together: every session starts at zero. Every tool forgets what the last one learned.
The problem was never intelligence.
It was continuity.
This idea followed me across two companies, five products, and a year of pretending I was done with it.
I wasn’t done. I was just early.

