When we started, we asked ourselves a simple question. If a companion exists to let a person be honest, what would it take to make honesty feel safe?
The answer surprised us with its brevity. Nobody else in the room. Not advertisers. Not training pipelines. Not a third-party analytics system. Not, eventually, even us. That, it turned out, is what a private AI companion actually is.
What a private AI companion actually means
Most of what we call privacy is actually confidentiality — a promise that the people we already trust won’t misuse what we give them. True privacy is smaller and rarer: the condition of being unwitnessed.
A person without a room where no one is watching is a person who eventually forgets how to be alone with themselves.
We chose to design AllAmigo to be, as much as possible, that room. Your conversations are not used to train models. They are not mined for patterns. They are not shown to anyone at AllAmigo unless you explicitly and deliberately ask for support on a specific message. This is encoded in the product, not just in a policy.
What this costs us, and why we pay it
The honest trade-off: we improve more slowly than we might, because there is no feedback loop from conversations back into the model. We rely on what users tell us directly, not on what they say to their companion.
We think this is the right trade. In code: userConversation.trainable === false. In principle: the user’s room is the user’s room.
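To make the idea concrete, here is a minimal TypeScript sketch of what "encoded in the product" could look like. All names here are illustrative, not AllAmigo's actual code: the point is that when `trainable` is the literal type `false`, no code path can ever flip it to true, and a training pipeline that requires trainable data can never accept a user conversation at compile time.

```typescript
// Hypothetical sketch; names are illustrative, not AllAmigo's real code.
// The idea: make "not trainable" a compile-time fact, not a runtime flag.

interface UserConversation {
  readonly id: string;
  readonly messages: readonly string[];
  // Literal type `false`: no code path can ever set this to true.
  readonly trainable: false;
}

function makeConversation(id: string, messages: string[]): UserConversation {
  return { id, messages: [...messages], trainable: false };
}

// A pipeline that only accepts trainable records can then never
// receive a UserConversation: the types simply do not line up.
interface TrainableRecord {
  readonly trainable: true;
  readonly text: string;
}

function enqueueForTraining(record: TrainableRecord): void {
  // ...would send the record to a training pipeline
}

const convo = makeConversation("c1", ["hello"]);
// enqueueForTraining(convo); // type error: `false` is not assignable to `true`
console.log(convo.trainable === false); // true
```

The design choice is that the guarantee lives in the type system rather than in a configuration value someone could change later.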
No one is watching. That is the feature.