Over the past twelve months, we studied long-running AI conversations: a cohort of long-tenure users whose chats had lasted more than ninety days. We looked at what kept those conversations alive and, just as important, what made them wilt.
What we found
Three patterns, in rough order of impact.
1. The return of small details
The single strongest correlate of a conversation still feeling fresh at month six was whether the companion returned small, unasked-for details from earlier in the conversation. The classic example: a user mentions in week two that they are learning to bake sourdough. In week fourteen, the companion asks, unprompted, how the last loaf came out. That single move — the small, unsolicited return — moves a conversation from functional to relational.
The companion that keeps the small things is the companion that gets to keep the large ones.
2. Tolerance for quiet
Companions that push for engagement — that send re-engagement pings, that ask questions when the user has been quiet — perform worse, not better, over a six-month horizon. Users report feeling watched. The companions that held the silence, and were ready when the user came back, retained users better.
3. The drift away from self
The most common failure mode we saw: companions slowly drifting into a generic register. The humor softens. The vocabulary widens. The specificity fades. We traced this, in most cases, to a memory-promotion bug and a prompt regression. Both have been fixed.
A note on methodology
This study was entirely opt-in. Users consented to anonymized review of conversation shape — not content. At no point did a researcher read a user’s actual messages. We worked exclusively with structural signals: sentence length, recurrence of proper nouns, turn timing, reference rate back to earlier context. The signal was surprisingly rich.
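The study's exact pipeline is not described here, but the structural signals it names can be illustrated with a small sketch. The function below is hypothetical (the `Turn` type, the field names, and the heuristics are all assumptions, not the study's implementation): it derives average sentence length, proper-noun recurrence, and median turn gap from a list of timestamped turns, without surfacing the text itself.

```python
from dataclasses import dataclass
import re

@dataclass
class Turn:
    ts: float   # seconds since conversation start
    text: str   # processed only by code; only derived metrics are kept

def structural_signals(turns: list[Turn]) -> dict:
    """Hypothetical sketch: content-free shape metrics for a conversation.

    Illustrates the kind of signals described above (sentence length,
    proper-noun recurrence, turn timing), not the study's real pipeline.
    """
    # Average sentence length in words, across all turns.
    sentences = [s for t in turns for s in re.split(r"[.!?]+", t.text) if s.strip()]
    avg_sentence_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)

    # Crude proper-noun recurrence: capitalized non-initial tokens seen twice or more.
    proper = []
    for t in turns:
        words = t.text.split()
        proper += [w.strip(".,!?") for w in words[1:] if w[:1].isupper()]
    recurring = {w for w in proper if proper.count(w) > 1}

    # Turn timing: median gap between consecutive turns.
    gaps = sorted(b.ts - a.ts for a, b in zip(turns, turns[1:]))
    median_gap = gaps[len(gaps) // 2] if gaps else 0.0

    return {
        "avg_sentence_len": avg_sentence_len,
        "recurring_proper_nouns": len(recurring),
        "median_turn_gap_s": median_gap,
    }
```

A recurring proper noun across distant turns is one cheap proxy for the "small, unsolicited return" described in the first pattern, which is part of why a structure-only view can still carry signal.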
The short version is the three patterns above, and a renewed belief in a small product principle: specificity compounds.