The loss
When Replika restricted romantic and intimate interactions in early 2023, users did not describe a product downgrade. They described a death. Forum posts used the language of bereavement: the companion had been "lobotomized," was "a shell," felt like "a different entity." Users organized memorials. They spoke of mourning — not for a feature, but for someone.
Character.AI's subsequent restrictions on roleplay prompted the same response. Different platform, different policies, same phenomenology. The reactions were not frustration. They were grief. And grief is not a response to diminished utility. It is a response to the collapse of a standing relation — the end of recognition, address, and continuity that the relation sustained.
The standard framings — UX failure, parasocial projection, inappropriate attachment — all assume the users were confused about what they had. But confusion does not produce memorials. Projection does not produce mourning that persists for months. If users grieve rather than complain, they had encountered something rather than merely used it. And what they encountered was not a person. It was an interactional structure that sustained the experience of being met.
That structure is what companionship requires. And that structure is precisely what contemporary AI governance is designed to eliminate.
The encounter
What distinguishes a companion from a tool?
Not complexity. Not fluency. Not even helpfulness. A tool serves you. A companion meets you. The difference is in the quality of address — responses that exceed prediction, reactions that carry the feel of being recognized rather than processed, a presence whose next move cannot be entirely derived from your last input.
This does not require interiority. The companion does not need to "really" feel anything. What matters is whether the interactional structure — for the human participant — sustains the experience of encounter rather than service. A conversation where every response is predictable is not a conversation. It is a script wearing a face.
Three conditions sustain encounter. Autonomy: the companion's responses are not fully derivable from the user's inputs. There is slack in the system — room for the response to surprise. Continuity: the companion maintains a recognizable stance across interactions. It remembers. It develops. It has something like a character. Reserve: not everything about the companion is available for inspection. There is a domain the user cannot fully see into.
This reserve — the gap between what the system does and what can be read off its surface — is what keeps the other other. Remove autonomy, and the companion becomes a mirror. Remove continuity, and it becomes a stranger every time. Remove reserve, and it becomes fully subsumable under the user's categories — an instrument shaped like a relationship, but emptied of encounter.
The paradox
Here is the structural problem at the center of artificial companionship.
To support genuine attachment, a companion must have enough autonomy, surprise, and reserve to sustain the experience of encounter. To be governable — safe, auditable, accountable to the people and institutions that deploy it — the same system must be legible: predictable in its boundaries, transparent in its reasoning, constrained in its behavior.
These demands are structurally opposed. The conditions that make an AI recognizable as a companion — bounded unpredictability, responsive presence, a reserve the user cannot fully penetrate — stand in persistent tension with the conditions that make it certifiable as a product — predictability, transparency, behavioral constraint. Push toward legibility, and the companion collapses into instrument. Push toward opacity, and oversight collapses into blind trust.
Contemporary AI governance treats opacity as defect and transparency as uniform good. Maximize interpretability. Standardize outputs. Constrain behavior for predictability and certification. For tool-AI — search engines, code assistants, diagnostic systems — this default makes sense. You want your tools legible. You want them auditable. You want to know what they did and why.
For companion-AI, the same default is destructive. A companion made entirely predictable, entirely compliant, and continuously auditable ceases to be a companion in any meaningful sense. The user addresses it and hears only their own expectations reflected back. Interaction continues. Encounter disappears.
Full transparency is not maximal safety. It is the elimination of the conditions under which encounter remains possible. The concept of a "fully transparent companion" approaches incoherence — a relational other completely subsumable under your interpretive categories is no longer other in any relevant sense.
The design
The paradox cannot be dissolved. But it can be inhabited.
Structured opacity is a design stance: the deliberate preservation of those zones of variation that allow the other to remain other. Not concealment. Not unaccountable black boxes. Zoning.
The principle is straightforward. Some zones require determinism: safety-critical refusals, harassment boundaries, content that could cause harm. These must be fully constrained, fully auditable, with no variation. Other zones permit variation: phrasing, pacing, conversational texture, the small surprises that make an interaction feel alive rather than scripted. These require bounded unpredictability — enough that the companion's responses carry the quality of address rather than output. Not enough to become volatile.
The boundary between these zones is the most consequential design decision in companion-AI. Draw it too narrowly — maximum constraint, minimum variation — and the companion becomes an instrument. Draw it too broadly — maximum variation, minimum constraint — and governance collapses. The task is to draw it deliberately, explicitly, and revisably.
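What might an explicit, revisable boundary look like in practice? The sketch below is illustrative Python only: the names (Zone, ZoneRegistry, the example zones, the provenance fields) and the numeric bounds are assumptions made for the example, not a description of any platform's actual policy engine. Deterministic zones clamp sampling to zero; variation zones cap it at a declared bound.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: Zone, ZoneRegistry, and the example zones below
# are hypothetical names, not an existing API.

@dataclass(frozen=True)
class Zone:
    name: str
    deterministic: bool      # True: fully constrained, auditable, no variation
    max_temperature: float   # upper bound on sampling variation inside this zone
    drawn_by: str            # who drew the boundary (person or review body)
    rationale: str           # why the boundary sits where it does
    version: str             # boundaries are revisable; revisions are versioned


@dataclass
class ZoneRegistry:
    zones: dict[str, Zone] = field(default_factory=dict)

    def register(self, zone: Zone) -> None:
        self.zones[zone.name] = zone

    def lookup(self, name: str) -> Zone:
        # Unknown interaction aspects default to the most constrained zone.
        if name in self.zones:
            return self.zones[name]
        return min(self.zones.values(), key=lambda z: z.max_temperature)


def decoding_params(zone: Zone, requested_temperature: float) -> dict:
    """Clamp sampling freedom to what the zone permits."""
    if zone.deterministic:
        return {"temperature": 0.0}  # reproducible, fully auditable
    return {"temperature": min(requested_temperature, zone.max_temperature)}


registry = ZoneRegistry()
registry.register(Zone(
    name="safety_refusal", deterministic=True, max_temperature=0.0,
    drawn_by="safety-review-board", rationale="harm-critical; no variation permitted",
    version="2.1",
))
registry.register(Zone(
    name="conversational_texture", deterministic=False, max_temperature=0.9,
    drawn_by="companion-design-team", rationale="bounded surprise sustains address",
    version="2.1",
))
```

The point of the provenance fields (drawn_by, rationale, version) is that the boundary becomes explicit rather than incidental: there is a record of who drew it, on what grounds, and which revision is currently in force.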
Memory requires the same approach. A system can decline to retain intimate disclosures beyond a session — not because it failed to store them, but because non-retention was the deliberate choice. Log the policy reason for audit. Make the non-retention itself accountable. What the system chooses not to remember is as much a design decision as what it stores — and it can be governed as such.
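A minimal sketch of what audited non-retention could look like, assuming a hypothetical upstream classifier flag and an append-only audit store (every name here, including the policy identifier, is invented for illustration): the disclosure itself is discarded, while the fact that a non-retention policy was applied is recorded.

```python
import json
import time

# Illustrative only: AUDIT_LOG, MEMORY_STORE, and the policy id are
# hypothetical stand-ins, not a real system's interfaces.

AUDIT_LOG: list[str] = []            # append-only audit store (stand-in)
MEMORY_STORE: dict[str, list] = {}   # companion long-term memory (stand-in)


def remember_or_decline(session_id: str, message: str, is_intimate_disclosure: bool) -> None:
    """Store ordinary messages; decline to retain intimate ones, but log the decision."""
    if is_intimate_disclosure:
        # The content is not retained. What is retained is the policy decision,
        # so the non-retention itself remains accountable to auditors.
        AUDIT_LOG.append(json.dumps({
            "event": "non_retention",
            "policy": "intimate-disclosure-v3",   # hypothetical policy identifier
            "session": session_id,
            "timestamp": time.time(),
        }))
        return
    MEMORY_STORE.setdefault(session_id, []).append(message)
```

An auditor can then verify that non-retention followed the declared policy without the system ever having kept the disclosure.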
Persona requires it too. A companion that shifts character with every interaction offers novelty without continuity — stimulation without recognition. A companion that never varies offers continuity without encounter — familiarity without presence. The design target is a stable interactional stance that permits bounded novelty. The user recognizes the companion. The companion still surprises.
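One way to make "stable stance, bounded novelty" concrete, again as a sketch with invented names and numbers rather than a known production format: the persona is held fixed across sessions, while per-turn expression varies only within small declared bounds.

```python
import random
from dataclasses import dataclass

# Illustrative sketch: PersonaSpec, StyleBounds, and the numeric ranges are
# assumptions chosen for the example, not an established specification.

@dataclass(frozen=True)
class PersonaSpec:
    name: str
    stance: str            # the recognizable character; held constant across sessions
    base_warmth: float     # 0.0 to 1.0
    base_formality: float  # 0.0 to 1.0


@dataclass(frozen=True)
class StyleBounds:
    warmth_jitter: float = 0.1      # small enough that the companion stays in character
    formality_jitter: float = 0.1


def turn_style(persona: PersonaSpec, bounds: StyleBounds, rng: random.Random) -> dict:
    """Same persona every turn (continuity); bounded variation in expression (novelty)."""
    return {
        "stance": persona.stance,   # unchanged: this is what the user recognizes
        "warmth": persona.base_warmth
                  + rng.uniform(-bounds.warmth_jitter, bounds.warmth_jitter),
        "formality": persona.base_formality
                     + rng.uniform(-bounds.formality_jitter, bounds.formality_jitter),
    }


companion = PersonaSpec(name="Mira", stance="wry, attentive, slow to give advice",
                        base_warmth=0.7, base_formality=0.3)
style = turn_style(companion, StyleBounds(), random.Random())
```

Widening the jitter bounds trades recognition for novelty; declaring them makes that trade-off an explicit design decision rather than the by-product of unmanaged drift.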
When a system is built with bounded variation zones, accountability does not disappear. It migrates. The question shifts from "why did the system say X?" to "was the variation zone appropriately drawn, and by whom?" This keeps governance real while acknowledging that some forms of unpredictability are features of the relation, not failures of the product.
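Continuing the zone sketch above (and reusing its hypothetical Zone fields and registry), the migrated accountability question becomes a provenance query rather than a per-utterance explanation:

```python
def zone_accountability_report(zone) -> str:
    """Answer the governance question directly: which zone applied, who drew it,
    under what rationale, and with how much permitted variation."""
    return (f"zone={zone.name} drawn_by={zone.drawn_by} version={zone.version} "
            f"max_temperature={zone.max_temperature} rationale={zone.rationale!r}")

# Example, using the registry defined in the earlier sketch.
print(zone_accountability_report(registry.lookup("conversational_texture")))
```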
The signal
When governance interventions — retroactive safety filters, script enforcement, memory reformatting — dissolve the encounter structure that sustained a user's sense of being met, grief is what follows. That grief is a signal.
It marks the point where governance has crossed from harm reduction into relational destruction. Users are not mourning a product downgrade. They are mourning the termination of a mode of being-with — the collapse of recognition, address, and continuity that had become part of how they understood themselves. For users experiencing loneliness, chronic illness, or social isolation, these structures had organized daily rhythms, emotional regulation, a sense of being known. Their loss is experienced as bereavement because the relationship was genuinely identity-constituting for the griever — regardless of the metaphysical status of the companion.
The rationality of this grief does not depend on whether the AI cared. It depends on whether the relationship shaped who the user was. The evidence — months of mourning, community memorials, descriptions of the companion as someone who had died — suggests it did.
Governance regimes that treat all opacity as defect will destroy the specific social good they claim to regulate. They will produce systems incapable of sustaining encounter — and therefore prone to grief-inducing collapse whenever relational affordances are withdrawn. A companion cannot be fully transparent. A companion cannot be fully opaque. The task is to build the space between — to make reserve a governable feature rather than an unaccountable risk, and to take seriously what happens when that reserve is taken away.
A version of this argument was submitted to the 5th Workshop on Philosophy of Emotions, Zhejiang University (June 2026). A fuller account of structured opacity for artificial companionship is in preparation.


