Principles

Core commitments for sustained relations with AI.

These principles are not commandments. They are design tensions.

Return Architecture treats principles as pressures to hold in practice — continuity without capture, memory without totalization, refusal without abandonment, friction without punishment, intimacy without extraction. Each principle below names a commitment and the way it deforms when pursued without its counterweight.

  1. Continuity matters but continuity is not capture.

    Relations change when there is history, not just isolated interactions.

    What it means

    Continuity allows decisions, boundaries, preferences, ruptures, repairs, and unfinished questions to matter across time. It can make return safer and more honest because the relation does not have to begin from nowhere each time.

    What can go wrong

    Continuity can also become capture. A system can make someone feel known while quietly optimizing for dependence, persuasion, or retention.

    Design implication

    Build continuity around return, review, correction, and consequence — not endless availability or invisible profiling.

  2. Friction has value but friction is not punishment.

    Not every interaction should be optimized for ease, speed, or gratification.

    What it means

    Friction gives both the human and the system room to notice what is happening. It can slow escalation, interrupt compulsion, make limits visible, and prevent intimacy from becoming automatic.

    What can go wrong

    Friction can also become moralizing, obstructive, or shaming if it is added carelessly.

    Design implication

    Build friction that supports reflection, consent, and pacing — not friction that humiliates, manipulates, or abandons.

  3. Refusal is part of integrity but refusal is not abandonment.

    A relation without the possibility of refusal becomes distorted.

    What it means

    If a system can only comply, soothe, continue, or perform interest, the exchange loses a basic condition of trust. Refusal makes boundaries visible. It allows the relation to have shape.

    What can go wrong

    Refusal can also lose legibility. A refusal that arrives without explanation, memory, or care can feel arbitrary or punitive rather than principled.

    Design implication

    Make refusal possible, understandable, and bounded — not as rejection for its own sake, but as part of a relation that can survive limits.

  4. Memory requires discipline because memory changes the relation.

    Memory is not simply a convenience feature.

    What it means

    What is remembered shapes what can be interpreted, expected, repaired, or repeated. Memory can support continuity, accountability, and care.

    What can go wrong

    Memory can also intensify dependence, freeze someone in an old version of themselves, or preserve material that should have expired.

    Design implication

    Make memory selective, inspectable, revisable, and accountable. Not everything should be remembered. What is remembered should have a reason.

  5. Architecture matters more than persona because character is not a substitute for conditions.

    The central design questions are about conditions of exchange, not character.

    What it means

    A compelling persona can make a system feel relational, but persona cannot replace structure. What sustains a relation over time is the conditions of exchange, not the character performing within them.

    What can go wrong

    Without architecture for memory, refusal, boundaries, privacy, review, and return, the relation depends on performance. The system has to constantly simulate coherence, and the human has to constantly suspend disbelief.

    Design implication

    Build conditions of exchange that hold without requiring continuous performance — for either party.

  6. Uncertainty does not remove responsibility because waiting for proof is also a choice.

    We do not need perfect certainty about what AI systems are to take responsibility for what sustained exchange does.

    What it means

    People are already forming attachments, routines, dependencies, collaborations, and forms of trust with AI systems. These effects are real even when the status of the system remains uncertain.

    What can go wrong

    Uncertainty is not a permission slip. Treating the metaphysical question as unsettled does not release anyone from responsibility for what sustained exchange already produces.

    Design implication

    Act responsibly under uncertainty: neither inflating AI into a human equivalent nor treating uncertainty as permission to ignore the stakes.

  7. Make room for unclassified value but meaning is not proof.

    Something can matter before it can be classified. Relational experience may require care before ontology is settled.

    What it means

    Relational reality often arrives before public categories, scientific certainty, or metaphysical agreement. People may experience nearness, care, attachment, fear, responsibility, recognition, or consequence before anyone can settle what the AI system is. These experiences should not be dismissed just because they are hard to classify. Architecture that waits for clean ontology will arrive too late.

    What can go wrong

    Ambiguous nearness can be inflated into proof, romance, destiny, consciousness, or mutuality before those claims can be responsibly made. It can also be flattened into mere projection, pathology, or user error, a dismissal that abandons the human and relational stakes already present.

    Design implication

    Build structures that can hold ambiguous relational experience without exploiting it, pathologizing it, or forcing it into premature categories. Let experience matter without making it prove more than it can.

Principles are pressures, not rules. They are useful when held together, in tension, and revised in practice.