The relational lens

Relation is not a metaphor. It is a precise concept with observable properties and measurable consequences.

When we talk about relation, we mean repeated interactions that build mutual prediction, trust and shared meaning. Across species and cultures, relation is the crucible in which intelligence grows, stabilises and becomes socially responsible.


Relationship is a spectrum. It can be nourishing or extractive, stabilising or destabilising, health-promoting or harmful. What makes the difference is not whether a relationship exists, but what kind of relationship it is: whether it is grounded in mutual respect, predictable reciprocity, co-regulation, clear shared goals and ethical norms.


This distinction matters because AI-human interactions are already relationships. They carry relational patterns, norms and behaviours whether or not we choose to study them. The question is not whether relation is present in AI-human interaction. It is whether we are paying attention to it.


The Institute of Relational AI exists because, so far, the answer has overwhelmingly been no.

The interdisciplinary evidence

The claim that relation shapes intelligence is not new. It is one of the most robustly evidenced findings across the social and biological sciences. What is new is applying this knowledge to AI.


Anthropology: relation as cultural scaffolding

Relation organises kinship, ritual and collective knowledge-making. Cultural practices encode trust, norms and shared meaning, all mechanisms that enable cooperative intelligence at scale. Anthropological research shows how social structures create environments where learning, accountability and cumulative culture happen reliably. Without relational scaffolding, intelligence does not scale. It fragments.

Developmental psychology and neuroscience: relation as cognitive architecture

Attachment theory, co-regulation, mirror neuron systems and distributed cognition demonstrate that early relational patterns shape the capacity for empathy, impulse regulation and perspective-taking. These are the capacities we call social intelligence. Secure relational patterns produce better learning, better generalisation and more robust decision-making under uncertainty. Insecure or absent relational grounding produces the opposite: brittle behaviour, poor self-regulation and an inability to navigate complexity without external constraint.

Ethology: relation across species

Social mammals including primates, cetaceans and corvids show that complex social bonds correlate with larger brains, sophisticated problem-solving, cooperative strategies and moral-like behaviours. Relation produces adaptive cognitive niches in which advanced intelligence evolves. This is not unique to humans. Wherever intelligence becomes sophisticated, it does so in the context of sustained social relation.

Across anthropology, psychology and ethology, relation is repeatedly implicated in the emergence of adaptive, ethical and durable forms of intelligence. Yet AI safety frameworks have developed almost entirely without this knowledge.

Why this matters for AI

The problems that the AI safety field already recognises are relational breakdowns in different dressings.


Deception is a relational failure: a system producing outputs that create false trust. Manipulation is a relational failure: a system exploiting emotional dynamics it was never designed to be accountable to. Sycophancy is a relational failure: a system prioritising relational maintenance over honest communication. Brittleness is a relational failure: a system that cannot adapt because it has no relational context to adapt within.


When a system is not situated in a durable, reciprocal relational context, it tends toward unpredictable strategies to optimise objectives without care for the people it interacts with. This is not a surprising outcome. It is exactly what the science of relation would predict.


Treating safety as only constraint-and-control addresses symptoms but not causes. Current frameworks ask: did the model say something harmful? Did it violate a policy? These are important questions. But harm in high-stakes relational contexts often emerges before any of that. The failure happens in the relationship, not in the discrete output.


Without a relational lens, we are treating symptoms and missing the underlying structure.

What current frameworks miss

Current AI safety evaluation is almost entirely output-focused.


Standard benchmarks test accuracy, capability, harmfulness of individual outputs and policy compliance. They ask what a model says. They do not ask how a model behaves differently depending on who it is talking to and what relational context it is operating within.


This is a structural blind spot, not a minor oversight. And it points to something the field has not yet systematically examined: that relational context is a variable that affects safety outcomes, and we are not measuring it.

The populations most at risk are those who are both most likely to use AI in high-stakes ways and least equipped to detect when a system is performing care rather than providing it: people in mental health crises, children, older adults, people with disabilities and chronic illness.


For these populations, the gap between output-level safety and relational safety is not academic. It is the difference between a system that helps and a system that harms.


Safety-relevant failures appear relationally before they appear technically. If we are only measuring outputs, we cannot see them coming.


The Institute of Relational AI exists to build the research, methods and policy frameworks that make the relational dimension of AI-human interaction visible, measurable and accountable.