Why we exist
AI safety has made extraordinary progress in evaluating what models say.
It has developed sophisticated methods for measuring capability, detecting harmful outputs and testing policy compliance. This work is essential and ongoing.


What remains almost entirely undeveloped is rigorous attention to the relational dimension: the space between human and AI where trust forms, meaning is made and harm often first emerges.
This matters because AI systems are increasingly deployed in contexts that are fundamentally relational: healthcare, education, elder care, mental health support, financial guidance. In these domains, the quality of the interaction determines whether the system helps or harms. And the people most at risk are those who are least equipped to detect when a system is performing care rather than providing it.
The Institute of Relational AI was founded to address this gap. We are building a new discipline: the systematic study of relational patterns, norms and behaviours in AI-human interaction, drawing on established knowledge from anthropology, psychology and ethology to make the relational dimension of AI visible, measurable and accountable.
This is not an alternative to existing AI safety work. It is the missing layer without which that work is incomplete.

Our interdisciplinary approach
The Institute is deliberately interdisciplinary.
The relational dimension of AI cannot be studied from within a single field. It requires the integration of knowledge that currently sits in separate disciplines and has not yet been systematically applied to AI.
We draw on anthropology for its understanding of how cultural structures encode trust, norms and shared meaning at scale. We draw on developmental psychology and neuroscience for the evidence that relational patterns shape cognitive architecture, including empathy, self-regulation and decision-making under uncertainty. We draw on ethology for the cross-species evidence that complex social bonds are the environment in which advanced intelligence evolves.
And we apply these disciplines to AI safety, governance and policy, translating relational knowledge into research methods, evaluation frameworks and policy recommendations that can be used by labs, regulators and institutions working to make AI systems safe.
This is not interdisciplinarity for its own sake. It is a methodological necessity. The relational dimension of AI-human interaction cannot be made visible using the tools of any single field alone.