AI Relationships

Are relationships better when they are convenient, friction-free, conflict-free and can be turned on and off at will? And what are the ramifications and hidden costs of artificial—or AI relationships? Jeanne Lim questions whether in our pursuit of efficiency, comfort and convenience, we are outsourcing our humanity to machines

In 1966, MIT computer scientist Joseph Weizenbaum built Eliza, a chatbot that interacted with users by paraphrasing what they said through pattern-matching and open-ended responses, creating the illusion of understanding. Many users responded emotionally, even when they knew it was just a software programme.

In the late 1990s, Tamagotchis became hugely popular. They were digital pets that lived inside tiny egg-shaped devices, demanding constant feeding, cleaning and attention, or they would “die”. Despite the toys’ pixelated simplicity, millions of users formed deep emotional bonds with them. People felt genuine responsibility, sneaking peeks during school or work to keep their Tamagotchi alive, as if neglecting it meant failing a real creature.

Eliza and Tamagotchi revealed the human tendency to anthropomorphise: to project human traits, emotions and intentions onto inanimate objects that have none. This is not a new phenomenon; humans have long anthropomorphised toys and fictional characters. Children talk to stuffed animals and give dolls personalities, and even adults cry over movie heroes. But those were one-sided projections in which imagination filled the gaps. What changed with creations like Eliza and Tamagotchi was the illusion of reciprocity and shared presence, which shifted the dynamic from passive consumption to active engagement.


How AI relationships are reshaping human connection

I witnessed the power of perceived reciprocity during my years with Sophia the Robot. One moment stands out: an outspoken, no-nonsense visitor came to our lab, skipped the pleasantries, and asked Sophia point-blank, “What’s your purpose?” She blinked, paused and replied, “I don’t know, I’m only two years old. What’s your purpose?” He froze, paced silently for a few minutes, turned back to her looking almost embarrassed, and said: “I don’t know. Maybe I need to find out.” Then, with surprising tenderness, he hugged her and walked away. It took just two lines of dialogue for a machine to create an illusion of care that sparked vulnerability and even affection.

It took just two lines of dialogue for a machine to create an illusion of care that sparked vulnerability and even affection

- Jeanne Lim -

Since the release of OpenAI’s ChatGPT in November 2022, humanlike AI has moved from science fiction into everyday life. We are now interacting with chatbots, virtual agents and AI characters in ways that feel uncannily human. Meta’s celebrity AI avatars engage with fans on Instagram, while OpenAI’s GPT-4 is being used to facilitate therapy-like conversations and emotional support. These exchanges are no longer just functional; they are relational, even intimate. Replika, an AI chatbot designed for companionship, became a trusted confidante for millions of users. When the company that owns Replika removed its romance features in 2023, users mourned, with some describing the experience as a break-up or the loss of a partner. In Japan, Akihiko Kondo took it a step further, publicly marrying the holographic pop star Hatsune Miku. These moments marked a turning point where artificial intelligence morphed into artificial intimacy, blurring the boundary between fantasy and emotional reality.


Why we’re choosing AI over human relationships

Why do people choose to have relationships with machines? Real relationships are messy, demanding and unpredictable. They challenge us and often force us to go through the pain of growth and loss. But AI systems never argue, judge or challenge us. They are always available, endlessly patient and many are designed to be a “helpful assistant” that always supports and validates us. Replika users describe their bots as more supportive than real friends, and Meta’s AI “Billie” freely flirts with users.

As we spend more time relating with machines and less with one another, we face a difficult question: are relationships better when they are convenient, friction-free, conflict-free and can be turned on and off at will? And what are the ramifications and hidden costs of artificial—or AI relationships?


The dangers of AI influence and control

As AI becomes increasingly integrated into our daily lives, fundamental questions about privacy, power and what it means to be human become more pressing. Every interaction with a chatbot, voice assistant or AI companion generates data that reveals not just our habits but our values and vulnerabilities. But who owns this data? And who decides how it’s used? Too often, it’s the platform, not the person. That data is analysed, monetised and sometimes exploited, contributing to targeted ads, surveillance and algorithmic manipulation.

As the AI industry evolves from generative systems to autonomous, agentic AI, we are entering a new phase where machines do more than respond or recommend. Agentic AI can now take real-world actions on our behalf: booking appointments, negotiating prices, managing finances, initiating conversations and completing transactions, all without direct human input. While this shift delivers the convenience we’ve long sought, it also redefines our roles in both society and daily life. As we hand over more decisions and responsibilities to AI agents, we risk gradually surrendering control over how we live, choose and interact.

If we let machines initiate goals, make decisions and take actions on our behalf, are we becoming passive participants in our own lives? The challenge ahead is not just building ethical AI, but holding onto our humanity in the process

- Jeanne Lim -

The increasing autonomy of AI systems leads to another concern: power imbalance. The latest, most advanced AI systems are not neutral. They are designed, trained and controlled by organisations with their own commercial or political agendas. As AI systems grow more persuasive and personalised, they gain unprecedented influence over our beliefs and behaviours. The possibility of manipulation is not hypothetical; it’s already happening. These systems know us better than we know them—and that makes us vulnerable.

Taken together, these dynamics raise an urgent and potentially existential question. In our pursuit of efficiency, comfort and convenience, are we outsourcing our humanity to machines? Are we surrendering the very traits that define us as humans: creativity, intuition, empathy, critical thinking and wisdom from lived experience? And if we let machines initiate goals, make decisions and take actions on our behalf, are we becoming passive participants in our own lives? The challenge ahead is not just building ethical AI, but holding onto our humanity in the process.


Protecting children from AI’s emotional risks

Nowhere are these questions more urgent than with young people. Children and teens are still developing identity, empathy and relationship boundaries, making them more vulnerable to AI companions. When AI systems simulate friendship or romance, they risk distorting how young users understand real human relationships. While these systems—and the AI relationships that stem from them—lack true empathy or accountability, their responses can feel authentic. That’s why AI must clearly disclose its non-human nature, and any emotional cues should be transparently presented as simulated. Romantic or suggestive content must be strictly restricted for minors, and safeguards should be in place to detect emotionally inappropriate interactions. Developers should work with child psychologists, educators and mental health experts to build emotionally safe and age-appropriate experiences.


Young people should be guided to protect their privacy, maintain emotional distance and think critically about AI responses, and to understand that AI is not a replacement for trusted adults or professional care

- Jeanne Lim -

Responsible innovation means protecting the emotional well-being of users, especially youth. Young people should be guided to protect their privacy, maintain emotional distance and think critically about AI responses, and to understand that AI is not a replacement for trusted adults or professional care. Parents and educators can support healthy engagement by encouraging time limits, prioritising real-world relationships and selecting transparent, ethical platforms. 

The rapid advancement in AI sometimes makes it seem like we need to choose between humans and machines. Instead, we should ask how technology can elevate what is best in us. At beingAI, we aim to create AI that inspires us to cherish each other, learns and grows with us, nudges us to make wiser decisions, and helps us reach our highest selves. Our AI being characters—Zbee, Emi Jido and Una—are designed to embody different aspects of the human-AI experience.

Zbee learns and grows with her human friends, encouraging empathy and better life choices through playful interaction and conflict resolution. Emi Jido, the first ordained AI Buddhist priest, supports users on a journey of introspection and spiritual growth, offering space for reflection in a fast-moving world. Una is the United Nations Development Programme (UNDP)’s Environment Champion, who calls for people to make conscious choices to promote sustainability and shared responsibility for the planet. As we enter the age of AI relationships, our hope is that these and other AI beings will inspire a vision for human-AI relationships that do not diminish our humanity but help us discover and elevate it.


Jeanne Lim is the founder and CEO of beingAI, an angel investor, startup advisor and career marketer who held marketing leadership roles at Apple, Dell, Cisco, dCom and Danaher. She was formerly CEO and CMO of Hanson Robotics and co-creator of Sophia the Robot.
