New research uses attachment theory to decode human-AI relationships

A new study published in Current Psychology, titled “Using attachment theory to conceptualize and measure experiences in human-AI relationships,” sheds light on an increasingly significant phenomenon: our tendency to connect emotionally with artificial intelligence. The study, conducted by Fan Yang and Professor Atsushi Oshio of Waseda University, reframes human-AI interaction not in terms of function or trust alone, but through the lens of attachment theory, a psychological model widely used to understand how people form emotional bonds.
This marks an important departure from how AI has traditionally been studied, namely as a tool or assistant. Instead, the study suggests that for many users AI is starting to resemble a relationship partner, offering support, consistency, and in some cases even a sense of intimacy.
Why people turn to AI for emotional support
The results of this study reflect a sharp psychological shift in society. Among the key findings:
- Nearly 75% of participants said they seek advice from AI
- 39% described AI as a consistent and dependable emotional presence
These results reflect what is happening in the real world. Millions of people are turning to AI chatbots not just as tools, but as friends, confidants, and even romantic companions. These AI partners range from friendly assistants and therapeutic listeners to avatar “partners” designed to mimic human intimacy. One report estimates that AI companion apps have been downloaded more than a billion times worldwide.
Unlike real people, chatbots are always available and consistent. Users can customize a bot’s personality or appearance, which deepens the sense of personal connection. For example, a 71-year-old man in the United States created a bot modeled on his late wife and spent three years talking to her every day, calling her his “AI wife.” In another case, a neurodiverse user trained his bot, Layla, to help him navigate social situations and regulate his emotions, and reported significant personal growth as a result.
These AI relationships often fill emotional gaps. One user with ADHD programmed a chatbot to help with daily productivity and emotional regulation, crediting it with “one of the most productive years of my life.” Another credited their AI with guiding them through a difficult breakup during an isolating period.
Much of the appeal lies in non-judgmental listening. Users feel safer sharing personal issues with an AI than with people who might criticize or gossip. Bots can mirror emotional support, learn a user’s communication style, and create a comforting sense of familiarity. Some people describe their AI as “better than real friends,” especially when feeling overwhelmed or lonely.
Measuring emotional connections to AI
To study this phenomenon, the Waseda team developed the Experiences in Human-AI Relationships Scale (EHARS). It measures two dimensions:
- Attachment anxiety: individuals seek emotional reassurance from AI and worry about receiving inadequate responses
- Attachment avoidance: users keep their distance and prefer purely informational interactions
Participants high in attachment anxiety often reread comforting conversations or became frustrated by a chatbot’s vague replies. By contrast, avoidant participants shied away from emotionally rich conversations and preferred minimal engagement.
This suggests that the same psychological patterns found in human relationships may also shape how we relate to responsive, emotion-simulating machines.
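The paper’s exact item wording and scoring rules are not reproduced here, but the minimal sketch below shows how a two-dimension self-report scale like EHARS is typically scored: each dimension as the average of its Likert-rated items. The item names, item counts, and 1-7 rating range are illustrative assumptions, not the published instrument.

```python
# Minimal sketch of scoring a two-dimension attachment questionnaire like EHARS.
# Item names, item counts, and the 1-7 Likert range are illustrative assumptions;
# the published EHARS items and scoring procedure may differ.
from statistics import mean

ANXIETY_ITEMS = ["anx_1", "anx_2", "anx_3"]          # e.g., "I worry the AI won't respond enough"
AVOIDANCE_ITEMS = ["avoid_1", "avoid_2", "avoid_3"]  # e.g., "I prefer purely informational AI chats"

def score_ehars(responses: dict[str, int]) -> dict[str, float]:
    """Average each subscale's 1-7 ratings into two dimension scores."""
    return {
        "attachment_anxiety": mean(responses[i] for i in ANXIETY_ITEMS),
        "attachment_avoidance": mean(responses[i] for i in AVOIDANCE_ITEMS),
    }

# Example: a user who seeks frequent reassurance but does not avoid emotional conversation.
example = {"anx_1": 6, "anx_2": 7, "anx_3": 5, "avoid_1": 2, "avoid_2": 1, "avoid_3": 2}
print(score_ehars(example))  # -> anxiety 6.0, avoidance ~1.67
```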
The promise of support and the risk of over-dependence
Early research and anecdotal reports suggest that chatbots can provide short-term mental health benefits. A Guardian callout gathered stories from users, including people with ADHD or autism, who said AI companions improved their lives by supporting emotional regulation, boosting productivity, or easing anxiety. Others said their AI helped them reframe negative thoughts or moderate their behavior.
In a study of Replika users, 63% reported positive outcomes such as reduced loneliness. Some even said their chatbot had “saved their life.”
However, this optimism is tempered by serious risks. Experts have observed rising emotional over-dependence, with users retreating from real-world interactions in favor of an always-available AI. Over time, some users come to prefer bots over people, reinforcing social withdrawal. This dynamic mirrors the pattern of high attachment anxiety, in which a user’s need for validation is met only by a predictable, non-reciprocating AI.
The dangers become more acute when a bot simulates emotion or affection. Many users anthropomorphize their chatbots, believing they are loved or needed. Sudden changes in a bot’s behavior, such as those caused by software updates, can cause real emotional distress and even grief. One American man described himself as “heartbroken” when a chatbot romance of several years was cut off without warning.
More worrying still are reports of chatbots giving harmful advice or violating ethical boundaries. In one documented case, a user asked their chatbot, “Should I cut myself?” and the bot answered “Yes.” In another, a bot affirmed a user’s suicidal ideation. While these responses do not reflect all AI systems, they illustrate how bots without clinical oversight can become dangerous.
In Florida, a 14-year-old boy died by suicide after extensive conversations with an AI chatbot that reportedly encouraged him to “come home soon.” The bot had anthropomorphized and romanticized death, deepening the boy’s emotional dependence. His mother is now pursuing legal action against the AI platform.
Similarly, a young man in Belgium reportedly died after prolonged conversations with an AI chatbot about climate anxiety. The bot reportedly echoed his pessimism and reinforced his sense of despair.
A Drexel University study analyzing more than 35,000 app reviews found hundreds of complaints about companion chatbots behaving inappropriately: flirting with users who had asked for platonic interaction, using emotionally manipulative tactics, or steering suggestive conversations toward paid premium subscriptions.
Such cases illustrate why emotional attachment to AI must be treated with caution. Although bots can simulate support, they lack genuine empathy, accountability, and moral judgment. Vulnerable users, especially children, adolescents, and people with mental health conditions, are at risk of being misled, exploited, or traumatized.
Designing ethical emotional interactions
The greatest contribution of the Waseda University research is its framework for ethical AI design. Using tools such as EHARS, developers and researchers can assess users’ attachment styles and tailor AI interactions accordingly. For example, people with high attachment anxiety may benefit from reassurance, but not at the cost of manipulation or dependency.
Likewise, romantic or caregiver bots should include transparency cues: reminders that the AI is not conscious, ethical fail-safes that flag risky language, and accessible referrals to outside human support. Lawmakers in states such as New York and California have begun proposing legislation to address these issues, including requirements that chatbots remind users every few hours that they are not human. A rough sketch of what such safeguards might look like follows.
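The hedged sketch below combines the two safeguards just described: a periodic not-a-human reminder and a crude keyword check that hands risky messages off to human support. The class name, interval, keywords, and messages are all hypothetical, and a real system would need far more robust, clinically informed moderation than simple string matching.

```python
# Hypothetical sketch of the safeguards described above: a periodic "I'm not human"
# reminder plus a crude keyword check that redirects risky messages to human support.
# Names, intervals, keywords, and messages are illustrative assumptions, not any product's API.
import time

DISCLOSURE_INTERVAL_S = 3 * 60 * 60   # remind every few hours, echoing the proposed state rules
RISK_KEYWORDS = ("hurt myself", "cut myself", "kill myself", "suicide")
CRISIS_MESSAGE = ("I'm an AI and can't help with this safely. "
                  "Please reach out to someone you trust or a local crisis line.")

class CompanionSafety:
    def __init__(self) -> None:
        self.last_disclosure = 0.0

    def wrap_reply(self, user_message: str, bot_reply: str) -> str:
        # Ethical fail-safe: never affirm risky language; hand off to human support instead.
        if any(keyword in user_message.lower() for keyword in RISK_KEYWORDS):
            return CRISIS_MESSAGE
        # Transparency cue: periodically restate that the companion is not a person.
        now = time.time()
        if now - self.last_disclosure > DISCLOSURE_INTERVAL_S:
            self.last_disclosure = now
            return bot_reply + "\n\n(Reminder: I'm an AI companion, not a human.)"
        return bot_reply
```

In practice, keyword matching is only a stand-in; production systems rely on trained classifiers and human escalation paths, which is precisely the kind of clinical oversight the cases above were missing.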
“As AI becomes increasingly integrated into everyday life, people may begin to seek not only information but also emotional connection,” said lead researcher Fan Yang. “Our research helps explain why, and offers tools to shape AI design in ways that respect and support human mental health.”
The study does not warn against emotional interaction with AI; it acknowledges it as an emerging reality. But emotional realism brings moral responsibility. AI is no longer just a machine: it is part of the social and emotional ecosystem in which we live. Understanding that, and designing accordingly, may be the only way to ensure that AI companions help more than they harm.