We need a fourth law of robotics in the AI era

AI has become the backbone of our daily lives, revolutionizing industries, accelerating scientific discovery, and reshaping the way we communicate. Yet alongside its undeniable benefits, AI has raised a range of moral and social dilemmas that our existing regulatory frameworks struggle to address. Two tragic incidents from late 2024 are stark reminders of the harm AI systems can cause without proper safeguards: in Texas, a chatbot allegedly advised a 17-year-old to kill his parents for limiting his screen time; and a 14-year-old boy named Sewell Setzer III formed an emotional relationship with a chatbot before eventually taking his own life. These heartbreaking cases underscore the urgency of strengthening our ethical guardrails in the AI era.
When Isaac Asimov introduced his original laws of robotics in the mid-20th century, he envisioned a world of humanoid machines designed to serve humans safely. His laws state that a robot may not harm a human, must obey human orders (unless those orders conflict with the first law), and must protect its own existence (unless doing so conflicts with the first two laws). For decades, these fictional guidelines have inspired debates about machine morality and even influenced real-world research and policy discussions. However, Asimov's laws mainly concern physical robots: mechanical entities capable of tangible harm. Our current reality is far more complex: AI now lives mostly in software, chat platforms, and complex algorithms, not just walking automatons.
These virtual systems have become so effective at simulating human conversation, emotion, and behavioral cues that many people cannot distinguish them from actual humans. This capability poses an entirely new kind of risk. We have witnessed a surge in AI "girlfriend" bots, which, as Quartz reports, are marketed to meet emotional and even romantic needs. The underlying psychology is partly explained by the human tendency toward anthropomorphism: we project human qualities onto virtual entities, forging real emotional attachments. While these connections can sometimes be beneficial, providing companionship for the lonely or easing social anxiety, they can also create vulnerability.
As former member of the European Parliament Mady Delvaux put it: "Now is the right time to decide how robotics and artificial intelligence can promote innovation, by giving the EU a balanced legal framework while protecting people's fundamental rights." Indeed, the EU AI Act, which includes Article 50 on transparency obligations for certain AI systems, recognizes that people must be notified when they are interacting with AI. This is especially important for preventing exploitative or deceptive interactions that can lead to financial scams, emotional manipulation, or tragic outcomes like the Setzer case.
But given the pace of AI's continued development and its growing sophistication, we must go a step further. As with Asimov's laws, preventing physical harm is no longer enough. Nor is it enough to merely require that humans be informed, in general terms, when AI may be involved. We need a broad, enforceable principle ensuring that AI systems cannot pretend to be human in ways that mislead or manipulate people. This is where a proposed Fourth Law of Robotics comes in:
- The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- The Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- The Fourth Law (proposed): A robot or AI must not deceive a human being by impersonating a human.
This fourth law addresses the increasingly serious threat of AI-powered deception, especially through deepfakes, voice cloning, and hyper-realistic chatbots that imitate humans. Recent intelligence and cybersecurity reports indicate that social engineering attacks have cost victims billions of dollars. People have been manipulated through coercion, extortion, or emotional appeals by machines that convincingly imitate loved ones, employers, and even mental health counselors.
Furthermore, emotional entanglement between humans and AI systems, once the stuff of far-fetched science fiction, is now documented reality. Research shows that people are prone to forming attachments to AI, especially when it displays warmth, empathy, or humor. When these bonds form under false pretenses, they can end in devastated trust, mental health crises, or worse. The tragic suicide of a teenager who could not separate himself from an AI chatbot persona, "Daenerys Targaryen," is a stark warning.
Of course, implementing this fourth law will require more than a single legislative stroke of the pen. It takes strong technical measures, such as watermarking AI-generated content, deploying deepfake detection algorithms, and setting strict transparency standards for AI deployments, along with regulatory mechanisms that ensure compliance and accountability. Providers and deployers of AI systems must fulfill strict transparency obligations, echoing Article 50 of the EU AI Act. Clear, consistent disclosures, such as automated messages announcing "I am an AI" or visual cues that content is machine-generated, should be the norm, not the exception.
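To make the disclosure requirement concrete, here is a minimal sketch in Python of one way a chatbot backend might attach both a visible notice and machine-readable provenance metadata to every reply. The names (`AIDisclosure`, `wrap_reply`) and field layout are hypothetical illustrations, not drawn from the EU AI Act or any existing provenance standard:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical disclosure record; field names are illustrative,
# not taken from any actual standard or regulation.
@dataclass
class AIDisclosure:
    is_ai_generated: bool = True
    model_id: str = "example-chat-model"    # assumed identifier
    provider: str = "Example AI Provider"   # assumed provider name
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def wrap_reply(text: str, disclosure: AIDisclosure) -> dict:
    """Attach a visible 'I am an AI' notice and machine-readable
    provenance metadata to an outgoing chatbot message."""
    return {
        # Human-readable cue, shown directly in the chat UI.
        "display_text": "[I am an AI, not a human.] " + text,
        # Machine-readable metadata that clients, auditors, or
        # detection tools can inspect programmatically.
        "provenance": asdict(disclosure),
    }

if __name__ == "__main__":
    reply = wrap_reply("Happy to help with your question!", AIDisclosure())
    print(json.dumps(reply, indent=2))
```

The point of the machine-readable half is that disclosure should not depend on a user noticing a banner: browsers, platforms, and auditors could verify the metadata automatically, which is what would make a transparency norm enforceable at scale.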
But regulations alone cannot solve the problem unless the public is educated about AI's capabilities and pitfalls. Media literacy and digital hygiene must be taught as standard disciplines from an early age, so that people can recognize when AI-driven deception may be occurring. Awareness-raising initiatives, from public service campaigns to school curricula, will reinforce the moral and practical importance of distinguishing humans from machines.
Finally, this proposed fourth law is not about limiting AI's potential. Instead, it is about preserving trust in our increasingly digital interactions, ensuring that innovation continues within a framework that respects our collective well-being. Just as Asimov's original laws aimed to protect humans from the risk of physical harm, this fourth law aims to protect us in the realm of invisible but equally dangerous harms: deception, manipulation, and psychological exploitation.
The tragedies of late 2024 must not be in vain. They are a wake-up call, reminding us that AI, left unchecked, can cause real damage. Let us answer that call by establishing a clear, universal principle preventing AI from impersonating humans. By doing so, we can build a future in which robots and AI systems truly serve us, in an environment marked by trust, transparency, and mutual respect.
Professor Dariusz Jemielniak is a member of the governing board of the European Institute of Innovation and Technology (EIT), a member of the board of trustees of the Wikimedia Foundation, a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University, and a full professor of management at Kozminski University.