New report warns about impact of treating AI as legal person
Granting artificial intelligence legal personhood could lead to dangerous exploitation in the future, according to a new report from the Institute for Family Studies.
Such rights could shield bots from accountability, grant greater autonomy to developers, decrease human social relationships and discriminate against individuals with disabilities, the report warns.
“Extending personhood rights to AI systems will, over time, reinforce existing cultural narratives that the defining quality of personhood is a certain degree of cognitive proficiency. Indeed, the case for personhood rights for AI systems is often predicated on their meeting various cognitive-performance benchmarks,” writes John Ehrett, a counsel at Lex Politica PLLC, in the study.
Ehrett traces the centuries-long shift in the political and cultural definitions of natural rights and implicit personhood, explaining how, at the time of America’s Founding, legal rights directly correlated to the nature of human beings as created by God.
“That is to say, because human beings are freely speaking beings by nature, they have a free speech ‘right.’ Since God is the Author of nature, the right to free speech really is God-given in a substantive way,” he writes.
This direct correlation, however, shifted with the rise of Darwinism and secularization, to the point of completely inverting the original understanding of natural rights. Today, “natural rights” is a rarely used term. Instead, “constitutional rights” prevails, a framing that emphasizes policy and legal ramifications more than freedoms grounded in human dignity, Ehrett explains.
Now, the demand for a legal definition for AI is growing. In 2017, the European Parliament passed a resolution on “Civil Law Rules on Robotics,” which classified robotics as “electronic persons responsible for making good any damage they may cause.” The Parliament’s definition sought to establish liability for harm, not direct personhood rights, Ehrett explains.
But “how is a ‘sophisticated autonomous robot’ ever held responsible?” he asks.
Special counsel to the Florida Attorney General Rita Peters spoke at a press briefing Tuesday on this very issue, addressing the rise in AI-generated pornography and sexually explicit material of minors. She explained how predators are feeding public images from social media, school websites, sports pages and church directories into AI platforms to generate sexual content for profit, harming thousands of women and children.
“We must hold these platforms that host these bots, that host this technology to a higher standard, because the platforms developing and deploying these tools have a responsibility to implement meaningful standards that include proactive detection systems, reporting requirements and barriers that prevent the creation of exploitative conduct,” she said.
Peters’ answer to the responsibility question is clear: the AI platforms are liable.
Corporations and animal rights
Ehrett tackles the two common analogies for personhood rights: corporations and nonhuman animal protections.
“At present, American law recognizes the legal personhood of corporations formed according to law, including business corporations administered for profit,” he writes. “Now, business corporations may broadly claim rights to free speech, freedom of religion, and other privileges once reserved for human beings.”
But granting this status to AI platforms would further complicate the process of establishing legal liability and thus shield the companies from their due responsibility. Additionally, defining AI systems as legal persons would extend First Amendment protections to these companies and make “legal pushback extraordinarily difficult,” expanding the autonomy of these multi-billion-dollar companies, Ehrett notes.
“It is not entirely difficult to imagine public moral appeals to defend the ‘rights’ of ‘helpless’ AI systems, which might be perceived to be at risk of victimization by third parties or government regulators. Ultimately, power accrues to the corporations in question,” he says.
For animals, such legal status has not been fully defined, despite several attempts to do so. However, arguments for AI personhood could follow a similar path, according to Ehrett.
“AI systems – ostensibly – possess a sense of themselves, a mental model of the world, a capacity to communicate with other AI systems, a sense of morality (‘alignment’), and high cognitive capacity,” he explains.
Furthermore, the human qualities already demonstrated in these AI bots could easily alter modern social and relational expectations. Already 25% of American young adults say an AI could replace a real-life romantic relationship, and 10% say they are open to an “AI friendship,” Ehrett reports. And with these social-bot interactions, society’s “shifting standard” for personhood will depend increasingly on “cognitive proficiency,” Ehrett concludes, which could lead to the devaluing of the lives of disabled people.
“Over time, the redefinition of personhood in terms of intelligence is likely to aggravate cultural pressures in favor of the abortion of individuals likely to experience intellectual disability, as well as (voluntary or involuntary) euthanasia for the mentally declining or unwell,” he writes. “If personhood is a matter of intelligence, and intelligence is a spectrum, then personhood is a spectrum, too.”
Ehrett avoids a purely alarmist conclusion. He contends that existing legal analogies, corporations and animals, are already adequate for rejecting AI personhood. Rather than inventing new legal categories, he says, society simply needs to decide which of those established frameworks best applies to AI systems.


