
What is the future of AI rights?

As AI becomes more advanced to the point where it could become conscious, even autonomous, at what point is there a moral requirement to at least consider the extension of human rights for AI?

“Terrifying” is a word often used these days by technologists and governments when discussing the future of artificial intelligence (AI) and its potential impact on humanity.

They paint a picture of a future in which AI has stolen jobs, deepened inequality, and unjustly graded our schoolchildren with faulty algorithms.

“About damn time” was the response from AI policy and ethics wonks several months ago, when the White House’s science and technology advisory agency unveiled an AI Bill of Rights.

The document is President Biden’s vision of how the US government, technology companies, and citizens should work together to hold AI and the AI sector accountable going forward.

While we must consider and plan for the potential harms AI could cause, some argue the discussion mustn’t stop there.

As AI becomes more advanced to the point where it could become conscious, even autonomous, at what point is there a moral requirement to at least consider the extension of human rights for AI?

Maybe it’s a silly idea. For now, questions like this belong to the realm of philosophy, but they may become public policy questions in the future.

Recently, I finished reading Kazuo Ishiguro’s Klara and the Sun, a futuristic novel about a mother who purchases a highly intelligent “artificial friend” (AF) for her terminally ill daughter, Josie.

The book is set many years in the future, where we learn that AI has had a negative existential impact on humans. In this new world, children are genetically modified at birth to compete with AI, and some of them develop critical, unexplained illnesses as a result.

Klara, the AF, appears to be a conscious, thinking entity with the ability to learn and feel emotion. In my opinion, one of the book’s key themes is the question of whether AI can learn to love, on its own, without programming.

Throughout the text, the reader develops an attachment to Klara because of her deep concern for Josie’s health and well-being. In the end, it’s up to each individual reader to determine whether Klara loves Josie or is simply carrying out the objective of her programming. It’s difficult to discern.

At the end of the novel, Josie makes a miraculous recovery and goes on to live her life. However, Klara is discarded and awaits her fate in a junkyard along with other abandoned AFs.

In the closing scene, we’re presented with an image of Klara staring off at the sun, reminiscing about her time with Josie and the happy memories they made together.

The image is haunting because of the apparent love the AF has for Josie; Klara developed a deep interest in and connection with the teenage girl, largely putting Josie’s interests ahead of her own.

The use and abuse of Klara throughout the novel, I think, raises the question of whether human rights for AI will need to be considered in the future, which in turn forces serious ethical and philosophical questions about what it means to be human.

As AI technology becomes ever more sophisticated, many experts expect that the intelligence of AI will one day rival our own. While there is little discussion of the topic, robots forced to serve humans could be considered a new form of slavery. Robots like Klara may be used as a means to an end (temporary friendship, for example) and then discarded.

Opponents of this view may argue that the difference between a human slave and a robot slave is the desire or openness to serve. Others might argue that using, abusing, or discarding AI has little impact on people and the fabric of society, but where do we draw the line?

There are many thought experiments and tests that moral philosophers use to determine whether an entity has free will and/or agency, in order to build a rationale for establishing rights.

Thinking back to a philosophy of mind course I took as a philosophy major many years ago, I remember a key discussion about whether the ability to feel pain (physical or psychological) was grounds for establishing human rights.

If the entity in question can feel physical or psychological pain (and wishes to rid itself of that pain), the thinking was that these facts may entail certain rights. An entity does not necessarily need to experience consciousness (and the world) in the same way a human being does to warrant rights; instead, the ability to suffer inherently contains or gives rise to these rights.

This view is one set out by animal ethicists and was the position of the 18th-century English philosopher Jeremy Bentham, who maintained that the important question regarding animals is not “Can they reason? nor Can they talk? but Can they suffer?”

Certainly, there are protections against animal abuse; when the kids go off to college and the family dog is of less interest, it isn’t hauled off to the junkyard like Klara was.

Indeed, the law recognizes that domestic animals must be protected as they can suffer and the moral fabric of society is weakened if they’re allowed to be abused.

Similar arguments might be made to protect AI if it can one day think, feel, and suffer. However, at this point, AI is far from achieving any of these mental and physical states and perhaps, as some experts argue, never will.

Still, the philosophical question about whether we ought to extend human rights to AI if certain requirements are met is an interesting one.

First things first, though: protect ourselves from the harms AI could cause to humanity and society, then consider other key issues. AI will continue to be a pressing matter for policymakers for the foreseeable future, and as the conversation evolves, the thinking about AI rights must as well.