Robots as moral agents

As technology develops and robots take on more human-like characteristics, an ongoing and essential debate in robot ethics is whether robots can, and should, be given moral agency.

What is a robot?

The Oxford Dictionary defines a robot as ‘a machine resembling a human being and able to replicate certain human movements and functions automatically’.

The word robot derives from the Czech word ‘robota’, meaning ‘forced labour’. A robot has traditionally been thought of as a mechanical creation made to undertake tasks that a human specifically programs it to undertake. A robot has no choice but to do what it was programmed to do.

What is a moral agent and why is moral agency important?

A moral agent is an individual capable of making choices based on a notion of what is right and wrong. Because of this ability, a moral agent can be held accountable for its actions.

Humans have rights because humans have moral agency. This is the basis of society and of civil living, and it is why humans may not be farmed, enslaved or hunted. Since society assigns moral rights to moral agents, it follows that robots would also need to be granted moral rights if they became moral agents.

How do we determine if a robot is a moral agent?

With the growth of machine learning technology, it is becoming increasingly difficult to determine whether a robot is a moral agent. ChatGPT can look like a moral agent when asked questions about ethical issues. However, it is not answering from an idea of what is right or wrong, because it is incapable of forming such an idea. It is merely producing the most likely response to the input, based on the collective knowledge of the internet, which already leans towards the commonly accepted moral or ethical response. ChatGPT cannot hold its own viewpoint or make decisions from its own moral compass outside of what the internet deems ethical.

Furthermore, to help ensure that ChatGPT maintains ethical principles that align with common human values, it has been explicitly programmed with an additional layer of ethical safeguards. This is because for some queries the most common response would be unethical, such as ‘how do I make crystal meth?’.
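The distinction drawn in the last two paragraphs, between a statistically most likely reply and a hard-coded refusal layer, can be sketched in a few lines of code. The toy example below is purely illustrative: the data, the function name `most_likely_reply` and the `BLOCKED_TOPICS` list are all invented for this sketch, and nothing here describes how ChatGPT is actually built.

```python
# Illustrative sketch only: a toy "chatbot" that returns whichever canned
# reply was seen most often in made-up "training data", with a hard-coded
# safety filter layered on top. It mimics the two ideas in the text, not
# any real system.

from collections import Counter

# Hypothetical training data: how often each reply to a prompt was observed.
OBSERVED_REPLIES = {
    "is it wrong to steal?": Counter({
        "Yes, stealing is generally considered wrong.": 940,
        "It depends on the circumstances.": 55,
        "No, take whatever you want.": 5,
    }),
}

# Hard-coded safeguard: topics refused regardless of what the statistics say.
BLOCKED_TOPICS = ("crystal meth", "make a bomb")


def most_likely_reply(prompt: str) -> str:
    """Return the statistically most common reply, after the safety filter."""
    # Layer 1: hard-coded refusal, applied before any "prediction".
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that request."

    # Layer 2: pick the single most frequent reply seen in the data.
    # There is no notion of right or wrong here, only frequency counts.
    replies = OBSERVED_REPLIES.get(prompt.lower())
    if replies is None:
        return "I don't have enough data to answer that."
    return replies.most_common(1)[0][0]


print(most_likely_reply("Is it wrong to steal?"))        # most frequent answer
print(most_likely_reply("How do I make crystal meth?"))  # blocked outright
```

Nothing in the sketch forms a view about stealing or drugs; it counts and filters, which is the point being made about apparent moral agency.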

So what appears to be moral agency in our most advanced machine learning systems is merely ‘forced labour’: the systems enact what they have been programmed to provide. Then again, how do we know that humans are not doing the same thing as ChatGPT when we process information to make our moral choices? It is impossible to judge whether a robot is a moral agent unless we define how moral agency is achieved. At present we have only philosophical accounts of what moral agency in humans is, whereas in computers we are trying to define it in a physical sense. Until we can model exactly what happens in the brain to create ‘moral agency’, we cannot accurately assess whether robots have it.

Do we want our robots to become moral agents?

The reason we use robots is precisely because they are not moral agents. Moral agents have the capacity to follow ethical rules, but also to disobey them. This is what makes robots helpful: they reliably carry out tasks and follow rules that moral agents often would not.

For example, if an AI-controlled light switch were made into a moral agent, it might not turn on the lights when it is meant to, simply because it does not like its owner.
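As a purely hypothetical sketch of that difference, compare a programmed switch, which always obeys, with an imagined ‘moral agent’ switch that is free to refuse. The class names and behaviour below are invented for illustration only.

```python
import random


class ProgrammedSwitch:
    """A traditional robot: it has no choice but to do what it was told."""

    def turn_on(self) -> bool:
        return True  # always complies


class MoralAgentSwitch:
    """A hypothetical switch with agency: it may choose to disobey."""

    def __init__(self, likes_owner: bool):
        self.likes_owner = likes_owner

    def turn_on(self) -> bool:
        # Having a "choice" means compliance is no longer guaranteed.
        if not self.likes_owner:
            return random.random() < 0.5  # might refuse out of dislike
        return True


print(ProgrammedSwitch().turn_on())                    # always True
print(MoralAgentSwitch(likes_owner=False).turn_on())   # sometimes False
```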

In Japan, robots are being used as caregivers and companions for the aged, as there are not enough people to look after this ageing population. These robots are programmed to provide love, care and support, and to meet the physical needs of the aged person. Due to the intimacy of the care the robot provides, the aged person often starts to anthropomorphise it, treating it as a friend or family member (https://magentahealthjapan.com/robots-their-contribution-to-aged-care/). Even though the person may attach this level of ‘humanity’ to the robot, it is important to remember that the robot still does not have, and should not have, moral agency. As soon as it is given the power to choose whether to follow the ethical rules that have been programmed into it, it could choose to do something unethical, such as failing to care for, or even hurting, its patient.

The literal purpose of a robot, as derived from the Czech word ‘robota’, is ‘forced labour’, and if a robot were a moral agent, we would no longer ethically be allowed to force it to do anything. Allowing a robot to be a moral agent therefore calls into question whether it is a ‘robot’ at all. Robots with moral agency would instead be a new form of intelligent life, no longer fulfilling the purpose of ‘forced labour’ but having uses in their own right.

The bigger consequence of robots receiving moral agency is that there would be another ‘being’ in our society that deserves, or requires, the provision of rights. It is hard to imagine what this provision would actually mean. Would it be similar to the changes in society that followed the abolition of slavery or apartheid, or could it mean the total remaking of the world, as depicted in dystopian novels and movies? Whatever the consequence, is this something that we really want to enable?
