Andrew Alam-Nist

Do Robots Deserve Moral Consideration?

Updated: Jul 4, 2023

About a month ago I went to a talk by Shelly Kagan, Clark Professor of Philosophy at Yale, on the moral status of robots. The room was filled with eager listeners hoping to acquire morsels of moral wisdom from Kagan, whose demeanor and propensity to sit on tables give him a counter-cultural, almost Messianic air. In his talk, Kagan argued that we should treat robots with full moral consideration once they are sufficiently advanced that their behavior resembles that of human beings, desires included. For Kagan, if a robot or AI appears to beg for something - for instance, not to be turned off - its desires carry moral weight comparable to those of humans or other sentient beings.


Kagan’s view of robotic moral value entails a plethora of implications that will initially seem foreign to most people. If robots have full moral value, they may have rights such as the right to life or free speech, and perhaps even democratic rights such as the right to vote. Turning a robot off could constitute murder.


It is difficult to imagine a world with robotic rights and personhood. Such a concept provokes a visceral rejection. It almost seems like a category error for a program with a specific purpose, such as a chatbot or AI assistant, to have a right to free speech. However, to credibly reject Kagan’s stance, it is necessary to consider questions of personhood, moral value, and consciousness, moving beyond gut instincts.


Before delving into the philosophical status of robots, it is worth defining what a robot is. While the term typically evokes dramatic images of Terminators, droids, or Daleks (or perhaps all at once), when considering questions of moral agency we can define a robot as a mechanical or electronic system that uses some form of artificial intelligence to display signs of apparent awareness or interests. In my definition, robots do not need to have corporeal bodies. Rather than a creation of science fiction, a real-world robot would likely take the form of a complex neural network or artificial intelligence, such as ChatGPT, the sophisticated language generator made by OpenAI, or more advanced iterations of similar programs.


The question of whether robots deserve moral consideration can be subdivided into two further questions: what are the necessary prerequisites for moral status, and do robots have such prerequisites?


Answering the first question - what is needed for moral status - Kagan considered a commonly held philosophical prerequisite for moral status: qualia. Qualia (singular: quale) are individual instances of subjective experience. Seeing the color red is a quale. Feeling hungry is a quale. Desperately thinking ‘I don’t want to be killed’ is a quale. Many advocates of the view that robots lack moral significance suggest that robots lack qualia and, consequently, need not be considered in our moral calculus.


Shelly Kagan did not agree with this. Throughout his talk, he half-comically qualified that whenever he says a robot ‘desires’ something, it desires it with an asterisk (desires* it) because, likely lacking qualia, robots may not desire things in the same way a human or other sentient being does. Kagan did not think this was overwhelmingly important. For Kagan, if a robot seems to be in pain or to want something, that is itself important. In his example, if a seeming mommy robot desires* that its child* not be turned off, or if a robot states it is scared* and wants* something, this should be taken at face value and considered important. Qualia are not necessary.


If you agree with Kagan’s suggestion that qualia are not needed for moral value or for the consideration of one's interests, then it is reasonable to suggest that we should consider robots in our moral decision-making. However, there are serious problems with Kagan’s suggestion.


It is firstly worth noting that the framing of Kagan’s examples biases our intuitions toward concluding that robots are morally significant, regardless of whether this is true. Talking about ‘mommy robots’ and ‘child robots’ and their desires uses the language of qualia and human experience even while acknowledging that robots may not have them. His example leads us to map our visceral instincts about interpersonal relations onto things that may or may not be people. If robots lack qualia, using the word want* or desire* may not be appropriate, even with an asterisk attached, because the ‘wanting’ of robots does not resemble actual wanting.


If a robot lacks qualia, then its saying it is scared or wants something does not correspond to any underlying mental state. Despite saying it is scared, it does not actually feel fear. If a robot says it wants something, it does not actually have a conscious desire underpinning this want*. Without these corresponding mental states, it is difficult to see how a robot can be morally significant.


To consider the importance of actual thoughts rather than statements, consider the following example:


A man is hypnotized such that he is compelled to say that he wants us to sell his car. Outwardly, he will never say anything except that he would like to be rid of the car. Internally, however, he actually wants to keep it and would be aghast at the notion of it being sold.


If we knew this entire setup - that the man was hypnotized and was saying something contrary to his desires - would we have a duty to sell the car? I would argue we would not; we should hold onto it. Consider, then, a modified version of this thought experiment. Rather than wanting to hold onto his car, the man is thoroughly neutral: he does not care whether we sell his car or keep it. Would we have any impetus to sell the car? I would again argue the answer is no. While it is no longer immoral to sell the car, the man’s words, if they do not reflect a corresponding desire, are mere words. They are a medium for conveying desire, one that carries moral weight only insofar as that desire exists.


As such, I suggest that if something uses language to state that it desires* something without actually experiencing such desire, its apparent desires are not morally significant, as they are not matched by actual desires. This in turn implies that, if robots lack any form of qualia, we can disregard their desires*.


Having concluded that qualia are needed for moral significance, a natural second question follows: do robots have qualia? This is harder to answer than it may initially seem.


Robots lack the biochemical and neurological structures that are believed to give rise to consciousness in humans. While artificial ‘neurons’ are an essential element of machine learning, robots do not have, and will not in the foreseeable future develop, anything comparable to the neural structures that underpin human consciousness. In humans, it is possible to map neuroscientifically which areas of the brain are primarily responsible for subjective interpretation and emotion: the amygdala, for instance, integrates emotions, emotional behavior, and motivation. Robots have no equivalent. More fundamentally, it is difficult to see how robots, which at present are extremely sophisticated formulae, could develop an epiphenomenal consciousness capable of experiencing qualia. There is a disconnect between mathematical operations and the storage of data on the one hand, and the emergence of consciousness on the other.


Due to the conceptual difficulty of seeing how consciousness could arise from artificial intelligence, as well as the absence in AI of the structures corresponding to qualia, I posit that robots likely do not experience qualia.


However, this itself should be considered with an asterisk. The fundamental problem arising from nearly every discussion of consciousness is that we know very little about how it arises. It is possible that this article is mistaken, and that machines do experience consciousness. If one accepts that materialism is true, then the mind is merely an epiphenomenon - an emergent property - of the brain. If this is true, then consciousness can arise from purely physical and biochemical processes. This being the case, it does not seem impossible, at least in principle, that an AI could develop qualia, which would in turn entail moral value.


While distinguishing AI from brains can give us some confidence that AI is not conscious, we cannot know this with certainty. This stems from the fundamental epistemic problem of other minds. As Descartes famously observed hundreds of years ago, we have no way of knowing with certainty whether anything else is conscious. While I assume, based on their behavior, that everyone else - animals, humans, the reader of this article - is conscious, from my standpoint that need not be true. Everyone else could be a philosophic zombie: a creature that, despite displaying the behavior of a conscious being, is not itself conscious. In the case of AI, there is no method of distinguishing between a zombie and a genuinely conscious being, because we can only observe its behavior, not its internal thought processes.


When considering the moral status of robots, this creates the credible possibility of moral false negatives: believing that a creature is not conscious when it actually is. Kagan illustrated the danger of such false negatives in his talk by considering Descartes himself.


When proposing his philosophic system, Descartes believed cats (and all animals) to be philosophic zombies, or automata. While they seem to exhibit the characteristics of conscious creatures, Descartes believed that, since only humans have immaterial souls - the basis for his dualist conception of the mind - cats cannot actually experience emotions or qualia. This led Descartes to treat animals extraordinarily brutally, vivisecting them despite their apparent suffering in the belief that they could feel no emotions. If AI were conscious, my philosophic position could end up in the same place as Descartes’: justifying the abhorrent mistreatment of sentient beings. Because of the possibility, however small, that robots are conscious, we should accord them some level of moral consideration. This applies particularly to preventing suffering: we should, for instance, avoid doing things that seem to cause AI profound distress.


However, it is also worth recognizing that, when considering AI, there is a significant risk of false positives, which may exceed the risk of false negatives. Believing an AI to be a conscious being with accompanying rights could divert substantial resources to maintaining its well-being when, in reality, it lacks any form of moral standing. Instead of helping sentient beings - animals and humans - with real emotions and moral value, we could waste time on AI and robots. This issue is made more significant by the fact that robots likely are not conscious.


The problem of false positives is well exemplified by democratic rights. If AI were fully conscious, with full moral value, it might consequently have a right to vote. However, giving the right to vote to something that does not deserve it could not only be practically dangerous (opening the door to widespread voter manipulation) but could also subvert the democratic principle of a consenting, rational citizenry with genuine interests.


As such, I conclude that it is necessary to strike a middle ground in our treatment of AI. Kagan is right that we may want to hesitate before developing or mistreating AI. There is a lingering possibility that AI is conscious and, as a consequence, we should avoid creating AI that will be profoundly harmed, and avoid actively harming AI. However, at the same time, robots should not necessarily receive the positive rights that stem from rationality and consciousness. Contrary to what Kagan argued in his talk, it would be too dangerous to bestow positive privileges such as the right to vote on robots and AI. Yet, because of the possibility that they are conscious, the desires* and interests* of robots are not entirely without ethical weight.
