A Bot’s Life: Essay

Should robots and AI have the same rights that people have? Year 12's Em recently considered this question in a competition essay for which she was shortlisted for a national prize.

“Should robots have rights? Why or why not?”

The relationship between robots and humanity has fascinated scientists and the media alike, inspiring films and books that explore both the dangers of artificial intelligence (AI) and its potential benefits to humanity. Human rights are, by definition, for humans. The issue is therefore whether robots qualify as human and whether they deserve any form of rights. This links to the question of what it is to be human, one long debated by both ancient and modern philosophers, and to whether humanity, or even personhood, is something that can be achieved or awarded, or something that is intrinsically human.

The technology of AI has advanced to the point that humanoid robots have been built, such as ‘Sophia’, developed by Hanson Robotics in 2016, which can speak and respond to human actions. She is the first robot to be granted citizenship of a country and represents a future in which AI could become nearly indistinguishable from humans if the science continues to develop [1]. While Sophia is incapable of the same advanced level of reason and thought as humans, it is conceivable that robots could develop autonomy and the capacity to judge and think for themselves. This capacity to think could suggest a very human quality within robots, and if they are to be considered human, it follows that they deserve rights. Ludwig Wittgenstein (1889–1951), in his Tractatus Logico-Philosophicus (1921), suggested that a core element of humanity is the ability to think consciously. He wrote that “the world is the totality of facts, not things”, and that to be human is to think. Applying this view to a robot that has developed to the point of conscious thought, it could be argued that such robots become so closely interlinked with humans that they must deserve rights.

Most philosophical teachings would argue that robots and AI, in their current state, do not deserve human rights. This opinion is generally shared: in a survey of 50 participants that I conducted, 48%, when asked to what extent robots are eligible for rights in the modern era, agreed that robots in their current form deserve some form of rights, but not to the same level as humans. Several philosophers’ teachings would agree that robots do not deserve human rights. On the concept of human nature, Karl Marx argued that humans are shaped by culture, heritage and history [2]. Sophia, by contrast, was created fully formed, with neither heritage nor history, and without the period of evolution and development that a human goes through. Marx argued that man can be defined psychologically, as well as biologically and anatomically. The way humans view concepts such as morality and social constructs is contingent on history, and human nature therefore shares this contingency. This argues against the idea that human nature is shaped by intellect and reason, and could imply that robots will never be equivalent to humans, because they do not experience the same growth and evolution and do not belong to the complex tapestry of human history. As a result, a robot cannot be entitled to the same rights as humans, though this does not mean that it is entitled to no rights at all.

David Hume, an 18th-century Scottish philosopher, held highly influential views on empiricism and scepticism, and applied these to human nature. As an empiricist, his views were grounded in sense experience, and he argued that humans are sensory creatures. Hume identified three mental processes that are an inherent part of humanity: resemblance, contiguity in time or place, and cause and effect. He believed that all human ideas begin as sense impressions, an idea that is clear when observed in human life: it can be argued that it is impossible to have an idea of something of which we have no sense experience. Hume believed these ‘impressions’ were arranged through the three mental processes outlined above [3]. He also believed that a key element of humanity is to seek the truth, and that to be human is to experience important and revelatory moments of realisation [4]. Many would argue that robots cannot have such moments of realisation and that, because they do not have the same level of sense experience, they do not qualify as human. Robots are not thought of as beings that seek truth and discover things, but rather as machines with pre-installed knowledge and data; this would imply that they cannot have these moments of realisation and therefore lack humanity under Hume’s view. However, Hume’s theory could be considered flawed: if he relies on sensory experience to define humanity, a human deprived of all their senses would, by the same logic, be inhuman. While Hume’s theory could be seen to deny robots rights, a conclusion that may be widely supported in the current era, it could also be seen to deny rights to some humans in the process.

The necessity of human rights is widely accepted: the Human Rights Act 1998 allows people to defend themselves in court when their rights are abused or breached, and the duty to uphold these rights could be viewed as a fundamental responsibility of any governing body. Rights are a tool to stop the exploitation of humans, but in a future where robots are exploited to a similar extent, many may argue that it is necessary to grant at least some rights to robots. An article in Forbes written by Stephen Diorio in May 2020 [5] argues for the benefit of AI in the growth and development of businesses and the economy. This paints a positive and optimistic future for the human race, and quotes Professor Kartik Hosanagar as saying that AI should be viewed “as a tool, not a strategic goal”. Yet the idea of AI as a “tool” echoes the dangers described in popular culture and the media, which often explore machines that have been treated as tools and question the ethics of doing so. The 1982 science-fiction film ‘Blade Runner’ is one example, telling the story of humanoid robots who have been exploited. Treating something as a “tool” also raises issues seen in modern society and in past atrocities, where ‘undesirable’ groups have been exploited and abused for the benefit of those in power. Human civilisation has repeatedly been built on the demonisation of ‘lesser’ groups, from the Atlantic slave trade, beginning in the 16th century, to children working in factories during the Industrial Revolution. Without rights, robots risk becoming a modern form of these oppressed groups, brutalising human society and making it eerily comfortable with using others for its own gain.

As illustrated above, the modern world offers a plethora of examples where rights are unjustly ignored and abused to serve those in power. Madeline Gannon, a researcher specialising in robotics, argued, when discussing the highly controversial decision to award Sophia citizenship in Saudi Arabia [6], that “a conversation about robot rights in Saudi Arabia is only a distraction from a more uncomfortable conversation about human rights” [7]. When asking whether robots deserve rights, it is imperative to consider the current state of human rights in the modern world. In the survey I conducted, 44% of respondents likewise believed that the question of robot rights is not currently a relevant topic. Rights such as freedom of religious belief, the right to education and the right to food and a home are constantly ignored. For example, the current re-education camps in China claim to ‘re-educate’ Uighurs and restrict their freedom to follow a religion [8], and in many of the world’s least developed countries children are denied an education, as well as access to food and shelter. It could be argued that robot rights cannot even be considered a realistic or valid question while people who are undeniably human are ignored and suffer in the absence of proper protections and rights.

Another key philosophical view relies on the importance of intelligence and intellect. The highly influential philosopher Immanuel Kant believed that “there is nothing higher than reason”. The importance of human reason is emphasised in the teachings of numerous philosophers, including Aristotle and St Thomas Aquinas, and more recently philosophers and scientists have extended these ideas to link intelligence to robots and machines. The Turing test, devised by the mathematician Alan Turing in 1950, was a way to determine whether a computer can ‘think’: a person converses, through a computer, with either a human or a machine, and if the two are indistinguishable, the machine is deemed ‘intelligent’ [9]. If intelligence is a way to distinguish humanity, then surely if a robot were to evolve into a highly intelligent and reasoned creation, it would be entitled to some of the rights that humans hold, such as the right to life. However, it could be argued that intelligence is not a reliable way to order society, nor does the Turing test succeed in actually proving the presence of intelligence. Ned Block criticised the test, claiming that appearing human in a conversation does not prove intelligence, since a machine could simply memorise responses to the finite number of possible exchanges. The test gives no way to tell whether the machine thinks for itself, and only that would count as intelligence [10].

In conclusion, it is generally agreed that robots and AI in their current state cannot be eligible for rights, because they neither sense the world around them nor think or reason for themselves. They are not emotional, sentient beings in the way animals and humans are, which supports the argument that robots should not have rights in the modern world. However, the question may well become relevant in the future, and it may then be concluded that robots are entitled to some level of rights, especially if they develop autonomy.


[1] https://www.hansonrobotics.com/sophia/

[2] https://www.marxists.org/archive/fromm/works/1961/man/ch04.htm

[3] https://www.britannica.com/topic/epistemology/David-Hume

[4] https://ideapod.com/what-does-it-mean-to-be-human-famous-philosophers-answer/

[5] https://www.forbes.com/sites/forbesinsights/2020/05/08/realizing-the-growth-potential-of-ai/

[6] https://www.dw.com/en/saudi-arabia-grants-citizenship-to-robot-sophia/a-41150856

[7] https://www.discovermagazine.com/technology/do-robots-deserve-human-rights

[8] https://www.bbc.co.uk/news/resources/idt-sh/China_hidden_camps

[9] https://www.britannica.com/technology/Turing-test

[10] http://www.nyu.edu/gsas/dept/philo/faculty/block/papers/Psychologism.htm