A panel of experts in the ethics of science and technology has started exploring the possibility that robots could become “moral machines” with potential legal rights if they develop the ability to feel emotions and distinguish between right and wrong.
“Depending on future advances in this research area, one should not exclude the possibility of future robots’ sentience, emotions and, accordingly, moral status,” a working group on emerging technologies of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), a scientific advisory body at UNESCO, said in a preliminary draft report released this month.
Since the first industrial robots were used in car manufacturing in the 1950s, they have become a fact of modern life. Robots are used in factories, war zones, medicine, elderly care and treating children with autism. Popularized by science fiction novels, films and TV, from Star Wars to The Terminator, robots are increasingly visible. Experts are even speculating about the possibility that humans could fall in love with or have sex with robots.
The “Preliminary Draft Report of COMEST on Robotics Ethics” examines ethical issues related to the use of autonomous robots and how humans interact with them. The rapid development of highly intelligent autonomous robots is likely to challenge our current classification of beings according to their moral status in the same way as, or perhaps even more profoundly than, the animal rights movement did, the report said.
A robot’s behavior, even if it is highly complex, intelligent and autonomous, is determined by humans. However, as future robots become more sophisticated (perhaps to the point that they can learn from past experience and program themselves), the nature of their algorithms, the sets of precise instructions governing how a robot operates, is likely to become an issue worthy of “serious ethical attention and reflection,” the report said.
While most scholars working on “machine ethics” agree that robots are still far from being “ethical agents” like human beings, the report notes speculation that robots could in the future acquire human characteristics, such as a sense of humor.
The prevailing view on robots – thanks to science fiction – is that they are machines that look, think and behave like human beings. However, robots do not necessarily take human form. They can be smart machines doing routine, repetitive and hazardous mechanical tasks.
“Robots’ autonomy is likely to grow to the extent that their ethical regulation will become necessary, by programming them with ethical codes specifically designed to prevent their harmful behavior (e.g. endangering humans or the environment),” the report said.
Bearing in mind the complexity of contemporary robots, the question arises as to who should bear responsibility, ethically and legally, in cases where robots malfunction and harm human beings, according to the report. Robotics remains both ethically and legally under-regulated, probably because it is a relatively new and rapidly changing field of research whose impact on the real world is often difficult to anticipate.
“It is likely that malfunctioning of today’s sophisticated robots is capable of inflicting significant harm to a very large number of human beings (e.g. armed military robots or autonomous robotic cars going out of control),” the report said. “The question is, therefore, not only if roboticists ought to respect certain ethical norms, but whether certain ethical norms need to be programmed into the robots themselves.”
UNESCO plays a leading global role in promoting ethical science: science that shares the benefits of progress with all, protects the planet from ecological collapse and creates a solid basis for peaceful cooperation.