This blog post explores in depth whether we can entrust judgment and responsibility to human-like robots.
If robots with intelligence and morality close to those of humans are created, will they be able to take on not only simple labor but also the intellectual judgment that humans perform? You might think this is a topic reserved for science fiction movies and novels. Recently, however, the United States has actually deployed combat robots that look like primitive versions of the machines you would see in a movie like Star Wars. These robots have been active on the battlefield and have sparked ethical controversy over tasks such as distinguishing allies from enemies and killing enemies. The controversy begins with whether a robot can tell allies, enemies, and civilians apart, and extends to the ethics of killing people. You might dismiss this as a special circumstance of war, but if the underlying technology spreads into everyday life, the “self-judging robot” we have only seen in movies will become a reality.
My answer to this question is that we cannot leave it to robots. No matter how far robots develop, people will not be able to hand everything over to them. Those who disagree with me argue that the information processing capabilities of the robots we already use surpass our own, and that because human thought is ultimately made up of electrical signals in the brain, robots can replace us. I still think robots cannot be entrusted with sophisticated intellectual judgment, and I will present three main reasons: responsibility, value judgment, and creativity.
Before developing the argument, we need a few assumptions to keep the discussion precise. The first assumption is that the robots under discussion have intelligence and morality close to those of humans. Even now, robots outperform us in raw information processing, but the intelligence we mean here includes not only information processing but also the morality, emotions, and situational awareness humans rely on when they judge. In other words, we start from a different assumption than most articles, which describe robots as emotionless machines: we assume the gap in empathy between robots and humans will almost disappear. The second assumption is that even if robots become closer to humans, they will remain products, subject to industrial regulations, performance standards, and related laws. With these assumptions in place, let us begin the discussion in earnest.
First, we cannot hand our work over to robots because it is unclear who is responsible when a robot makes a mistake. Just as our computers, smartphones, and home appliances malfunction, robots will malfunction too. When today’s machines fail, the result is usually a small delay or inconvenience. But if robots replace human judgment, they will be making important decisions, and the impact of a malfunction will be far greater. There has in fact been a case in the United States where an error in a power plant system caused a power outage across an entire state. That failure came from a comparatively simple system, yet far more sophisticated robots cannot be assumed immune to malfunction. At present, robots are rarely used for such critical decisions, but as they develop and take on more important tasks, the consequences of failure will grow accordingly.
But if a robot makes a mistake, can it be punished? When people make mistakes, they suffer damage to their reputation and property, so they try to avoid mistakes and become more cautious when making important decisions. Even with such efforts, they are held accountable when mistakes occur. A robot, however, remains a product even if it has human-level intelligence; it has no independent standing of its own. Since a robot cannot own money or hold a reputation, it can neither pay compensation nor be meaningfully punished.
So who is responsible when a robot malfunctions? The responsibility might fall on the manager, the creator, the owner, or the user, yet none of them is directly at fault, and in some cases the victim may even end up bearing the loss alone. This is why we cannot leave decisions entirely to robots. Robots can propose solutions and process information quickly, but the final decision must ultimately be made by humans.
Of course, some argue that as technology advances, robots will be able to feel pain like humans and can therefore be punished. When a person harms others through their own fault, they feel remorse and face social punishment. But even if a robot feels remorse, it is still a product and cannot compensate anyone or take responsibility on its own. One could imagine using physical pain as punishment, the way human societies did before the modern era, but inflicting physical harm as punishment is barbaric even if robots feel pain exactly as humans do. It is also questionable what meaning there would be in executing a robot.
There is also an argument that if legal regulations are created in advance, responsibility for a malfunctioning robot can be clearly assigned. Just as people are held responsible for their mistakes under the law, the argument goes, a law defined in advance could make someone take responsibility for a robot. However, this is unrealistic for two main reasons. First, even with a legal basis, the question of interpretation remains. As frequent disputes over guilt and innocence show, the law is not as clear-cut as one might think. Provisions conflict, many factors must be weighed, and even when the same provisions are applied, judgments can differ. The law is not a perfect tool that solves every problem; it merely provides a basis.
Second, it may be practically impossible to hold anyone legally accountable. Suppose the creators, owners, and users are the only candidates. The creator cannot reasonably be held responsible for a robot’s malfunctions forever: for a defect in the early stages after launch the manufacturer may be liable, but over time a product cannot be kept in its original state. Just as the warranty period for the laptops and smartphones we use is one to two years, robots will also have a warranty period, after which it will be difficult to hold the manufacturer accountable. Legal standards could, of course, be built around such warranty periods.
If so, responsibility will mainly fall on the owner or the user. The problem is that as robots replace human judgment, the scale of damage from a malfunction also grows. As the power plant system error discussed in Moral Machines: Teaching Robots Right from Wrong shows, the damage may not be limited to an individual or a small group. The burden may be too heavy for an individual or even a company to bear, and since the robot itself cannot compensate anyone, the very idea of holding it accountable is meaningless. Moreover, if the owner is a country rather than an individual or an organization, we end up with the irony of citizens who suffered the damage being compensated out of their own taxes.
As mentioned earlier, holding the user accountable can also be unfair. If the victim and the user are the same person, the user is being made to answer for their own injury. The same goes for the owner. Because the robot has intelligence and morality close to a human’s, it would have been left to judge and operate on its own; the owner merely runs it. Holding the owner responsible for mistakes the robot made on its own would be unfair. Assigning liability by law does not solve the problem.
Now for the second reason robots cannot be entrusted with decisions on our behalf: their value judgments cannot be trusted. This may seem to contradict the premise set out above, but the question here is whether, even if robots become scientifically capable of value judgments, those judgments can be socially accepted. Does having intelligence and morality close to a human’s guarantee desirable, harmless decisions? Looking at our own society, probably not. Most people know right from wrong, yet sometimes act wrongly, depending on the situation and their individual values. In the same circumstances, some people commit crimes and others do not. The difference lies in human will and value judgment.
In other words, robots will make their own judgments just as humans do, and the results will be hard to predict. Like V.I.K.I. in the movie I, Robot, a robot might decide to harm humans in order to restrain humanity’s destructive tendencies. Because of this unpredictability, robots cannot be trusted to do all of our work even if their intelligence and morality approach ours. Some may counter that robots will cause no harm because they only follow orders. The problem is that we cannot always be sure what the right order is. Utilitarianism and Kant’s deontology sometimes reach conflicting moral conclusions; on the question of lying for a good cause, for instance, the two theories take opposite positions.
Both utilitarianism and deontology provide standards for moral judgment, but the motivation behind a judgment is not always moral. For example, the United States used the label “axis of evil” to justify war in the Middle East, while the underlying motivation may have been the interests of its military-industrial complex. Similarly, even if a robot grounds its judgment in a moral theory, it may abuse that judgment just as humans do. Because the standards of moral judgment are not settled, we cannot hand decision-making over to robots; the final decision should remain with humans.
This argument invites the objection, “Don’t humans face exactly the same problem?” Humans also disagree with one another when making decisions, and one could even argue that a robot with superior information processing can decide better, or that if robots are capable of value judgments, several robots could deliberate and decide together. I oppose this line of argument for two reasons.
The first is that people will stop using unpredictable robots. Robots remain products, so they should do what we want them to do. A robot that acts in ways we did not expect and refuses to move as we intend cannot replace us. Think of a word processor that insists on changing a lowercase “a” to “A” whenever you type it at the start of a sentence: it is merely irritating. A robot that starts making its own value judgments is likely to cause inconvenience rather than convenience.
The second is that not every decision is made by a group. Where multiple robots can deliberate together, fine; but where resources do not allow it, a single robot will decide alone. If that robot decides wrongly, the damage could be significant, and, as with the clash between utilitarianism and deontology above, there may be no way to reconcile conflicting moral standards.
The third reason robots cannot be entrusted with human tasks is that they cannot do creative work. Some argue that human creativity is ultimately built from experience, and that robots, which can accumulate far more experience, should therefore be creative too. But what robots produce is not the same as generating new ideas. A chess program beats the champion not because it invents a new strategy, but because it can search an enormous number of possible continuations. A robot can remember more examples and solve problems statistically, yet this only sharpens the difference between humans and robots: statistical judgment can overlook small changes and extremely rare cases, and humans may be better at noticing exactly those unusual cases.
Even when using inductive reasoning, one of the standard methods of current scientific research, robots may underestimate the probability of rare events, which limits their ability to form hypotheses. Among scientific theories are innovative ideas that overturn the prevailing consensus, and such ideas rarely arise from statistics or existing data alone. Robots also have limits to their processing capacity; no single robot can conduct all the research in the world. Deciding which fields to study and proposing new directions will still fall to humans.
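To make the point about rare cases concrete, here is a minimal sketch in Python (the numbers and the “judge” are invented for illustration, not taken from any real system) showing how a decision rule that simply follows the statistics can look highly accurate while never catching the rare event that matters.

```python
import random

random.seed(0)

# 1,000 hypothetical situations; only about 1% are the rare, critical case.
situations = ["rare" if random.random() < 0.01 else "normal" for _ in range(1000)]

def statistical_judge(situation):
    # A judge that always bets on the most frequent outcome it has seen.
    return "normal"

correct = sum(1 for s in situations if statistical_judge(s) == s)
rare_total = sum(1 for s in situations if s == "rare")
rare_caught = sum(1 for s in situations
                  if s == "rare" and statistical_judge(s) == "rare")

print(f"Overall accuracy: {correct / len(situations):.1%}")  # roughly 99%
print(f"Rare cases caught: {rare_caught} of {rare_total}")   # none of them
```

The rule scores around 99% “correct” overall, which is exactly why a purely statistics-driven judge can seem trustworthy while missing the one unusual case a human might have flagged.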
So far we have discussed three reasons why robots cannot be entrusted with human tasks, along with counterarguments to each. Of course, robots with intelligence and morality close to a human’s have not yet been created, and it is uncertain whether they ever will be. But science and technology have often outpaced ethical standards: the problems of the nuclear bomb were raised only after it was built. The impact of robots that come close to humans will be enormous, and by then it may be too late to discuss it. Such discussions should begin now.