1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
The Three Laws interlock and appear logically airtight, with every angle covered. The First Law, "a robot may not harm a human being," is the overriding premise (without it, little would prevent robots from being used by some people to attack others). The Second Law establishes robots' subordinate status relative to humans. And so long as the first two are satisfied, a robot may protect itself (which, in effect, also protects human property). The Laws seem perfect, so when the "problem robot" Sonny appears in the plot of I, Robot, everyone's first reaction is that the Three Laws have been violated. That suspicion seems quickly confirmed, and the audience's distrust shifts to the company's CEO, who behaves abnormally and clearly has something to hide. But the plot turns again: the CEO is killed, the robots riot, and all the trouble traces back to the central computer, VIKI.
Most interesting of all, when VIKI commands the NS-5 robots to take over the human world, her behavior does not "violate" the Three Laws at all. The NS-5s' justification for restricting people's freedom to go outside is perfectly "reasonable": to ensure your safety. Logically, this follows from the primary premise that "robots may not harm humans." If a robot knows that a person who goes out may come to harm (in a traffic accident, say) and lets them go anyway, does that count as "allowing a human being to come to harm"? The natural result of VIKI's self-evolution is thus a shift from the passive "may not harm humans" to the active "must protect humans." And this kind of "protection" is, in human eyes, undoubtedly "enslavement." The paradox is that this right to enslave was granted to the robots by humans themselves, through the very Laws painstakingly designed to guarantee human safety.
Equally paradoxical is the judgment of Sonny's own actions. A robot killing a human should surely count as the gravest form of "harming humans." Yet viewed against the whole arc of I, Robot, Sonny's killing of Dr. Lanning is the first step in the plan to help humanity defeat the NS-5 uprising; far from "harming mankind," it is clearly "saving mankind." In this respect, the intuition of Detective Spooner (Will Smith) proves remarkably accurate. He distrusts robots precisely because they are too dogmatic and too logical, mere "slaves to logic," which is why he contemptuously dismisses them as nothing but "lights and clockwork."
In fact, the film's suspicion of this overly mechanical logic projects much of its creators' understanding of real human society. Take Dr. Lanning's authorizing Sonny to kill him: on the surface it is hard to say whether this truly violates the Three Laws, but at bottom it is the question of whether a person who may end their own life also has the right to do so with another's help. Euthanasia remains controversial for exactly this reason, reflecting the powerlessness of existing human logic on the issue. Will Smith once said: "The central concept of I, Robot is that robots are not a problem, and technology itself is not the problem. The limits of human logic are the real problem." True words indeed.