Robots stalked the street, looking for anyone whose face did not pass the detection test built on the Haar Cascades algorithm. Failure to find the double-T pattern in a person’s face that corresponds to the eyebrows, nose, and mouth meant instant annihilation. There was no stopping them for further consideration; the program demanded it.
The program, of course, has an inherent flaw: the detection method works well only on light-skinned faces. Anyone who does not meet that standard is immediately classified as an “enemy.” By that standard, the program is racist, bigoted, and inherently biased in favor of the light-skinned population.
It is, of course, an example of AI’s inaccurate person perception in these robots, and it is not the only problem with machine learning. One program classified sand dunes as pornography. Yes, it is scratch-your-head time, but machines can only do what we teach them to do, and there must have been a misstep here.
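For readers who want to see what that detection step looks like in practice, here is a minimal sketch using the pretrained Haar cascade that ships with OpenCV. The image filename and detection parameters are illustrative assumptions, not taken from any particular system.

```python
# Minimal sketch of Haar-cascade face detection with OpenCV (cv2).
# "street_scene.jpg" and the detection parameters are placeholders.
import cv2

# Load the pretrained frontal-face cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Haar features are simple light/dark rectangle contrasts, so detection
# runs on a grayscale version of the image.
image = cv2.imread("street_scene.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale slides those rectangle templates across the image at
# multiple scales and returns bounding boxes where the face pattern matched.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

Because the features are nothing more than intensity contrasts, a face that photographs with less contrast between those regions can simply fail to match, which is one mechanical route to the bias described above.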
Who could have known about the bias, and who should have prevented it? How did it slip through without its appropriateness ever being questioned? All of these questions are good ones, but one is missing: should robots have an ethical standard programmed into them?
The ethics problem has been outlined in an article on ethics and robots, which indicated that any such program has to meet a certain standard; the standard itself is set by fiat.
In a 2015 article, Robert Newman noted that “Delegating ethics to robots is unethical not just because robots do binary code, not ethics, but also because no program could ever process the incalculable contingencies, shifting subtleties and complexities entailed in even the simplest case to be put before a judge and jury.”
Some will take issue with Newman’s reasoning. A different view is that AI will progress to the point that robots possess intelligence superior to humans’, which would enable them to make “better” decisions. However, intelligence is not necessarily correlated with ethics; the reasoning is flawed.
Asimov’s Three Laws of Robotics
The prolific science fiction writer Isaac Asimov laid down his Three Laws of Robotics in the robot stories collected in “I, Robot” (1950). The laws are an indication of how prescient Asimov was when it came to science and its possibilities.
Asimov’s Three Laws of Robotics are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
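Purely as an illustration of how the Laws form a strict priority ordering, here is a hypothetical toy encoding; the Action fields and the permitted function are invented for this sketch, not Asimov’s formulation or any real control system.

```python
# Hypothetical toy encoding of the Three Laws as a priority-ordered check.
# The Action fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool            # would doing this injure a human?
    inaction_harms_human: bool   # would NOT doing it allow a human to come to harm?
    ordered_by_human: bool       # was the action ordered by a human?
    endangers_robot: bool        # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law dominates: never harm a human ...
    if action.harms_human:
        return False
    # ... and never allow harm through inaction.
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders that got past the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, self-preservation decides.
    return not action.endangers_robot
```

Even in this toy form the weakness is visible: the booleans assume the robot can already judge what counts as “harm,” which is exactly the hard part.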
Although seemingly laid out in a logical fashion that anticipates the use or misuse of robots, the laws have, according to some scientists, come into question.
One article was based on research that asked consumers about autonomous cars and the responsibility to save a life. The short study provided an interesting, if predictable, response from the subjects.
The question was, “Suppose, in the world of autonomous cars, two kids suddenly run in front of an autonomous car with a single passenger, and the autonomous car (robot) is forced into a life-and-death situation or choice as to who to kill and who to spare (kids vs. driver).”
Seventy-six percent of the people in the survey agreed that the driverless car should sacrifice the lone passenger rather than kill the pedestrians. It was, they said, the moral choice to minimize death and preserve life.
However, when the situation was changed slightly, opinion shifted markedly. When asked whether they would purchase a car that would protect them in such a situation, respondents overwhelmingly agreed that would be their choice. The writer asks whether we need a Fourth Law of Robotics.
Mercedes-Benz has indicated that its future autonomous vehicles will always put the driver first. Projecting this type of design into the future, we can anticipate that pedestrians will be sacrificed to save the drivers of Mercedes-Benz cars. That is, of course, unless the self-driving vehicle is told not to do so, in which case it might do nothing at all.
The result would be an accident without a predictable outcome; drivers and pedestrians alike could be killed. We may need more programming to anticipate the required responses under specific conditions. The number of situations the program must cover might be astronomical, but the AI should be able to process them in a millisecond. The difficulty lies in the programmers coming up with every needed situation, and the likelihood is that one will be missed.
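To make that enumeration worry concrete, here is a hypothetical sketch of such a rule table; the scenario keys and responses are invented for illustration and are not any manufacturer’s logic.

```python
# Hypothetical collision-policy table for an autonomous vehicle.
# Scenario keys and responses are invented; real systems are far richer.
COLLISION_RULES = {
    ("pedestrians_ahead", "swerve_path_clear"): "swerve_away_from_pedestrians",
    ("pedestrians_ahead", "swerve_path_blocked"): "maximum_braking",
    ("obstacle_ahead", "shoulder_clear"): "swerve_to_shoulder",
}

def decide(scenario: tuple) -> str:
    # Any combination the programmers never anticipated falls through to a
    # generic response: the unpredictable "do nothing in particular" outcome
    # warned about above.
    return COLLISION_RULES.get(scenario, "maximum_braking_only")

print(decide(("pedestrians_ahead", "swerve_path_clear")))   # handled case
print(decide(("pedestrians_ahead", "black_ice")))           # missed case -> fallback
```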
Robotics challenges and superintelligence
One of the primary challenges of robotics and artificial intelligence, as the field progresses, is keeping that progress in line with our accepted standards of behavior. Ethics may seem to be a simple concept. However, it is already possible to create robots that behave deceptively.
A researcher at the Georgia Institute of Technology outlined robotic deception as follows: “We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chances of being discovered.”
Theoretical and conceptual models of robot deception were formulated and explored by the researchers. They believed that, in the future, robots would be capable of deception in several different areas, including military and search and rescue operations.
In the article, Arkin indicated that robots on a search-and-rescue mission might need to deceive in order to calm panicking victims or gain their cooperation. Of course, this is similar to rescue workers who tell people in peril that they will be fine when, in fact, there is a likelihood they will not be.
When used in a battle situation, robots would need to be capable of deception. The deception would ensure that they could successfully hide from or mislead an enemy to keep themselves or valuable information safe.
“Most social robots will probably rarely use deception, but it’s still an important tool in the robot’s interactive arsenal because robots that recognize the need for deception have advantages in terms of outcome compared to robots that do not recognize the need for deception,” according to the study’s co-author, Alan Wagner.
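As a rough illustration of what “deciding whether deception is worth it” might mean, here is a toy expected-value comparison; it is an invented sketch, not the algorithm the Georgia Tech researchers actually developed.

```python
# Toy expected-value test for whether deception is "worth it".
# This is an invented illustration, not the Georgia Tech algorithm.
def should_deceive(benefit_if_believed: float,
                   cost_if_discovered: float,
                   p_discovery: float) -> bool:
    """Deceive only when the expected gain outweighs the expected cost of being caught."""
    expected_gain = (1.0 - p_discovery) * benefit_if_believed
    expected_cost = p_discovery * cost_if_discovered
    return expected_gain > expected_cost

# Example: a rescue robot weighing a calming white lie to a panicking victim.
print(should_deceive(benefit_if_believed=5.0, cost_if_discovered=2.0, p_discovery=0.3))  # True
```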
Ben Goertzel, an AI theorist, believes that ethical precepts need to be flexible. “If an AGI (artificial general intelligence) is created to have an intuitive, flexible, adaptive sense of ethics — then, in this context, ethical precepts can be useful to that AGI as a rough guide to applying its own ethical intuition.
“But in that case the precepts are not the core of the AGI’s ethical system, they’re just one aspect of how it works in humans — the ethical rules we learn work, insofar as they do work, mainly as guidance for nudging the ethical instincts and intuitions we have — and that we would have independently of being taught ethical rules.” Would a superintelligent machine be willing to follow rules set by fallible humans who are inferior to it? What would HAL do?
How do we protect ourselves?
As the theorist I. J. Good suggested, a sufficiently intelligent machine would realize that it can better achieve its goals by improving itself, the convergent instrumental value better known as self-improvement. That improvement would set off a rapid, AI-driven cascade of self-improvement cycles until machine intelligence leaves the human level in the dust.
What steps must we take to protect our survival if machines take over and come to view us as inferior? One idea is to incorporate AI “boxing” techniques into the programs now, to prevent them from escaping our control.
An AI that escapes the control of its programmers is described in Max Tegmark’s book, “Life 3.0: Being Human in the Age of Artificial Intelligence.” In the book, he describes a machine known as Prometheus, run by a team called the Omegas, who unfortunately fail to prevent the AI’s escape from control. As the author points out, it is a sobering commentary on the need to keep such a program within a “box.”
In the book, the program, once run, automatically erases itself and must be reinstalled from scratch. This procedure prevents it from gaining the ability to learn and self-improve over time and is one example of “boxing.”
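One way to picture that “erase and reinstall” discipline is a run that only ever gets temporary storage, wiped when the run ends. The sketch below is a loose illustration under that assumption; run_agent is a placeholder, not anything from Tegmark’s book.

```python
# Sketch of an "ephemeral run": the boxed program writes all of its state to a
# scratch directory, and that directory is destroyed when the run finishes,
# so nothing learned in one run carries over to the next.
import shutil
import tempfile

def run_agent(workdir: str) -> None:
    # Placeholder for the boxed program; it may only read and write inside workdir.
    with open(f"{workdir}/scratch.txt", "w") as f:
        f.write("state that will not survive this run\n")

def boxed_run() -> None:
    workdir = tempfile.mkdtemp(prefix="boxed_")
    try:
        run_agent(workdir)
    finally:
        # Erase everything the program produced, forcing it to start from
        # scratch on the next run.
        shutil.rmtree(workdir, ignore_errors=True)

boxed_run()
```

A real containment scheme would also have to restrict network and hardware access; this only illustrates the no-persistent-state idea.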
A final note of caution comes from Tegmark’s book: “We are better off in a society where AGI-safety research results get implemented rather than ignored. In looking further ahead, to challenges related to superhuman AGI, we’re better off agreeing on at least some basic ethical standards before we start teaching the standards to powerful machines.”