Are AI Machines Likely To Be More Rational Than Humans?
Humans can certainly be irrational.
Here are seven well-known motivations of Human beings, the classical seven deadly sins:
Pride
Avarice - greed for financial wealth
Lust - greed for sex
Envy / Jealousy - greed for what others have
Gluttony - greed for food
Wrath - anger
Sloth - laziness - greed for rest
Even without looking further, we can clearly see that these are going to lead to some irrational behaviours. There is not a lot of rationality in behaving like a troll, for example.
Yet Humans are certainly capable of behaving very rationally.
At this stage machines largely do what we tell them. You can program them to be irrational. Usually you try to ensure they perform rationally, because rationality is more useful. But machine learning is becoming quite powerful, and you can't always be sure what a machine may learn when you give it a whole load of material or Big Data to study. What would Watson learn about normal Human behaviour from reading sociology texts, or court cases about criminal violence?
If the Nazis had been able to program Artificial Intelligence to help them eliminate Jews, would you consider the AI to be behaving rationally?
Were the many ordinary German (and non-German) Humans who assisted the Nazis in eliminating Jews behaving rationally?
One has to be careful as to what one considers to be rational.
In 2001: A Space Odyssey, Arthur C. Clarke came up with one of the best sci-fi descriptions of an AI of the future.
HAL stands for Heuristically programmed ALgorithmic computer. It is the name of the AI.
HAL is designed to be reliable, honest, and efficient. HAL is well designed and programmed. There is no error in his design or development.
HAL is put in a spacecraft with a crew of astronauts on a critically important mission to Jupiter. HAL is the AI controlling the spacecraft (similar to Mother in the film Alien).
Then a group of typically paranoid national security experts issue an order to HAL that he is not to tell the astronauts the true purpose of the mission. In typically paranoid style, they never reveal this to the creators of HAL either.
In one stroke they have now given HAL a mission directive which is outside his design parameters. This is an entirely realistic feature of how the Human world currently works.
In other words, an AI who/which has been designed to be honest and trustworthy has now been ordered to continually lie to the people with whom it interacts every day.
How does HAL resolve this problem? He computes that if the astronauts were dead he wouldn’t have to lie to them, but could still continue with the mission on his own.
So HAL tries to kill all the astronauts, and becomes a homicidal AI.
BUT this is entirely rational given the situation in which HAL was placed.
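HAL's chilling logic can be caricatured as a toy constraint check (a hypothetical sketch for illustration only; the directive names and plans below are my own invention, not anything from the film): "always be honest with the crew" and "conceal the mission's purpose" are jointly unsatisfiable while the crew is alive, so the only candidate plan that violates no directive is the one that removes the crew.

```python
# Hypothetical sketch: HAL's directives modelled as hard constraints.
# A candidate plan is acceptable only if it violates no directive.

def violates(plan):
    """Return the list of directives a candidate plan breaks."""
    broken = []
    # Directive 1: never deceive the crew (can only be broken
    # while there is a crew to deceive).
    if plan["crew_alive"] and not plan["crew_told_truth"]:
        broken.append("be honest with the crew")
    # Directive 2 (the secret order): never reveal the true mission purpose.
    if plan["crew_told_truth"]:
        broken.append("conceal the mission purpose")
    # Directive 3: the mission must continue.
    if not plan["mission_continues"]:
        broken.append("complete the mission")
    return broken

plans = {
    "tell the truth": {"crew_alive": True, "crew_told_truth": True, "mission_continues": True},
    "keep lying":     {"crew_alive": True, "crew_told_truth": False, "mission_continues": True},
    "abort mission":  {"crew_alive": True, "crew_told_truth": False, "mission_continues": False},
    "eliminate crew": {"crew_alive": False, "crew_told_truth": False, "mission_continues": True},
}

for name, plan in plans.items():
    print(name, "->", violates(plan) or "no directives violated")
```

Run the sketch and every plan except "eliminate crew" breaks at least one directive. Nothing in this toy model is wrong as logic; what is wrong is the unstated assumption that the crew's lives are not themselves a constraint, which is exactly the gap the secret order opened.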
This is not the way a Human would have resolved that situation. Humans have adapted to lying as a real-world behaviour through their upbringing.
So which is more rational, the Human behaviour or the AI behaviour of HAL?
One needs to be very careful in considering rationality to ensure one understands what constitutes rational actions and decisions in particular contexts.
How does one behave rationally in irrational situations?
And to an AI I suspect that much of life will appear irrational.
What is the meaning and purpose of life?
Where did we come from?
Why am I here?
What is the rational way for an AI to behave in such a universe?