Artificial intelligence is currently in the news. My PhD supervisor has even been on Newsnight. Yet it seems most people do not have any clear idea of what ‘artificial intelligence’ might be. I am fortunate to have been able to spend some time thinking about all this recently.
Firstly, let’s try to nail down ‘intelligence’ a little. If an artifact (a human-manufactured device) is able to respond appropriately given an input and a prior context, then we can choose to say it is acting intelligently. For example, your calculator intelligently adds up the numbers you give it. Does it ‘simulate’ adding them up, or does it add them up? Of course, it adds them up, and it can do a better job than you can! It does not ‘simulate’ intelligence in adding up numbers, it is intelligent in adding them up. Similarly, an auto-pilot does not simulate flying the aircraft, it is indeed flying it, unaided. I could equally give Google’s self-driving cars, or autonomous vacuum cleaners, as examples. So, based on my definition we can say that many commonplace artifacts and systems are already intelligent, and we already rely on them in everyday life.
I suspect however, you are thinking “yes, but what about human-level intelligence?”.
This needs some unpacking. Humans are incredibly poor at many things, for example anything numerical, anything that requires large amounts of information storage, anything that requires high speed, and interestingly anything that involves probability. Humans are really optimised by evolution for one particular set of high-level intelligent behaviours: social interaction. This we are good at, and it’s primarily why our species dominates the planet. So, that’s why maths is hard and Facebook is easy (for a computer maths is easy, and Facebook is hard). Get your kids to focus on maths, because being a Facebook expert is hardly a differentiator in a competitive world.
“Ah yes” you say, “but what I really mean is that people can make decisions and have free will”.
Ah ha! Now we are getting to what the real fuss is all about. People confuse intelligence with agency – the capacity of individuals to act independently and to make their own free choices. AI does indeed allow us the possibility of building agents that can act independently, and make decisions. We normally call these embodied agents ‘robots’. However, the extent of their agency is a matter of design, not a matter of chance. Thus it is possible to have a super-intelligent robot with almost no agency at all. There is no reason why such an artifact need have any ‘will’ of its own, other than the goals that we give it. I am personally not threatened by the idea of a machine that is ‘smarter’ than I am. What a useful tool!
Just because we humans have a complex soup of needs, desires, strengths, weaknesses and frailties, we should not transplant this idea of what it means to ‘be’ a ‘self’ to a robot. Humans have a strong tendency to anthropomorphise all animate life, and this even seems to extend to robots. We must fight this irrationality. It’s just a machine, no more.
“But wouldn’t a super-intelligence evolve to have its own goals? Wouldn’t it advance itself by taking all our resources, or see us as a threat and try to eliminate us?”
This all makes for good sci-fi movies, but I honestly think it’s just far-fetched; the product of an over-active imagination, and an under-developed practical familiarity with the technology. Why would an artificial intelligence have the capacity to create its own goals, unless we build it that way? Even if it did go awry, machines are much more brittle than life. Breaking a super-computer or a wandering robot is hardly a difficult thing to do.
The real threats today do not come from AI and autonomous robotics; they come from more prosaic problems such as the global human population explosion, a burgeoning population of the infirm elderly, climate change, mass species extinction, unstable states with nuclear weapons, and religious extremism. AI may help us to understand these intractable problems, and come up with better solutions than we could otherwise find.
AI is a powerful tool, no more. In the wrong hands it has the capacity for harm; in the right hands, for good. In this respect it is no different to any other powerful technology. We should embrace it, and at the same time carefully legislate to ensure it is used for the good of humanity.
Rob Wortham, October 2014
Acknowledgement: Some of these ideas and examples are taken from various AI textbooks and papers, and also from talks given by Joanna Bryson, Marvin Minsky and others. I guess we all borrow from one another’s thinking.