Woz thinks AI is out to get us. Elon Musk isn’t too sanguine about our future either; he likened AI to “summoning the demon.” And Bill Gates is with him: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” How about Stephen Hawking? He and fellow Brit Clive Sinclair (a true sage of the computing revolution; I had one of his computers long ago) think it’ll be “a challenge for us humans to survive.”
Hawking states the danger with less hysteria (demons, Elon?) but he is no less chilling: “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” Hawking warned for the second time in recent months. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
What do they know that we don’t? Is it really possible for:
a) a machine to think (or at least appear to think — at some point the difference is immaterial)
b) a machine to improve itself?
c) a machine to decide we are superfluous and wipe us out?
How real is the existential threat?
I can believe ‘a’ — Cortana and Siri appear to think. Their capabilities are simplistic, but it has begun — both evaluate data and make intelligent guesses about what you might want. (Dinnertime? Here are nearby restaurants. That is thinking.) In machine terms (and probably human terms; I’m a programmer, not a biologist), all ‘thinking’ amounts to:
- Remembering past actions
- Evaluating past outcomes (depends on #1, and some value system for evaluating)
- Applying rules to improve future outcomes (and rules can be developed given #1 & #2)
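The three steps above can be sketched in a few lines of code. Here is a minimal illustration of the loop — remember actions, evaluate outcomes, apply a rule to pick better actions next time. The actions and payoffs are invented for the example; this is not how Cortana or Siri actually work, just the shape of the idea.

```python
import random

class TinyLearner:
    """A toy agent that 'thinks' per the three steps above (illustrative only)."""

    def __init__(self, actions):
        self.actions = actions
        self.history = []                        # 1. remember past actions
        self.value = {a: 0.0 for a in actions}   # 2. evaluated outcome per action
        self.counts = {a: 0 for a in actions}

    def choose(self, explore=0.1):
        # 3. rule: usually take the best-scoring action, occasionally explore
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def record(self, action, reward):
        # update memory and the running average of outcomes for this action
        self.history.append((action, reward))
        self.counts[action] += 1
        self.value[action] += (reward - self.value[action]) / self.counts[action]

# Toy environment: suggesting lunch pays off more than the alternatives.
payoff = {"suggest_lunch": 1.0, "suggest_coffee": 0.2, "do_nothing": 0.0}
agent = TinyLearner(list(payoff))
for _ in range(500):
    a = agent.choose()
    agent.record(a, payoff[a])
# After enough trials, the agent's rule settles on the highest-payoff action.
```

That is the whole trick: memory plus evaluation plus a rule for acting on them. Nothing in it requires hardware we don’t already have.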
I have news for you non-technical folks: there is absolutely nothing that prevents current computing systems from doing all of the above. There is much work to be done to develop enough rules and matrices for interacting in a complex environment, but for simple tasks it can be done now. Check out Edwin Olson’s work with robots, and their win at MAGIC 2010: “the robots autonomously identify obstacles, plan paths, and detect objects of interest. Exceptions are reported back to ground control, which can result in either a new task assignment or human intervention.”
You aren’t worried about these little robots? Check out the advances in the Internet of Things (IoT). These inventors want to attach sensors to just about everything, all connected to the internet. Coffee pots, toasters, even mattresses! (You know, I do not want my mattress talking to the internet…)
So, what about ‘b’ — robots remaking themselves to be better? If they develop some level of self-awareness and understand their own design, why not? The software development field has been working toward computer-aided software engineering (CASE) for years. If machines are smart enough to write software, we may well give them the rules and desired outcomes and ask them to write themselves. Indeed, in my SciFi epic-in-progress, one of the AIs does exactly that — rewires itself, so to speak, after the protagonist leaves it in battle override. Once machines can think, what will prevent them from making decisions and taking actions which we do not agree with?
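A primitive version of “give the machine the desired outcomes and let it write the code” already exists as generate-and-test program synthesis. Here is a toy sketch — the template, the candidate operators, and the spec are all made up for illustration; real synthesis systems search far richer program spaces:

```python
# Toy generate-and-test "synthesis": the machine writes candidate code,
# checks each candidate against desired outcomes, and keeps the one that passes.
template = "def combine(a, b):\n    return a {op} b\n"
spec = [((2, 3), 5), ((10, -4), 6), ((0, 0), 0)]  # desired outcomes (the "rules")

best = None
for op in ("-", "*", "+"):              # candidate versions of its own code
    namespace = {}
    exec(template.format(op=op), namespace)   # the machine "writes" a program
    candidate = namespace["combine"]
    if all(candidate(*args) == want for args, want in spec):
        best = candidate                # keep the version that meets the spec
        break
# best now holds the synthesized function (the '+' variant satisfies the spec)
```

Scale the search space up from three operators to whole programs and you have a machine improving its own code against goals we gave it — which is exactly the scenario the question asks about.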
Obviously, we’ll have to write controls, limits in the machine’s logic, and constraints on its actions. Those rules and constraints will have to be very clever to outsmart a machine which can learn — and act — on its own. Just think about raising kids. Mom always told me, ‘don’t eat cake before dinnertime,’ etc., and I always found a way around her rules. I was sneaky! The problem with awareness is that with awareness comes will.
So, we’ll need some really smart people writing logic, rules, and techniques to control the intelligences we are rushing forward to develop. And they’d better hurry. Amazon already watches everything I buy, read, and watch, and adjusts its advertising to leverage this knowledge…just think about the power it will have if this Internet of Things gets rolling. It’ll know everything about me. Will the clever lads and lasses at Amazon use this information — along with all the sensor data from the IoT — to manipulate my decision-making? Well, yeah. Check out how marketers have been using behavioral research for years to do exactly that.
You wonder, who are the smart people who’ll save us from ourselves? Hawking (again, the most cogent of the bunch) writes in the Independent: ‘So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI.’ He names a few nonprofits working on this, and you can bet their funding is a tiny fraction of what Microsoft, Amazon, Google and the rest are putting into AI, the IoT, and behavioral analytics.
So, that leaves ‘c’ — if they have the motive, will our machines have the means to commit homicide?
Computer algorithms already control our financial markets and electric grid, and assist with air traffic control. In the dystopian future of Musk, Gates, and Woz, these tasks will be handed over to AIs, so our machines will have the capability for widespread havoc. Further, Elon Musk thinks self-driving cars are the way of the future, and guess who’ll be driving our cars? Machines — which, as I am sure you have noticed, are what Elon is afraid of. Ironic, much? Will he trust his own cars to drive him through LA?
How about drones? In the movie Terminator, Skynet (a self-aware network of machines) hunted down humans from the air. Not so different from the horror our drones cause people in Pakistan today. Good thing AI can’t fly, right? Oops. Can fly. (Apologies to Toy Story there.)
Obviously, we have some time before Skynet comes after us with lasers (and if it does, we are toast — with thermal imaging, there is no hiding. We can try, but they’ll find us.) Let’s hope we get our game on.