Google’s AI head says super-intelligent AI scare stories are stupid

‘I’m definitely not worried about the AI apocalypse,’ says Google’s John Giannandrea

The AI apocalypse: disconcerting to imagine, but fun to talk about.

Well, that’s if Silicon Valley’s leaders are anything to go by. Tesla’s Elon Musk has been banging the drum about the dangers of super-intelligent AI for years, while Facebook’s Mark Zuckerberg thinks such doomsday scenarios are overblown. Now Google’s AI chief John Giannandrea is getting in on the action, siding with the Zuck in recent comments made at the TechCrunch Disrupt conference in San Francisco.

“There’s a huge amount of unwarranted hype around AI right now,” said Giannandrea, according to a report from Bloomberg. “This leap into, ‘Somebody is going to produce a superhuman intelligence and then there’s going to be all these ethical issues’ is unwarranted and borderline irresponsible.”

This idea of “superhuman intelligence” is often a key theme in AI scare stories. The fear is that once artificially intelligent systems reach a certain level of complexity they’ll be uncontrollable. And these won’t just be really, really smart computers, say the worriers, they’ll be something on a whole other level: entities akin to alien consciousness, with unknowable intent and morality.

That’s the theory, anyway, and opinion among AI experts about whether this is likely to happen is, as you might expect, divided. Speak to most AI researchers working in a lab and they’ll tell you the programs they create are much dumber than you think, with a narrow intelligence that means they’re only good at very specific tasks. As Giannandrea described these AI systems at Disrupt: “They’re not nearly as general purpose as a 4-year-old child.” And even that’s selling 4-year-olds short.

But despite this focus on super-intelligent AI, we should remember that the technology poses much more realistic challenges. As a surveillance tool it benefits authoritarian governments; as a catalyst for job automation, it threatens economies; and as a method of waging war, it could lead to new and unexpected threats. All of these are problems caused by AI that could escalate and damage the world in unexpected ways.

They’re not super-intelligent, sure, but no one ever said the apocalypse had to be clever.

Source: The Verge