Artificial Intelligence as a Threat
Adapted from: Creativity and AI – The Rothschild Foundation Lecture (Dr. Demis Hassabis)
watch at: https://www.youtube.com/watch?v=d-bvsJWmqlc
Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence. The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.
Before we get into what could possibly go wrong, let me explain what AI is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent.
Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.
In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.
But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.
Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday scenarios. One envisions self-replicating nanobots, microscopic robots designed to make copies of themselves. In a positive scenario, they could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”
AI proponents argue that these things would never happen and that programmers are going to build safeguards. But let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?
I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said AI is “potentially more dangerous than nukes.” And Stephen Hawking, one of the smartest people on Earth, wrote that successful AI “would be the biggest event in human history. Unfortunately, it might also be the last.” There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.
Two main problems with AI lead people like Mr. Musk and Mr. Hawking to worry:
- The near-future fear is that we are starting to create machines that can make decisions like humans, but these machines don’t have morality and likely never will.
- The longer-term fear is that once we build systems as intelligent as humans, those machines will be able to build smarter machines, often referred to as superintelligence. Then things could spiral out of control, as the rate at which machines grow and improve would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.
What makes it harder to comprehend is that we don’t actually know what super-intelligent machines will look or act like. “Can a submarine swim? Yes, but not like a fish,” said James Barrat, author of the book “Our Final Invention.” “Does an airplane fly? Yes, but not like a bird. AI won’t be like us, but it will be the ultimate intellectual version of us.”
Perhaps the scariest scenario is how these technologies will be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.
Bonnie Docherty, a lecturer at Harvard and a researcher at Human Rights Watch, said that the race to build autonomous weapons with AI is reminiscent of the nuclear arms race, and that treaties should be put in place before we get to a point where machines kill people on the battlefield.
“If this type of technology is not stopped now, it will lead to an arms race,” she said. “If one state develops it, then others will too. And machines that lack morality and mortality should not be given power to kill.” How do we ensure that none of these doomsday scenarios comes to fruition?
Perhaps we can’t, entirely. But we can hinder some of the potential chaos by following the lead of Google. When Google acquired DeepMind, a neuroscience-inspired AI company based in London, the two companies put together an AI safety and ethics board that aims to ensure these technologies are developed safely.
Demis Hassabis, a co-founder of DeepMind, said that anyone building AI, including governments and companies, should “think about the ethical consequences of what they do way ahead of time.” You can watch the 71-minute video in which he discusses “Creativity and AI” at the link above.