Friday, April 19, 2019

Artificial Intelligence as a Threat


Adapted from: Creativity and AI – The Rothschild Foundation Lecture (Dr. Demis Hassabis)
Watch at: https://www.youtube.com/watch?v=d-bvsJWmqlc
Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I’ll tell you what scares me most: artificial intelligence. The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.
Before we get into what could possibly go wrong, let me explain what AI is. Actually, skip that. I’ll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her “I’m drunk.” Her answers are artificially intelligent.
Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.
But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to eradicate cancer, could conclude that the best way to obliterate cancer is to exterminate the humans who are genetically prone to the disease.
Nick Bostrom, author of the book “Superintelligence,” lays out a number of petrifying doomsday scenarios. One envisions self-replicating nanobots, microscopic robots designed to make copies of themselves. In a positive scenario, they could fight diseases in the human body or consume radioactive material on the planet. But, Mr. Bostrom says, a “person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth.”
AI proponents argue that these things would never happen and that programmers are going to build safeguards. But let’s be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?
I’m not alone in my fear. Silicon Valley’s resident futurist, Elon Musk, recently said AI is “potentially more dangerous than nukes.” And Stephen Hawking, one of the smartest people on earth, wrote that successful AI “would be the biggest event in human history. Unfortunately, it might also be the last.” There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.
Two main problems with AI lead people like Mr. Musk and Mr. Hawking to worry:
  • The near-future fear is that we are starting to create machines that can make decisions like humans, but these machines have no morality and likely never will.
  • The longer-term fear is that once we build systems as intelligent as humans, those machines will be able to build smarter machines, often referred to as superintelligence. Things could then spiral out of control, as the rate at which machines improve themselves would increase exponentially. We can’t build safeguards into something that we haven’t built ourselves.
“We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest,” said James Barrat, author of “Our Final Invention: AI and the End of the Human Era.” “So when there is something smarter than us on the planet, it will rule over us on the planet.”
What makes it harder to comprehend is that we don’t actually know what super-intelligent machines will look or act like. “Can a submarine swim? Yes, but not like a fish,” Mr. Barrat said. “Does an airplane fly? Yes, but not like a bird. AI won’t be like us, but it will be the ultimate intellectual version of us.”
Perhaps the scariest scenario is how these technologies will be used by the military. It’s not hard to imagine countries engaged in an arms race to build machines that can kill.
Bonnie Docherty, lecturer at Harvard and a researcher at Human Rights Watch, said that the race to build autonomous weapons with AI is reminiscent of the nuclear arms race and that treaties should be put in place before we get to a point where machines kill people on the battlefield.
“If this type of technology is not stopped now, it will lead to an arms race. If one state develops it, then others will too. And machines that lack morality and mortality should not be given power to kill.” How do we ensure that all these doomsday situations don’t come to fruition?
We cannot guarantee that they won’t, but we can hinder some of the potential chaos by following the lead of Google. When Google acquired DeepMind, a neuroscience-inspired AI company based in London, it put together an AI safety and ethics board that aims to ensure these technologies are developed safely.
Demis Hassabis, founder of DeepMind, said that anyone building AI, including governments and companies, should “think about the ethical consequences of what they do way ahead of time.” Watch a 71-minute video where he discusses “Creativity and AI”.

Saturday, April 6, 2019


Quick, Draw!
Have you ever wanted to play a fast game of Pictionary, but didn’t have friends? Thanks to Google, now there’s Quick, Draw!, an online game where you are prompted to draw something in 20 seconds. In each game, you get six prompts, which can be things like “pizza,” “foot,” “houseplant,” “motorbike,” and so on. It’s an addictive and fun way to join the AI bandwagon. Once you start drawing, the AI in the game takes over. It matches your drawing against thousands of drawings by others. If, within 20 seconds, you have drawn something that’s recognized (similar to those of thousands of others), your artwork has "passed".
Quick, Draw! uses a neural network to guess what you draw, and playing the game gives you an insight into how image recognition (a domain of AI) works. With every stroke of your mouse, the network refines its guess by recognizing patterns in previous players’ drawings. Of course, it doesn’t always work. But the more you play with it, the more it learns. The technology is similar to that of Google Translate, which can identify handwritten characters: the computer “looks” at a drawing and attempts to identify it.
The software doesn’t look just at what the player drew, but at how they drew it, i.e., which strokes they made first and which direction they drew them in. The robot revolution is here, it’s artistic, and it could use your help to understand the world around it. Visit https://quickdraw.withgoogle.com to find out for yourself.
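To make the "what and how" idea concrete, here is a minimal sketch (not Google's actual code; the function and data names are my own invention) of how a doodle might be represented so that stroke order and direction are preserved: an ordered list of strokes, each stroke an ordered list of (x, y) points.

```python
# A doodle as ordered strokes; each stroke is an ordered list of (x, y)
# points, so both stroke order and drawing direction are recoverable.

def stroke_direction(stroke):
    """Net horizontal direction of one stroke:
    +1 if drawn left-to-right, -1 if drawn right-to-left."""
    (x_start, _), (x_end, _) = stroke[0], stroke[-1]
    return 1 if x_end >= x_start else -1

# A hypothetical two-stroke doodle: the first stroke drawn left-to-right,
# the second drawn right-to-left.
doodle = [
    [(0, 0), (5, 1), (10, 2)],   # stroke 1
    [(10, 5), (4, 5), (0, 6)],   # stroke 2
]

directions = [stroke_direction(s) for s in doodle]
print(directions)  # [1, -1]
```

A real recognizer would feed sequences like these into a neural network; the point here is only that the input is a sequence of movements, not a finished bitmap.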
When I played the game, one of the things I was asked to draw was "animal migration". I was absolutely stumped. I wondered how I was supposed to draw a herd of migrating wildebeest in 20 seconds. Needless to say, I failed utterly. Shown alongside is the successful effort of another player.


"Quick, Draw!" is not the first AI experiment that Google has undertaken, but it's definitely one of the more fun projects. As you draw, the computer will guess out loud what it thinks you're drawing until it's sure: "Oh, I know, it's a duck!" Pretty cool, right? Somewhere behind the neural networks that we can't see in the back-end, the computer recalls all of the drawings that other people have submitted in the past and draws its conclusions based on that.
"Quick, Draw!" has become good at figuring out what you draw, even if though sketches will not be exactly what someone else has drawn. That is because lots of people have played it and the more people play, the better it gets. That's the magic of big data.
Need a more visual explanation? OK — let's say the game asks for a rabbit to be drawn. I draw a rabbit to the best of my ability, and the computer successfully guesses that I am, indeed, drawing a rabbit. You can see my (unfinished) rabbit.
Considering how bad my rabbit is, I'm impressed that the software could guess it right. I didn't get my bucket correct, even though my bucket is more bucket-like than my rabbit was rabbit-like. So, there must've been a disconnect between the strokes I made and what the computer expected.
When the game is over, the computer tells you what your drawing reminded it of. For instance, my rabbit drawing was similar to other people's drawings of a rhino and a duck. It's all about training the computer to compare and contrast different drawings and to recognize which features belong to which categories of objects. While it may be easy for us to know what a duck looks like, the computer needs to "learn" that for itself: it has to see many rabbit doodles before it starts to see the patterns that make a rabbit a rabbit.
Later, the game shows the rabbit drawings other people have submitted while playing. As you can see, most drawings of rabbits include big, floppy ears and a round body. So, while we may each have encountered different mailboxes and cookies, we have a similar idea of what each of those objects should look like. Again, the computer needs to see a lot of rabbit drawings before it can get good at guessing.
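The learn-by-seeing-many-examples idea can be sketched in a toy form. The real game uses a neural network; the version below is deliberately much cruder (a nearest-neighbour vote on a single made-up feature, stroke count, with invented example data), but it shows the same principle: the guess comes from comparing your doodle against labelled doodles seen before, and more examples mean better guesses.

```python
# Toy learning-by-example: guess a doodle's category by comparing one crude
# feature (how many strokes it took) against previously seen, labelled doodles.

from collections import Counter

# Hypothetical doodles already "seen": (category, number_of_strokes).
seen = [("rabbit", 5), ("rabbit", 6), ("duck", 3), ("duck", 2), ("rhino", 7)]

def guess(stroke_count, k=3):
    """Vote among the k seen doodles whose stroke count is closest."""
    nearest = sorted(seen, key=lambda ex: abs(ex[1] - stroke_count))[:k]
    return Counter(category for category, _ in nearest).most_common(1)[0][0]

print(guess(6))  # 'rabbit' -- the closest examples have 5, 6 and 7 strokes
```

Adding more entries to `seen` sharpens the vote, which is the "magic of big data" in miniature: the algorithm doesn't change, the examples just pile up.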
Of course, the technology isn't perfect, and the algorithms behind "Quick, Draw!" don't always work the way they're supposed to. But as more and more people play the game, it's safe to assume that those algorithms will get smarter over time and improve their accuracy. So, if you're a tech nerd or a fan of interactive games, give "Quick, Draw!" a whirl. Happy drawing!