Artificial Intelligence as a Threat

Disruptions
By Nick Bilton
Nov. 5, 2014

Photo credit: James C. Best Jr./The New York Times

Ebola sounds like the stuff of nightmares. Bird flu and SARS also send shivers down my spine. But I'll tell you what scares me most: artificial intelligence.

The first three, with enough resources, humans can stop. The last, which humans are creating, could soon become unstoppable.

Before we get into what could possibly go wrong, let me first explain what artificial intelligence is. Actually, skip that. I'll let someone else explain it: Grab an iPhone and ask Siri about the weather or stocks. Or tell her "I'm drunk." Her answers are artificially intelligent.

Right now these artificially intelligent machines are pretty cute and innocent, but as they are given more power in society, it may not take long for them to spiral out of control.

In the beginning, the glitches will be small but eventful. Maybe a rogue computer momentarily derails the stock market, causing billions in damage. Or a driverless car freezes on the highway because a software update goes awry.

But the upheavals can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid the body of cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.

Nick Bostrom, author of the book "Superintelligence," lays out a number of petrifying doomsday scenarios. One envisions self-replicating nanobots, which are microscopic robots designed to make copies of themselves. In the best case, these bots could fight diseases in the human body or eat radioactive material on the planet. But, Mr. Bostrom says, a "person of malicious intent in possession of this technology might cause the extinction of intelligent life on Earth."

Artificial-intelligence proponents argue that these things would never happen and that programmers will build safeguards. But let's be realistic: It took nearly a half-century for programmers to stop computers from crashing every time you wanted to check your email. What makes them think they can manage armies of quasi-intelligent robots?

I'm not alone in my fear. Silicon Valley's resident futurist, Elon Musk, recently said artificial intelligence is "potentially more dangerous than nukes." And Stephen Hawking, one of the smartest people on earth, wrote that successful A.I. "would be the biggest event in human history. Unfortunately, it might also be the last." There is a long list of computer experts and science fiction writers also fearful of a rogue robot-infested future.
Two main problems with artificial intelligence lead people like Mr. Musk and Mr. Hawking to worry.

The first, more immediate fear is that we are starting to create machines that can make decisions like humans, but these machines don't have morality and likely never will.

The second, which is a longer way off, is that once we build systems as intelligent as humans, these intelligent machines will be able to build smarter machines, often referred to as superintelligence. That, experts say, is when things could really spiral out of control, as the rate of growth and expansion of machines would increase exponentially. We can't build safeguards into something that we haven't built ourselves.

"We humans steer the future not because we're the strongest beings on the planet, or the fastest, but because we are the smartest," said James Barrat, author of "Our Final Invention: Artificial Intelligence and the End of the Human Era." "So when there is something smarter than us on the planet, it will rule over us on the planet."

What makes it harder to comprehend is that we don't actually know what superintelligent machines will look or act like. "Can a submarine swim? Yes, but it doesn't swim like a fish," Mr. Barrat said. "Does an airplane fly? Yes, but not like a bird. Artificial intelligence won't be like us, but it will be the ultimate intellectual version of us."

Perhaps the scariest prospect is how these technologies will be used by the military. It's not hard to imagine countries engaged in an arms race to build machines that can kill.

Bonnie Docherty, a lecturer on law at Harvard University and a senior researcher at Human Rights Watch, said that the race to build autonomous weapons with artificial intelligence, which is already underway, is reminiscent of the early days of the race to build nuclear weapons, and that treaties should be put in place now, before we get to a point where machines are killing people on the battlefield.

"If this type of technology is not stopped now, it will lead to an arms race," said Ms. Docherty, who has written several reports on the dangers of killer robots. "If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill."

So how do we ensure that all these doomsday situations don't come to fruition? In some instances, we likely won't be able to stop them. But we can hinder some of the potential chaos by following the lead of Google.
Earlier this year, when the search-engine giant acquired DeepMind, a neuroscience-inspired artificial intelligence company based in London, the two companies put together an artificial intelligence safety and ethics board that aims to ensure these technologies are developed safely.

Demis Hassabis, founder and chief executive of DeepMind, said in a video interview that anyone building artificial intelligence, including governments and companies, should do the same thing. "They should definitely be thinking about the ethical consequences of what they do," Dr. Hassabis said. "Way ahead of time."

A version of this article appears in print on November 6, 2014, on Page E2 of the New York edition with the headline: Artificial Intelligence as a Threat.