How to Regulate Artificial Intelligence

Opinion | Op-Ed Contributor

By OREN ETZIONI | SEPT. 1, 2017

Credit: Isaac Lawrence/Agence France-Presse — Getty Images

The technology entrepreneur Elon Musk recently urged the nation’s governors to regulate artificial intelligence “before it’s too late.” Mr. Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses A.I. science with science fiction. Nevertheless, even A.I. researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy.

It’s natural to ask whether we should develop A.I. at all. I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do so, then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of A.I. systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of A.I.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to A.I.? I suggest a more concrete basis for avoiding A.I. harm, based on three rules of my own.
First, an A.I. system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want A.I. to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the F.B.I. to release A.I. systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, A.I. weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

My second rule is that an A.I. system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that A.I. systems are clearly labeled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford.

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

My third rule is that an A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, A.I. systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an A.I. Barbie. Even seemingly innocuous housecleaning robots create maps of your home. That is information you want to make sure you control.

My three A.I. rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Whether or not you agree with Mr. Musk’s view about A.I.’s rate of progress and its ultimate impact on humanity (I don’t), it is clear that A.I. is coming. Society needs to get ready.

Oren Etzioni is the chief executive of the Allen Institute for Artificial Intelligence.

A version of this op-ed appears in print on September 2, 2017, on Page A19 of the New York edition with the headline: How to Regulate Artificial Intelligence.