Artificial intelligence (AI) | Opinion

Social media bots threaten democracy. But we are not helpless

Ever-more sophisticated Facebook and Twitter bots can sway political opinions.
We have the technology to counter this – we need the will to use it

Samuel Woolley and Marina Gorbis
Mon 16 Oct 2017 15.57 BST. Last modified on Wed 18 Oct 2017 10.56 BST

[Image: Two people exchanging information via smartphone. ‘It appears that in 2016, bots were deliberately unleashed on social media to sway voter opinion by spreading fake news and deceiving trending algorithms.’ Photograph: PhotoAlto/Alamy]

Can social bots – pieces of software that perform automated tasks – influence humans on social media platforms? That’s a question congressional investigators have been asking social media companies since fears emerged that bots were deployed in 2016 to influence the presidential election.

Half a decade ago we were among a handful of researchers who could see the power of relatively simple pieces of software to influence people. Back in 2012, the Institute for the Future, where we work, ran an experimental contest to see how bots might be used to influence people on Twitter. The winning bot was a “business school graduate” with a “strong interest in post-modern art theory”, which racked up 14 followers and 15 retweets or replies from humans. To us, this confirmed that bots can generate followers and conversations. In other words, they can influence social media users.

We saw their power as potential tools for social good – to warn people of earthquakes or to connect peace activists. But we also saw that they could be used for social ill – to spread falsehoods or skew online polls.

Q&A: What is AI?

Artificial intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events.
The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.

When we published papers and the findings of our experiments on bots, they were reported in the popular press. So why didn’t the alarm spread to the tech, policy and social activist communities before automated social media manipulation became front-page news in 2017?

Since 2012, thanks to investments in online marketing, bots have become far more sophisticated than the models in our experiment. Those who build bots now spend time and effort generating believable personas that often have a powerful presence on multiple sites and can influence thousands of people instead of just a few. Innovations in natural language processing, increases in computational power, and cheaper, more readily available data allow social bots to pass more convincingly as real people and to alter the flow of information more effectively.

Over the last five years, this type of bot usage has been mapped on to political communications. Research from several universities, including Oxford and the University of Southern California, shows that bots can be used to make politicians and political ideas look more popular than they are, or to massively scale up attacks on the opposition. It appears that in 2016 they were deliberately unleashed on social media to do just that – sway voter opinion by spreading fake news and deceiving trending algorithms. And political manipulation over social media has very real implications for the 2018 US midterm elections.
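The “relatively simple pieces of software” described above need not involve any AI at all. As a purely illustrative sketch – the templates, topics and posting logic here are invented for this example, not taken from the 2012 contest or any real campaign – a basic amplification bot can be little more than a loop that fills message templates with talking points:

```python
import random

# Illustrative only: a toy "amplification bot" that generates posts by
# filling canned templates with talking points.  A real bot would add a
# posting schedule and a platform API, but the core logic is this simple.
TEMPLATES = [
    "Everyone is talking about {topic} today!",
    "Why is nobody covering {topic}? Spread the word.",
    "Just read a great piece on {topic}. Thoughts?",
]
TOPICS = ["the debate", "the new poll", "the rally"]

def generate_posts(n, seed=0):
    """Return n template-generated posts, as a scripted account might."""
    rng = random.Random(seed)  # seeded so the output is reproducible
    return [rng.choice(TEMPLATES).format(topic=rng.choice(TOPICS))
            for _ in range(n)]

for post in generate_posts(3):
    print(post)
```

That a mechanism this trivial earned the 2012 contest bot real followers and replies is precisely the authors’ point.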
Recent research suggests that those initiating digital propaganda campaigns are beginning to focus their attention on specific subsections of the US population and on constituencies in swing states. The more focused such attacks become, the more likely they are to have a significant effect on electoral outcomes. Furthermore, the unrealized promises of “psychographic” targeting, marketed by groups like Cambridge Analytica in 2016, may be achieved in 2018 with technological advancements.

Social media platforms may be able to track and report on political advertisements from foreign entities, but will they divulge information on pervasive and personalized advertising from their domestic political clients?

This is a pressing question, because social bots are likely to continue to grow in sophistication. At a recent roundtable on the Future of AI and Democracy, several technology experts forecast that bots will become even more persuasive, more emotional and more personalized: able not just to spread information but to truly converse with their human interlocutors, the better to push their emotional buttons. Bring together advances in neuroscience, the ability to analyze massive amounts of behavioral data and the proliferation of sensors and connectivity, and you have a powerful recipe for affecting society through computational means.

So what do we need to do to stop this technology going astray? Consider the advances in modern oceanography. In the not-too-distant past, scientists collected samples and measurements from the ocean floor episodically – in select places and at specific times. The data was limited and usually not shared widely, so threats were not easily detected. Today, portions of the ocean floor are instrumented with wireless interactive sensors and cameras that let scientists (and laypeople) see what is happening 24 hours a day, seven days a week.
This allows scientists to “take the pulse” of the ocean, forecast a range of possible threats and suggest powerful interventions when needed. If we can do this for monitoring our oceans, we can do it for our social media platforms. The principles are the same: aggregate multiple streams of data, make that data transparent, and apply the best analytical and computational tools to uncover patterns and detect signals of change. We could then alert experts and laypeople alike – technology companies, policymakers, journalists and citizens – to political bot attacks or other large-scale disinformation campaigns before they take hold. We know how to do this in many realms; what we need now is the will to apply that knowledge to our social media environment.

© 2018 Guardian News and Media Limited or its affiliated companies. All rights reserved.