Democracy and Disinformation

Imagine this: Suddenly, everybody was an author. Everybody could publish; everybody could write. The men who had controlled access to information were now irrelevant; the establishment was crushed in a flood tide of new publications. New ideas multiplied; so did new conflicts. Violence followed – and so, eventually, did war.

No, I am not talking about the invention of blogging or social media. I am talking about the invention of the printing press, the radical fifteenth-century technology that spread literacy and made it possible for millions to read and write – but also destabilized the church, undermined monarchies and brought us the Reformation. The Protestants reading this might think that was good; but wherever you are on the Protestant-Catholic split, you will also remember that the Reformation led to centuries of religious wars. Christians of different kinds split into tribes, fought for power and burnt one another at the stake.

But if my description of the fifteenth century does sound a lot like the 21st century, that’s not an accident, for we are living through an equally transformative, equally revolutionary moment. Why are so many elections, in so many democracies, suddenly taking such surprising turns? Why are nationalists and xenophobes who all sound the same suddenly gaining support in countries with very different economies and very different histories, from Poland and the Philippines to Brazil, Britain and the United States?

Here’s my guess: just as the printing press broke the monopoly of the monks and priests who controlled the written word in the fifteenth century, the internet and social media have, within the space of a few short years, undermined not only the business model used by democratic political media for the past two centuries, but the political institutions behind them too.

Look around the democratic world: everywhere, large newspapers and powerful broadcasters are disappearing. These old-fashioned news organizations might have been flawed, but many of them had, as their founding principle, at least a theoretical commitment to objectivity, fact-checking and the general public interest. They served as a filter, eliminating egregious conspiracy theories. More importantly, whatever you think about their objectivity or lack of it, they also created the possibility of a national conversation, a single debate.

In some big European countries, well-funded public broadcasters, obligated by law to be politically neutral, have stepped into the breach. But in many smaller European countries, independent media has become very weak or has ceased to exist entirely, replaced by outlets that are either controlled by the government and operated by the ruling party, or else controlled by ruling parties through large business groups connected to them. In the United States, there is no broadcaster or newspaper that both sides of the political spectrum consider to be neutral either.

The result is polarization. People choose sides, they move apart, the center disappears. And polarization has other side effects: in many democracies, including the United States, there is now no common debate, let alone a common narrative. And this is not about different opinions or different biases. People actually don’t even have the same facts – one group thinks one set of things is true, another believes in something quite different. We often find ourselves arguing not about the way to move forward, but about what happened yesterday.

Social media accelerates and accentuates this phenomenon, because it allows people, and indeed with its algorithms sometimes forces people, to see only the news and opinion they want to hear, whether factual or not. The algorithms that reinforce comforting narratives have created homogenous clusters online – otherwise known as “echo chambers.” People now get their news from their close knit, ideologically similar friends; most members of an echo chamber share the same prevailing world view, and interpret news through this common lens.

This ever-deeper polarization has numerous effects. It creates distrust for what used to be considered apolitical institutions. The civil service, the police, the judiciary and government-run bodies of all kinds fall under suspicion because one side or the other, or sometimes both, suspects that they have been captured by the opposing party.

It has also had a lethal effect on traditional political parties, which were once based on real-life organizations like the trade unions or the church. Instead of looking to real organizations, more and more people now identify with groups or organizations or just ideas and themes that they find in the virtual world. People can reach across traditional social and geographic lines to form interest groups, in ways that undermine traditional politics, both for better and for worse. Those new parties which succeed in converting virtual support into real votes do so because they take advantage of this change – one thinks of Macron’s En Marche, or the very different Italian Five Star Movement. But in many places, and I am afraid this might be one of them, this phenomenon has simply led to fragmentation – and again, increased partisanship. Groups, or grouplets, hunker down, barricade themselves inside ideological ghettoes and stop worrying about the general interest.

But this new information network, with its deep divides and its suspicious clans, is also far more conducive than the old one to the spread of false rumours, whether generated naturally or imposed from outside, and to campaigns of insider and outsider manipulation. To put it bluntly – and this has now been proven in several studies and surveys – people who live in highly partisan echo chambers are much more likely to believe false information if they receive it from the highly partisan sources that they trust. The more partisan and the more polarized, the more susceptible to conspiracy theories, disinformation campaigns and false information. And this, of course, is a weakness which can be exploited.

In due course, many people have learned to exploit it. But since we are meeting in Washington in the final days of Robert Mueller’s investigation into Russia’s information campaign in the US, perhaps it is worth pointing out that the first national government to understand this well was that of Russia. We now know that in the US election, professional Russian trolls deliberately sought out partisans, ranging from Black Lives Matter activists on the one hand to anti-immigration groups on the other. Not only did they target these groups with false information, in an effort to get them to vote for Donald Trump or to abstain from voting for Hillary Clinton, but in a couple of instances they also sought to use fake Facebook pages to organize real events: protest marches, even orchestrated clashes between different groups.

As we now know, this kind of activity was combined with a more traditional form of cyberattack: the hack of the Democratic National Committee, material from which was spun into hundreds of different kinds of stories, ranging from “Democrats are anti-Catholic” to “Hillary Clinton runs a pedophile ring in the basement of a Washington DC pizza restaurant.” At the same time, the same group ran Twitter campaigns which promoted particular memes and hashtags, in order to make them seem more popular – and not only in the United States. Particular messages were amplified by the use of bots. Many millions of them now operate on all of the social media platforms, where they are used to distort reality, to make particular ideas “trend” and to promulgate particular narratives.

All of these tactics can be modified to suit particular countries. In Germany in 2017 there was no “leak,” and so most Germans mistakenly believed there was no Russian interference in their campaign. But I took part in a data analysis project at the London School of Economics in the months before the vote. We found that the AfD’s messages on social media were deliberately boosted by pro-Russian media as well as by trolls and botnets, some of them originally created for commercial use. They echoed and repeated divisive messages: anti-immigration, anti-NATO, pro-Russian and pro-AfD. They were targeting very specific groups: the German far right, the Russian-speaking community in Germany, which, thanks to immigration from the ex-USSR, comprises several million people, and to some extent the far left too. Most of those who read mainstream media in Germany never even saw those messages, but the AfD’s alternative echo chamber read them every day.

All of the examples I have cited so far are Russian, but of course that is misleading. Russia has simply been ahead of the game. For historical reasons, the Russian secret services understood the possibilities of internet disinformation before anyone else. In the Soviet era, the KGB had whole departments devoted to what we now call fake news, spreading not only Soviet propaganda but also, famously, the rumor that the CIA invented the AIDS virus. The Soviet Union is gone, of course, but there are reasons why the KGB’s descendants have devoted so much time to thinking about this in the present: it’s a cheap way for an impoverished ex-superpower to meddle in other countries’ politics. Moreover, the Russians have a direct interest in weakening and dividing Western democracies, more so than most.

But although the Russians were the first to invest in these things, others are already following them: other governments, other political movements, private companies. There is no great barrier to entry in this game. It doesn’t cost much, it doesn’t take much time, it isn’t particularly high-tech and it requires no special equipment.

There also isn’t, at the moment, an institution which is capable of stopping it. Democratic governments don’t censor the internet. They aren’t in the habit of funding independent media – and if they were it would cease to be independent. The militaries of NATO aren’t set up to fight information wars either. Generals control tanks, but aren’t going to wade into the social media wars inside their own countries.

Even counter-intelligence services are queasy about taking part in political debates inside their own countries. It isn’t their job to penetrate echo chambers, to counter conspiracy theories or to bring back trust to democratic institutions, let alone to reinvigorate democratic newspapers. Tech companies could help solve this problem, but they have no incentive to do so: The new information network is also where Google and Facebook are making money. Facebook and Twitter created the algorithms that spread shock, anger and conspiracy theory faster than truth. Although some are looking for a technical solution, no gadget will ever be able to measure truth; no investments can persuade people to read quality media that no longer exists. And censorship from Google or Facebook will not in the long term be any more acceptable or successful than censorship from a government.

Some solutions may come from the old media, from universities and from the NGO world. There are journalists talking about re-inventing what they do in order to create greater levels of public trust. There are people designing media literacy campaigns. There are fact-checking websites, and then there are people trying to understand how we can make sure that fact-checkers are believed.

But there are some deeper changes to be considered too. There is a precedent for this historical moment: In the 1920s and 1930s, democratic governments suddenly found themselves challenged by radio, the new information technology of its time. Radio’s early stars included Adolf Hitler and Joseph Stalin: The technology could clearly be used to provoke anger and violence. But was there a way to marshal it for the purposes of democracy instead? One answer was the British Broadcasting Corp., the BBC, which was designed from the beginning to reach all parts of the country, to “inform, educate and entertain” and to join people together, not in a single set of opinions but in the kind of single national conversation that made democracy possible. Another set of answers was found in the United States, where journalists accepted a regulatory framework, a set of rules about libel law and a public process that determined who could get a radio license.

The question now is to find the equivalent of licensing and public broadcasting in the world of social media: to find, that is, the regulatory or social or legal measures that will make this technology work for us, for our society and our democracy, and not just for Facebook shareholders. This is not an argument in favor of censorship. It’s an argument in favor of applying to the online world the same kinds of regulations that have been used in other spheres, to set rules on transparency, privacy, data and competition.

We can, for example, regulate Internet advertising, just as we regulate broadcast advertising, insisting that people know when and why they are being shown political ads or, indeed, any ads. We can curb the anonymity of the Internet — recent research shows that the number of fake accounts on Facebook may be far higher than what the company has stated in public — because we have a right to know whether we are interacting with real people or bots. In the longer term, there may be even more profound solutions. What would a public-interest algorithm look like, for example, or a form of social media that favored constructive conversations over polarization?

We could make a start with Senator Amy Klobuchar’s and Senator Mark Warner’s proposed bill on honesty in advertising. But the debate needs to be deeper; it cannot be just another chaotic, amateurish Senate interview with Facebook chief executive Mark Zuckerberg. Constantly changing technology will make it difficult, as will lobbying. But we have regulated financial markets, another sphere where the technology changes constantly, the money involved is enormous, everyone is lobbying, and everyone is trying to cheat. If we don’t find a way to regulate, we will not be able to ensure the integrity of elections or the decency of the public sphere. If we don’t find a way to regulate, in the long term there won’t even be a public sphere, and there won’t be functional democracies anymore, either.

Still, regulation is not a silver bullet; it is only a part of the answer. The revival of democracy, so long dependent on reliable information, in an era of unreliable information is going to be a major civilizational project. Just as it took hundreds of years to end the religious wars in Europe, it may take some time before solid solutions to this problem are found.

In the end, we have to hope that the very basic human desire not to be fooled wins the day. We must also hope that the deeper values of democracy – the principles of tolerance, respect for the rule of law and the importance of strong and neutral political institutions – prove stronger than the inchoate, dissatisfied anger that you can find online.

Anne Applebaum, 2019
