State-sponsored internet propaganda is defined as a government’s use of paid internet propagandists with the intention of swaying online opinion, undermining dissident communities, or changing the perception of what is the dominant view.
A similar online activity has also made an appearance: ‘astroturfing’. This is the practice of masking the sponsors of a message or organization (e.g., political, advertising, religious or public relations) to make it appear as though it originates from and is supported by grassroots participants.
The intention here is to give credibility to the statements or organizations by withholding information about the source’s financial connection.
The term astroturfing is derived from AstroTurf, a brand of synthetic carpeting designed to resemble natural grass, as a play on the word “grassroots”.
The implication behind the use of the term is that instead of a “true” or “natural” grassroots effort behind the activity in question, there is a “fake” or “artificial” appearance of support.
With these definitions in mind, let us examine a recent case of such activity that has had a number of detrimental effects on its intended targets in Europe.
Russia’s ‘New’ Tools for Confronting the West
In the last three years, Russia has demonstrated its assertive foreign policy by successful military interventions in Ukraine and Syria. The distinctive Russian approach to operations in Ukraine gave rise to an impression among some observers that its military had employed fundamentally new concepts of armed conflict.
But the techniques and methods displayed by Russia in Ukraine have roots in traditional Soviet approaches.
Since the end of the Cold War, Russia’s military academics have displayed a consistently developing train of thought on the changing nature of conflict and how to prevail in it, including – but not limited to – the successful application of military power.
Russia’s practice of information warfare has become a real cause for concern. This type of activity has developed rapidly, while still following key principles that can be traced to Soviet roots. This development has consisted of a series of adaptations following failed information campaigns by Russia, accompanied by successful adoption of the internet.
Russian disinformation campaigns continue to be described in the West as failing because their narratives are implausible. But this assessment applies Western notions of the nature and importance of truth; it measures the campaigns by the wrong criteria and misunderstands their objectives.
Russia continues to present itself as facing an imminent threat from the West, and is mobilising to meet it. Its security initiatives, even if it views or presents them as defensive measures, are likely to have negative consequences for its neighbours.
Without measures to resist Russian information warfare, and without significant military force available as an immediate and present deterrent in the front-line states, Russia’s growing confidence in pursuing its objectives will make it even harder for the West to protect itself against this assertiveness.
According to the European Parliament Research Service, the visibility of disinformation as a tool to undermine democracies increased in the context of Russia’s hybrid war against Ukraine. It gained notoriety as a global challenge during the UK referendum on EU membership as well as the United States presidential election campaign in 2016.
As a result, the European Union and the European Parliament have stepped up efforts to tackle online disinformation for the May 2019 European elections.
The phenomenon of false, misleading news stories is at least as old as the printing press.
However, social media and their personalisation tools have accelerated the spread of rumours, hoaxes and conspiracy theories. The phenomenon gained global visibility during the 2016 US presidential election, when viral false news or ‘junk news’ across the political spectrum received more engagement on Facebook than real news.
Research has shown that Russian accounts posted over 45 000 Brexit messages in the last 48 hours of the campaign.
When designed to deceive users for political purposes, digital gossip falls under ‘disinformation’ – the dissemination of verifiably false or misleading information which non-state and state actors can use to intentionally deceive the public and cause public harm.
The Kremlin continues its disinformation campaigns in its ongoing hybrid war against Ukraine, and is applying them in its information warfare against the West.
Pro-Kremlin information campaigns boost Moscow’s narrative of a morally decayed EU on the brink of collapse, and seek to exploit divisions in Western societies.
In November 2017, British Prime Minister Theresa May accused Russia of ‘weaponising information’, and a February 2018 report by UK communications agency 89up.org found Russian pro-Brexit social media interference worth up to €4.6 million during the campaign.
In August 2017, the US imposed fresh sanctions on Russia over its interference in the 2016 election. Following the nerve-gas attack on a former Russian spy, Sergei Skripal, and his daughter on British soil in March 2018, the US imposed new sanctions, including on 16 Russian entities and individuals linked to the Internet Research Agency (a Russian ‘troll factory’ spreading disruptive content via social media) indicted by US Special Counsel Robert Mueller for their role in election-meddling operations.
The European Commission and the Vice-President of the Commission / High Representative of the Union for Foreign Affairs and Security Policy (HR) responded with a June 2018 joint communication on boosting resilience against hybrid threats, emphasising strategic communications as a priority.
Whereas US tech giants had previously played down the volume of content purchased by Russian actors during the 2016 US presidential election campaign, Facebook, Google and Twitter told US lawmakers in November 2017 that pro-Kremlin actors bought and published divisive ads aimed at influencing both liberals and conservatives.
Facebook said Russia-backed posts reached up to 126 million Americans during and after the 2016 election. The March 2018 disclosure that user data from 87 million Facebook users – including that of 2.7 million EU citizens – had been improperly shared with the controversial political consultancy company Cambridge Analytica (which used the data to micro-target and mobilise voters in the US and the UK) further increased the focus on the role of online platforms, not only in spreading, but also in monetising disinformation.
In April 2018, Facebook CEO Mark Zuckerberg told the US Congress that tens of thousands of fake accounts had been deleted in 2017 to prevent election interference. He explained that Russian accounts primarily used ads to influence views on issues rather than to promote specific candidates or political messages.
In May 2018, Zuckerberg dodged questions about data protection, fake news and election security posed by Members of the European Parliament in Brussels. Confidential emails from Zuckerberg, published in December 2018 – suggesting that Facebook secretly gave some companies access to users’ friends’ data – cast further doubt on Facebook’s ethics.
EU steps up anti-disinformation efforts to protect democracy
The Facebook data breach disclosure reignited the ongoing debate on the role of online platforms in the spread of conspiracy theories, disinformation and false news.
In its June 2017 resolution on online platforms and the digital single market, the European Parliament had already called on the Commission to analyse the legal framework with regard to ‘fake news’, and to look into the possibility of legislative intervention to limit the dissemination of fake content. President Jean-Claude Juncker tasked Mariya Gabriel, Commissioner for the Digital Economy and Society, to look into the democratic challenges that online platforms create as regards the spread of fake information, as well as to reflect on possible action at EU level.
The Commission then launched a public consultation on fake news and online disinformation. It also set up a high-level expert group (HLEG) representing academia, online platforms, news media and civil society. The Commission’s April 2018 communication on ‘Tackling online disinformation: a European approach’ took recommendations of the HLEG into account and proposed an EU-wide Code of Practice – signed by the online platforms – to ensure transparency by explaining how algorithms select news, as well as improving the visibility and accessibility of reliable news.
The communication also recommended support for an independent network of fact-checkers as well as actions to boost quality journalism and media literacy.
Coordinating the response to disinformation ahead of the European elections
Responding to the June 2018 call by the European Council to protect the EU’s democratic systems and ‘combat disinformation, including in the context of the European elections’, the Commission and the HR in December 2018 presented an ‘action plan against disinformation’ with specific proposals for a coordinated European response.
The action plan builds on existing Commission initiatives as well as the work of the East StratCom Task Force, set up in 2015 under the European External Action Service.
The action plan focuses on four main areas:
Improved detection. The Strategic Communication Task Forces and the EU Hybrid Fusion Cell in the EEAS, as well as the EU delegations in the Neighbourhood countries, will receive additional specialised staff and data analysis tools.
The EEAS’s budget for strategic communication to address and raise awareness about disinformation is planned to more than double, from €1.9 million in 2018 to €5 million in 2019.
Coordinated response. A dedicated Rapid Alert System will be set up among the EU institutions and Member States to facilitate data sharing and to provide alerts on disinformation threats in real time.
Online platforms and industry. The signatories of the EU-wide Code of Practice on Disinformation (signed on 26 September 2018) are urged to swiftly and effectively implement the commitments, focusing on actions that are urgent for the European elections. This includes deleting fake accounts, labelling messaging activities by ‘bots’ and cooperating with fact-checkers and researchers to detect disinformation and make fact-checked content more visible.
Raising awareness and empowering citizens. In addition to targeted awareness campaigns, the EU institutions and Member States will promote media literacy as well as support national teams of independent fact-checkers and researchers to detect and expose disinformation on social networks.
In short, Russian military interventions and associated information warfare campaigns in the past two years have not been an anomaly. Instead they are examples of Russia implementing its long-standing intent to challenge the West now that it feels strong enough to do so.
For Western governments and leaders, an essential first step towards more successful management of the relationship with Moscow would perhaps be to recognize that the West’s values and strategic interests and those of Russia are fundamentally incompatible.