‘Disinformation is a business’: media execs explore how to demonetize falsehoods

Consumers are better informed about the realities of online disinformation than ever. For brands that aren’t being proactive about ensuring their ad spend goes to reputable platforms, that means they run the risk of serious brand damage and questions about their own priorities.

Disinformation is a profitable business. Its creation and promulgation across platforms including social networks and news websites generates a significant amount of revenue for many parties. At the same time it levies a cost on the public that in some cases leads to real-world harm and disinformation-related deaths.

As a result, the media and advertising industries are being urged to clean up the digital advertising ecosystem in order to remove the financial incentive for the creation of disinformation – and remove their own complicity. At the AOP’s Crunch 4.4 event on disinformation a panel of experts took a look at how best to fix that issue – and addressed some of the impediments to making it a thing of the past.

Clare Melford is co-founder of the Global Disinformation Index (GDI), which aims to provide a framework for determining what is and isn’t disinformation for brands and advertisers. She began by noting that around $235m in ad revenue is flowing to disinformation sites each year, and that the estimate is on the lower end: “Disinformation is a business, a profitable one.”

That, she says, has led brands that do not want to risk any reputational damage from appearing alongside potential disinformation to adopt blunt tools such as keyword blocking.

She also notes that disinformation is more complex than it might appear, and that things that are true in the strictest sense of the word can still count as disinformation. She says: “If true things were not problematic, there would be no problem with Breitbart’s Immigrant Crime section”, which reported truthfully on crimes committed by immigrants in America. However, because of how that information was presented, it created the false narrative that immigrants committed crimes more often than people born in America. She says such framing creates a “lens of adversarial narrative conflict”, in which the disinformation creates a risk of harm.

Iman Atta is director of Faith Matters UK. She cited The Sun’s front-page story from 2015 that claimed “nearly one in five British Muslims has some sympathy with those who had fled the UK to fight for Isis in Syria” as an example of harmful disinformation. Notably The Sun was forced to retract and admit the claim was “significantly misleading” in 2016.

The panel discussed how even publications that don’t mean to can end up contributing to the spread of disinformation when their stories are presented out of context elsewhere. Rita Jabri Markwell of the Australian Muslim Advocacy Network said that such instances comprise an underreported proportion of disinformation, since many of the bodies that look into online harm only look at instances in isolation: “None of this was picked up by their tools because they don’t look at aggregate harm … traditional news can be repurposed really harmfully.”

The panel noted that, in addition to the financial incentives, many of those smaller outlets have an ideological incentive to create disinformation. Nonetheless, the panel argued that the most effective route to a better ecosystem is to create a financial disincentive for the creation and dissemination of disinformation.

Removing the financial incentive

Steve Chester is director of media at ISBA. He argues that the best solution is a combination of statutory regulation of platforms, backed by advertisers taking a more proactive approach. He believes that progress has been made since 2018 – with the online harms bill due to receive its first reading in the UK soon – and that arguments for platform self-regulation have become weaker: “It became rapidly apparent that trying to put the genie [back] in the bottle wasn’t possible. Platforms can’t mark their own homework … it will never be a zero-risk game.”

As a result, he believes that the best solution is an ASA-type regulatory body with oversight of the industry, and potential punitive damages of up to 10% of a platform’s global revenue for breaches. These breaches would be based on a measurement system that determines whether content is brand-safe based on 11 categories – though he notes that disinformation and misinformation are under review as a potential twelfth category, rather than being bundled into the others.

However, the big issue facing any such attempt is getting buy-in from players across the board. Chester believes that some publishers, including certain UK newspapers, have an “allergy” to any oversight, and wouldn’t be well-disposed to attempts to measure dis- or misinformation on their sites. “No reputable news company wants to disseminate hate but there was a rebellion against [oversight]. There’s a slight allergy from news outlets, rather than all saying, ‘we all have a responsibility together.’”

Melford says that, while GDI works with many news platforms that are keen to improve their sites’ scores when it comes to disinformation, they are typically in countries with less mature media markets: “We’ve had a very wide range of responses, so in countries that have newer democracies with less developed media markets … Georgia and Latvia … we’ve had in general a very positive reaction from the sites we have assessed and they’ve been very keen to learn how they can improve [the sites].”

By contrast, she says that in other countries the response has been much more hostile, and that some titles consider it a “personal affront”. However, she notes that as more and more initiatives seek to remove the financial incentive for disinformation, those platforms will feel more compelled to sign up to a system that mediates and measures their output.

As the panel made clear throughout the discussion, the ramifications of disinformation are playing out around us in 2021. From vaccine hesitancy to the storming of the Capitol at the start of the year, to the hate directed at individuals as a result of concerted disinformation campaigns, we see the impacts daily. As IPG also advocated this week, collective action on the part of advertisers to remove those financial incentives is one potential solution.
