Module: Countering information campaigns
By Onno Hansen-Staszyński | Last Updated: 12 February 2025
[screen 1]
We may use two types of action when countering anti-liberal democratic narratives: incapacitating myths and pushing positive narratives. Often these two go hand in hand.
Key instruments include strategic communication (StratCom), debunking, information operations, crowdsourced verification, content labeling, content moderation, strategic silence, defamation lawsuits, sanctions, denial of services, and local media support.
[screen 2]
StratCom
StratCom is a planned effort by public bodies to shape public opinion, reinforce alliances, counter disinformation, and support national security interests. It focuses not only on what is communicated, but also on how, when, and why.
StratCom aligns with national policies to counter harmful myths and promote alternative narratives.
[screen 3]
Debunking
Debunking aims to correct widespread disinformation campaigns by identifying myths, explaining why they are false – often using scientific research and logical reasoning – and providing accurate information with supporting sources.
[screen 4]
Debunking, often carried out by NGOs, can also be performed by public bodies. For maximum impact, public entities sometimes integrate debunking into their broader StratCom efforts rather than issuing isolated statements.
[screen 5]
Info ops
Information operations work tactically to influence the perceptions, behaviors, and decisions of selected target groups.
Psychological operations (psyops) try to affect adversaries’ beliefs in order to demoralize them or sway their decisions. Cyber operations aim to disrupt communication channels and spread counter-information. Electronic warfare involves jamming and network disruption.
[screen 6]
Crowdsourced verification
Not only state bodies and NGOs engage in countering information campaigns; the public does too, by means of crowdsourced fact-finding projects that do not necessarily need a centralized organizing body.
Anyone can use OSINT (open-source intelligence). For instance, Bellingcat, an independent investigative collective of researchers, investigators, and citizen journalists, designs and shares verifiable methods for ethical digital investigation.
[screen 7]
Content labeling
Warning labels can be applied to content to indicate that it has undergone fact-checking, debunking, or crowdsourced verification, and was found to be false or at least disputed. These labels help users identify questionable information.
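As a minimal illustration of how a platform might attach such labels after verification, here is a toy sketch. The label texts, verdict names, and data shape are all illustrative assumptions, not any real platform's API.

```python
# Toy sketch: attach a warning label to a post based on a fact-check verdict.
# Verdict names and label wording are hypothetical examples.

VERDICT_LABELS = {
    "false": "False information - reviewed by fact-checkers",
    "disputed": "Disputed - see fact-check for context",
}

def label_post(post: dict, verdict: str) -> dict:
    """Return a copy of the post, with a warning label if the verdict warrants one."""
    labeled = dict(post)
    if verdict in VERDICT_LABELS:
        labeled["warning_label"] = VERDICT_LABELS[verdict]
    return labeled

post = {"id": 1, "text": "claim X"}
print(label_post(post, "false")["warning_label"])
print("warning_label" in label_post(post, "unreviewed"))  # verified content stays unlabeled
```

The point of the sketch is that the original content is left in place; only a visible warning is added, leaving users to judge for themselves.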
[screen 8]
Content moderation
Content moderation involves reviewing, filtering, and managing user-generated content on platforms by human editors or artificial intelligence. It aims to remove or restrict access to content that violates the law and/or community guidelines, including illegal content, myths, hate speech, and other (potentially) harmful material.
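A rule-based pass is the simplest form such filtering can take. The sketch below is a toy: the term lists, thresholds, and action names are invented for illustration, and real platforms layer machine-learning classifiers, reporting queues, and human review on top of rules like these.

```python
# Toy sketch of a rule-based moderation pass (all terms and actions are
# hypothetical placeholders, not any platform's actual policy).

REMOVE_TERMS = {"slur1", "slur2"}     # placeholder: clearly violating terms
RESTRICT_TERMS = {"miraclecure"}      # placeholder: borderline, potentially harmful claims

def moderate(post: str) -> str:
    """Map a post to an action: 'remove', 'restrict', or 'allow'."""
    words = {w.lower().strip(".,!?") for w in post.split()}
    if words & REMOVE_TERMS:
        return "remove"       # take down content that violates guidelines
    if words & RESTRICT_TERMS:
        return "restrict"     # limit reach pending human review
    return "allow"

print(moderate("Buy this miraclecure today!"))  # -> restrict
print(moderate("An ordinary post."))            # -> allow
```

The tiered outcome ("remove" vs. "restrict") mirrors the distinction in the text between removing content outright and merely restricting access to it.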
[screen 9]
Strategic silence
Instead of engaging with information campaigns, actors may choose to ignore them, thereby reducing their amplification. The assumption is that engagement – even negative engagement – may increase the reach and spread of information campaigns.
[screen 10]
Defamation lawsuits
Legal action can be taken by those whose reputation has been damaged by allegedly libelous information campaigns.
Prominent defamation lawsuits were brought against Alex Jones for spreading false claims about the Sandy Hook Elementary School shooting; the resulting judgments led to Jones’s personal bankruptcy and left the fate of his Infowars media platform uncertain.
[screen 11]
Sanctions
Pursuing legal action against international actors can be difficult. This is why sanctions have been imposed on individuals known to have conducted harmful online campaigns.
[screen 12]
Denial of services
Denial of services encompasses any intervention that limits access to content and content suppliers: from content throttling and access restrictions to deplatforming and outright prohibition. The rationale is to stop myths and their publishers from polluting public discourse.
[screen 13]
A selection of moderate denial of services options:
- Content throttling/shadow banning: Reducing the visibility or reach of content.
- Access restrictions: Limiting features or services available to users.
- Geo-blocking: Restricting content access based on geographic location.
- Demonetizing: Restricting or disabling account revenue streams.
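To make throttling concrete, here is a toy sketch of how flagged content might be down-ranked in a feed. The data shape and the 0.1 damping factor are illustrative assumptions, not any platform's actual ranking policy.

```python
# Toy sketch of content throttling: disputed posts have their ranking score
# multiplied down so they surface less often without being removed.
# The THROTTLE_FACTOR value is a hypothetical example.

THROTTLE_FACTOR = 0.1

def ranking_score(base_score: float, disputed: bool) -> float:
    """Damp the score of disputed content; leave other content untouched."""
    return base_score * (THROTTLE_FACTOR if disputed else 1.0)

# (title, base_score, disputed) triples for a hypothetical feed
feed = [("post A", 0.9, True), ("post B", 0.5, False)]
ranked = sorted(feed, key=lambda p: ranking_score(p[1], p[2]), reverse=True)
print([title for title, *_ in ranked])  # the disputed post drops below post B
```

Note that the content remains accessible, which is what distinguishes these moderate options from the more radical ones below.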
[screen 14]
A selection of more radical denial of services options:
- Account suspension/deplatforming: Temporarily or permanently disabling accounts that violate platform policies.
- Prohibition: Forbidding or making a content publishing service illegal through legislation or legal authority.
[screen 15]
Local media support
Another intervention method is bolstering already trusted sources by investing in high-quality information for people to consume. Although trust in media is declining globally, many people still trust their local news sources.
[screen 16] Effectiveness
It is hard to state anything conclusive about the effectiveness of these types of interventions. Some of the challenges are:
• Most interventions lack a theoretical underpinning;
• Hardly any intervention of this type addresses motivated reasoning;
• Effectiveness research mainly concerns the US and adults;
• Limited knowledge exists about the longevity of effects.
