FIMI module: how do you know if it is foreign?
By Julian | Last Updated: 25 November 2024
Assess the Actor:
A number of organizations are tasked with a mandate to address Foreign Information Manipulation and Interference (FIMI). A critical part of this work is assessing whether content is "foreign". This document is meant as a guide to indicators to consider when assessing whether an online asset is of foreign origin. It is not a comprehensive guide encompassing every element that might be considered; it only showcases common indicators found online.
Making attributions can be challenging, particularly when threat actors are well resourced and take measures to conceal their identity. Attribution to a foreign actor can be further complicated by domestic actors embracing the narratives of an influence operation, adopting and adapting them, whether or not they are aware of, or suspect, foreign involvement in the original content.
Indicators of foreign origin can include:
Technical indicators
- IP addresses or web domains.
- Check the domain registrant by looking at the WHOIS record for the domain. Some actors will register under their own name and address, but many use a privacy service to conceal themselves.
- The use of web, social media, or ad-tech infrastructure previously attributed to a foreign actor by a reliable source.
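As a practical starting point for the WHOIS check above, registration data can be pulled over the WHOIS protocol (RFC 3912, TCP port 43) and scanned for registrant fields. The sketch below is a minimal illustration in Python; the sample record is made up, field names vary by registry, and real responses are frequently redacted when a privacy service is used.

```python
import socket

def fetch_whois(domain: str, server: str = "whois.iana.org") -> str:
    """Query a WHOIS server over TCP port 43 (RFC 3912) and return the raw text."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def extract_fields(whois_text: str,
                   wanted=("Registrant Country", "Registrant Organization", "Registrar")) -> dict:
    """Pull 'Key: value' lines of interest out of a WHOIS response."""
    found = {}
    for line in whois_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key in wanted and value:
            found[key] = value
    return found

# Illustrative (made-up) record; real responses differ per registry and are
# often redacted behind a privacy service, as noted above.
SAMPLE = """\
Registrar: Example Registrar, Inc.
Registrant Organization: Privacy Protect LLC
Registrant Country: PA
"""

if __name__ == "__main__":
    print(extract_fields(SAMPLE))
    # Live query (network required): print(fetch_whois("example.com"))
```

A registrant country or organization that contradicts the asset's self-presentation is one of the discrepancies worth noting, though privacy-protected records on their own prove nothing.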
Contextual indicators
- Overt or semi-overt indicators of foreign origin, such as self-professed foreign ownership or sponsorship, or internet assets registered with a government as owned or directed by a foreign agent (in the US this falls under the Foreign Agents Registration Act, or FARA). You may also find content repeated verbatim from known state-owned media channels or foreign government officials in order to further its spread. Sometimes this reflects genuine interest in those channels, but it may also be artificial amplification, so check the accounts that are spreading the content.
- Is the aim of the content aligned with the interests of a foreign state? This requires knowledge of what foreign states aim to do online. Bear in mind that many people genuinely share the same viewpoints as foreign states; this does not mean they are being directly influenced by a foreign state or acting on its behalf.
- Languages different from those of the target country, or non-native usage of a language. There are certain linguistic "tells" when people operate in a non-native language: translations from specific languages can often be identified, as can non-native usage (e.g. seldom-used formality or misused expressions). Note, however, that different languages and non-native usage are also very much present in domestic settings, with people coming from, and interacting with others in, many countries. What to look for are discrepancies in the use of language (for example, a person claiming to have been born and raised in Kentucky but showing the linguistic tells of someone from elsewhere). On its own, this is weak contextual evidence.
- Publicly available or leaked datasets.
Behavioral indicators
- The time period in which content is posted. If posting aligns with the working hours of a particular country, that can indicate that spreading disinformation messages is part of someone's work routine in that country.
- Coordination of accounts: do the suspected accounts appear to be connected through activities such as liking and sharing each other's content?
- Amplification patterns: do the accounts use the same methods of posting or cross-posting (e.g. posting the content around the same time, or posting it to the same groups or channels)?
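The posting-time indicator can be checked by bucketing post timestamps into hours and seeing how much activity falls inside the business hours of a candidate time zone. A minimal sketch, assuming you already have timestamps in UTC; the 09:00-17:00 window and the example data are illustrative, not a standard.

```python
from datetime import datetime, timezone, timedelta
from collections import Counter

def working_hours_share(timestamps_utc, utc_offset_hours, start=9, end=17):
    """Fraction of posts falling within start-end local time for a candidate UTC offset."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    hours = Counter(ts.astimezone(tz).hour for ts in timestamps_utc)
    total = sum(hours.values())
    in_window = sum(n for h, n in hours.items() if start <= h < end)
    return in_window / total if total else 0.0

# Hypothetical posts clustered at 06:00-10:00 UTC,
# i.e. 09:00-13:00 local time at UTC+3.
posts = [datetime(2024, 11, 1, h, 15, tzinfo=timezone.utc)
         for h in (6, 7, 7, 8, 9, 10)]
```

Here `working_hours_share(posts, 3)` returns 1.0 (all posts fall in a UTC+3 workday), while the same posts score much lower against a UTC+0 workday. A high share is only suggestive, since many time zones share similar offsets and individuals keep irregular hours.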
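Co-sharing overlap between accounts, one rough proxy for the coordination and amplification patterns above, can be estimated with a simple Jaccard similarity over the items each account amplified. The account names, URLs, and 0.5 threshold below are all hypothetical; real analyses would also weigh posting times and audience overlap.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: items shared by both accounts / all items across both."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def flag_coordinated_pairs(shares: dict, threshold: float = 0.5):
    """Return account pairs whose shared-content overlap meets the threshold."""
    accounts = sorted(shares)
    flagged = []
    for i, x in enumerate(accounts):
        for y in accounts[i + 1:]:
            score = jaccard(shares[x], shares[y])
            if score >= threshold:
                flagged.append((x, y, round(score, 2)))
    return flagged

# Hypothetical accounts and the URLs/post IDs each one amplified.
shares = {
    "acct_a": {"url1", "url2", "url3", "url4"},
    "acct_b": {"url1", "url2", "url3", "url5"},  # heavy overlap with acct_a
    "acct_c": {"url9"},                          # unrelated account
}
```

With this data, `flag_coordinated_pairs(shares)` flags only the acct_a/acct_b pair (overlap 0.6). As with the other indicators, overlap alone is weak: organic communities also share the same links, so flagged pairs are leads for further review, not attributions.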
However, none of these may be a definitive indicator that a campaign is foreign in nature. A combination of indicators is usually required to make an attribution to a likely actor.
