Synthetic Content Now Represents a Significant Threat
Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations
Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.
Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.
Explaining Synthetic Content
The FBI defines synthetic content as the broad spectrum of generated or manipulated digital content, including images, video, audio, and text. While traditional techniques like Photoshop can be used to create synthetic content, this report highlights techniques based on artificial intelligence (AI) or machine learning (ML) technologies. These techniques are popularly known as deepfakes and are often produced with generative adversarial networks (GANs). Generally, synthetic content is considered protected speech under the First Amendment. The FBI, however, may investigate malicious synthetic content that is attributed to foreign actors or otherwise associated with criminal activity.
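To make the term concrete, the sketch below shows the adversarial training loop that gives GANs their name: a generator learns to produce fake samples while a discriminator learns to tell them apart from real ones. This is a minimal toy example on 2-D points, assuming PyTorch; the network sizes, the target distribution, and all hyperparameters are illustrative assumptions, not a description of any actor's tooling.

```python
# Minimal toy sketch of a GAN training loop -- illustrative only,
# not an image-generation model.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator

# Generator: maps random noise to fake 2-D samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how "real" a 2-D sample looks (logit output).
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # "Real" data: points from a shifted Gaussian the generator must imitate.
    real = torch.randn(64, 2) + torch.tensor([3.0, 3.0])
    fake = G(torch.randn(64, latent_dim))

    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The same adversarial dynamic, scaled up to convolutional networks and face datasets, underlies the profile-image generators discussed in the remainder of this report.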
Recent and Anticipated Uses of Synthetic Content
Since late 2019, private sector researchers have identified multiple campaigns which have leveraged synthetic content in the form of ML-generated social media profile images.
Additionally, advances in AI- and ML-based content generation and manipulation technologies could likely be used by malicious cyber actors to advance tradecraft and increase the impact of their activities. ML-generated profile images may help malicious actors spread their narratives more widely by making both the message and the messenger appear more authentic to consumers.
- Russian, Chinese, and Chinese-language actors are using GAN-derived synthetic profile images in foreign influence campaigns, according to multiple private-sector research reports.
- Since 2017, unknown actors have created fictitious “journalists” who generated articles that were unwittingly published and amplified by a variety of online and print media outlets, according to press reports. These falsified personas often have a seemingly robust online presence, including GAN-generated profile images; however, basic fact checks can quickly reveal that the profiles are fraudulent.
Currently, individuals are more likely to encounter information online whose context has been altered by malicious actors than outright fraudulent, synthesized content. This trend, however, will likely change as AI and ML technologies continue to advance.
We anticipate malicious cyber actors will use these techniques broadly across their cyber operations—likely as an extension of existing spearphishing and social engineering campaigns, but with more severe and widespread impact due to the sophistication level of the synthetic media used.
- Malicious cyber actors may use synthetic content to create highly believable spearphishing messages or engage in sophisticated social engineering attacks, according to a late 2020 joint research report.
Synthetic content may also be used in a newly defined cyber-attack vector referred to as Business Identity Compromise (BIC). BIC will represent an evolution in Business Email Compromise (BEC) tradecraft by leveraging advanced techniques and new tools. Whereas BEC primarily includes the compromise of corporate email accounts to conduct fraudulent financial activities, BIC will involve the use of content generation and manipulation tools to develop synthetic corporate personas or to create a sophisticated emulation of an existing employee.
This emerging attack vector will likely have significant financial and reputational impacts on victim businesses and organizations.
How to Identify and Mitigate Synthetic Content
- Visual distortions, warping, or other inconsistencies in images and video may indicate synthetic content, particularly in social media profile avatars (profile images). For example, many GAN face generators place the eyes in nearly identical positions in every image they produce, so consistent eye spacing and placement across a wide sample of profile images is one indicator of synthetic content (a detection sketch follows this list).
- Similar visual inconsistencies are often present in synthetic video, typically manifesting as unnatural head and torso movements or syncing issues between face and lip movement and any associated audio.
- Third-party research and forensic organizations, as well as some reputable cybersecurity companies, can aid in the identification and evaluation of suspected synthetic content.
- Finally, familiarity with media resiliency frameworks like the SIFT methodology can help mitigate the impact of cyber and influence operations.
The “SIFT” methodology encourages individuals to Stop, Investigate the source, Find trusted coverage, and Trace the original content when consuming information online.
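To illustrate the eye-placement indicator noted above, the following is a minimal sketch that measures eye-landmark positions across a set of profile images and flags a batch whose eye coordinates barely vary. It assumes the open-source face_recognition library; the flag_suspicious helper, the 3-pixel threshold, and the same-resolution assumption are illustrative choices, not an FBI-endorsed tool.

```python
# Minimal sketch: flag a batch of profile images whose eye positions are
# suspiciously uniform, a known artifact of many GAN face generators.
# The face_recognition package is a real library; the threshold value and
# the overall heuristic are illustrative assumptions.
import statistics
import face_recognition

def eye_center(path):
    """Return the mean (x, y) of the first face's left-eye landmarks, or None."""
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None
    points = faces[0]["left_eye"]  # list of (x, y) landmark points
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def flag_suspicious(paths, max_std_px=3.0):
    """Flag the batch if eye centers cluster within a few pixels.

    Assumes the images share the same resolution, as social media
    avatars collected from a single campaign typically do.
    """
    centers = [c for c in (eye_center(p) for p in paths) if c is not None]
    if len(centers) < 3:
        return False  # too few detected faces to judge
    std_x = statistics.pstdev(c[0] for c in centers)
    std_y = statistics.pstdev(c[1] for c in centers)
    return std_x < max_std_px and std_y < max_std_px

# Example usage:
# suspicious = flag_suspicious(["avatar1.jpg", "avatar2.jpg", "avatar3.jpg"])
```

A heuristic like this is only a first-pass triage signal; flagged batches should still be referred to the third-party forensic resources described above.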