Disinformation Campaigns And Tactics
Deception Through Disinformation – Understanding What It Is and How It Works!
How Scams Work – A SCARS Guide
Authors:
• Tim McGuinness, Ph.D., DFin, MCPO, MAnth – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
• U.S. Cybersecurity and Infrastructure Security Agency (CISA)
Article Abstract
Disinformation, particularly from state actors, aims to deceive and manipulate by spreading false information to achieve geopolitical goals and influence public opinion. It creates confusion, deepens polarization, and undermines trust in democratic institutions. State-sponsored campaigns are particularly dangerous due to their resources and sophisticated technologies, enabling them to exploit societal divisions and destabilize nations.
Disinformation is spread by various actors, including political groups, special interest organizations, businesses, cybercriminals, extremists, individuals, foreign propaganda outlets, and even reputable news media. Combating disinformation requires coordinated efforts, including promoting media literacy, strengthening cybersecurity, and fostering international cooperation.
Effective tactics used in disinformation campaigns include creating fake personas, using deepfakes, amplifying conspiracy theories, astroturfing, exploiting information gaps, manipulating unsuspecting actors, and spreading targeted content.
Understanding and recognizing these tactics is important for protecting democracy, personal information, informed decision-making, social cohesion, media literacy, and national security.
The Methods & Tactics of Disinformation – The Online Engine of Deception
Disinformation poses significant dangers, particularly when propagated by state actors. Disinformation refers to deliberately false or misleading information spread to deceive people, disrupt societies, or manipulate public opinion. State actors often engage in disinformation campaigns to achieve geopolitical goals, influence elections, and undermine trust in democratic institutions. These campaigns can create confusion, polarize societies, and erode public confidence in the media and government, ultimately destabilizing the targeted nation.
The dangers of state-sponsored disinformation are profound because of the resources and sophistication these actors can deploy. They can leverage advanced technologies, cyber capabilities, and vast networks to amplify their messages, making it difficult for individuals to discern truth from falsehood. Additionally, state actors can tailor disinformation to exploit existing societal divisions, deepening polarization and conflict. This not only undermines social cohesion but also weakens a nation’s ability to respond to genuine crises effectively. Addressing the threat of disinformation requires coordinated efforts, including promoting media literacy, strengthening cybersecurity, and fostering international cooperation to hold perpetrators accountable.
Who Spreads Disinformation?
Disinformation can be spread by a variety of actors, each with their own motives and methods. Here are the main groups involved:
State Actors: Governments or government-affiliated entities often engage in disinformation campaigns to achieve political, economic, or military objectives. Examples include attempts to influence foreign elections, undermine trust in democratic institutions, or create instability in rival nations. State actors possess significant resources and sophisticated technologies to carry out these campaigns.
Political Groups: Political parties and movements might spread disinformation to gain an advantage over opponents, shape public opinion, or mobilize support. This can occur both within countries and on the international stage, affecting elections and policy debates.
Special Interest Groups: Organizations representing specific industries or causes may disseminate disinformation to protect their interests. This can include misinformation about scientific research, health information, environmental issues, or economic policies.
Businesses: Corporations and commercial entities might spread false information for competitive advantage, market manipulation, or to deflect negative publicity. This can include misleading advertising, financial misinformation, and corporate espionage.
Cybercriminals: Hackers and cybercriminals often use disinformation as part of broader cyber-attacks. Their goals can range from financial gain to disrupting organizations, stealing data, or causing reputational damage.
Extremist Groups: Ideologically driven groups, including terrorists and other extremist organizations, use disinformation to recruit members, spread propaganda, and incite violence. Their messages often aim to polarize societies and create chaos.
Individual Actors: Individuals, including social media influencers and ordinary users, can spread disinformation either intentionally or unintentionally. Personal biases, a desire for attention, or simply the sharing of unverified information can contribute to the spread of false narratives.
Foreign Propaganda Outlets: Media organizations controlled or influenced by foreign governments may disseminate disinformation to shape public perception and further the geopolitical aims of their sponsors.
News Media: Even reputable news organizations can sometimes spread disinformation, whether through sensationalism, biased reporting, or failure to properly fact-check. Additionally, some media outlets may have ideological leanings or financial incentives that lead them to publish misleading or false information.
Each of these actors leverages various platforms, including social media, traditional media, and online forums, to reach and influence their target audiences. The widespread nature of these activities makes combating disinformation a complex and ongoing challenge.
Why We Need to Understand Disinformation
Understanding disinformation and how it works is critical for several reasons:
Protecting Democracy: Disinformation can undermine democratic processes by spreading false information about candidates, policies, and elections. By recognizing disinformation, individuals can make informed decisions and support the integrity of democratic institutions.
Safeguarding Personal Information: Disinformation often accompanies scams and phishing attempts designed to steal personal information or money. Knowing how to identify and avoid disinformation can help protect against these threats.
Promoting Informed Decision-Making: Accurate information is essential for making informed decisions about health, finances, education, and other aspects of life. Understanding disinformation helps individuals critically evaluate the information they receive and make better choices.
Preventing Social Division: Disinformation can exacerbate social divisions by spreading falsehoods that fuel mistrust and conflict between different groups. By recognizing and rejecting disinformation, people can contribute to a more cohesive and harmonious society.
Enhancing Media Literacy: With the proliferation of digital media, it is important for individuals to develop media literacy skills. Understanding how disinformation works is a key component of media literacy, enabling people to critically assess the sources and content of the information they consume.
Resisting Manipulation: Disinformation is often used to manipulate public opinion and behavior for political, commercial, or ideological purposes. By understanding these tactics, individuals can better resist manipulation and maintain autonomy over their beliefs and actions.
Supporting Public Health: Disinformation about health issues, such as vaccines or treatments, can have serious consequences for public health. Understanding disinformation helps people make informed health choices and support evidence-based medical practices.
Strengthening National Security: State actors and other malicious entities use disinformation to weaken national security by creating confusion and distrust. Being aware of these tactics can help individuals support efforts to counteract these threats.
By understanding disinformation and its mechanisms, individuals can better navigate the complex information landscape, contribute to a more informed and resilient society, and protect themselves from the negative impacts of false information.
A Brief History of Disinformation: Pre-World War II to Present
Disinformation has a long history, evolving from simple deceptive tactics to sophisticated digital operations. Before World War II, disinformation was often used in military and political contexts. Ancient civilizations, such as the Greeks and Romans, used deceptive tactics to mislead enemies. By the 16th century, the printing press allowed false information to spread more widely, including during the Protestant Reformation, when both sides used pamphlets to spread misinformation.
During World War I, disinformation became more systematic. The British government created the War Propaganda Bureau to influence public opinion and enemy morale, spreading what were widely treated as fabricated, speculative reports of German atrocities in Belgium to build support for the war effort; ironically, many of those reports later proved to be true. Similarly, the Bolsheviks used disinformation to consolidate power during the Russian Revolution.
World War II saw further advancements in disinformation tactics. The Nazis used propaganda to justify their actions and demonize enemies, employing sophisticated techniques to control information. The British and Americans also engaged in disinformation, such as Operation Mincemeat, which successfully misled the Germans about the location of the Allied invasion of Southern Europe.
The Cold War era marked the beginning of state-sponsored disinformation on a global scale. The Soviet Union’s KGB was particularly active, spreading false information to discredit opponents and influence international opinion. The CIA also engaged in disinformation to counter Soviet influence, exemplified by the covert publication of books and articles to undermine communist ideologies.
The digital age has transformed disinformation. The internet and social media have provided new platforms for the rapid and widespread dissemination of false information. During the 2016 U.S. presidential election, Russian operatives used social media to spread disinformation, creating fake accounts and websites to influence public opinion and sow discord. Similar tactics have been observed in various other countries, as state and non-state actors exploit digital tools to advance their agendas.
In recent years, deepfakes and AI-generated content have emerged as new disinformation tools. These technologies allow for the creation of highly convincing false images, videos, and audio, making it increasingly difficult to discern truth from falsehood. Governments and tech companies are now grappling with the challenge of countering these sophisticated disinformation tactics.
Disinformation remains a significant threat to democratic institutions, public trust, and societal cohesion. Addressing it requires a combination of media literacy, robust cybersecurity measures, and international cooperation to hold perpetrators accountable.
How Disinformation Is Done
Introduction
Disinformation actors use a variety of tactics to influence others, stir them to action, and cause harm. Understanding these tactics can increase preparedness and promote resilience when faced with disinformation.
Disinformation operations spread narratives that pose a risk to critical infrastructure. Each of these tactics is designed to make disinformation actors’ messages more credible or to manipulate their audience toward a specific end. Actors often seek to polarize their target audience across contentious political or social divisions, making the audience more receptive to disinformation.
These methods can be, and have been, weaponized by disinformation actors to generate threats to U.S. critical infrastructure. CISA’s Tactics of Disinformation series helps organizations understand and manage the risks posed by disinformation by breaking down common tactics, sharing real-world examples, and providing concrete steps to counter these narratives with accurate information. Any organization and its staff can be targeted by disinformation campaigns, and all organizations have a role to play in building a resilient information environment. This guide, along with others available in the CISA MDM Resource Library, supports critical infrastructure organizations in assessing their risk posture and building resilience in their communities.
Disinformation Tactics Overview
Cultivate Fake or Misleading Personas and Websites: Disinformation actors create networks of fake personas and websites to increase the believability of their message with their target audience. Fake expert networks use inauthentic credentials (e.g., fake “experts”, journalists, think tanks, or academic institutions) to lend undue credibility to their influence content and make it more believable.
Create Deepfakes and Synthetic Media: Synthetic media content may include photos, videos, and audio clips that have been digitally manipulated or entirely fabricated to mislead the viewer. Artificial intelligence (AI) tools can make synthetic content nearly indistinguishable from real life. Synthetic media content may be deployed as part of disinformation campaigns to promote false information and manipulate audiences.
Devise or Amplify Conspiracy Theories: Conspiracy theories attempt to explain important events as secret plots by powerful actors. Conspiracy theories not only impact an individual’s understanding of a particular topic; they can shape and influence their entire worldview. Disinformation actors capitalize on conspiracy theories by generating disinformation narratives that align with the conspiracy worldview, increasing the likelihood that the narrative will resonate with the target audience.
Astroturfing and Flooding the Information Environment: Disinformation campaigns will often post overwhelming amounts of content with the same or similar messaging from several inauthentic accounts. This practice, known as astroturfing, creates the impression of widespread grassroots support or opposition to a message while concealing its true origin. A similar tactic, flooding, involves spamming social media posts and comment sections with the intention of shaping a narrative or drowning out opposing viewpoints.
Abuse Alternative Platforms: Disinformation actors may abuse alternative social media platforms to intensify belief in a disinformation narrative among specific user groups. Disinformation actors may seek to take advantage of platforms with fewer user protections, less stringent content moderation policies, and fewer controls to detect and remove inauthentic content and accounts than other social media platforms.
Exploit Information Gaps: Data voids, or information gaps, occur when there is insufficient credible information to satisfy a search inquiry. Disinformation actors can exploit these gaps by generating their own influence content and seeding the search term on social media to encourage people to look it up. This increases the likelihood that audiences will encounter disinformation content without any accurate or authoritative search results to refute it.
Manipulate Unsuspecting Actors: Disinformation actors target prominent individuals and organizations to help amplify their narratives. Targets are often unaware that they are repeating a disinformation actor’s narrative or that the narrative is intended to manipulate.
Spread Targeted Content: Disinformation actors produce tailored influence content likely to resonate with a specific audience based on their worldview and interests. These actors gain insider status and grow an online following that can make future manipulation efforts more successful. This tactic often takes a “long game” approach of spreading targeted content over time to build trust and credibility with the target audience.
Step 1: Cultivate Fake or Misleading Personas and Websites
Disinformation actors create networks of fake personas and websites to increase the believability of their message with their target audience. Such networks may include fake academic or professional “experts,” journalists, think tanks, and/or academic institutions. Some fake personas are even able to get their social media accounts verified (for example, with a blue or gray checkmark next to a username), further confusing audiences about their authenticity. Fake expert networks use inauthentic credentials to make their content more believable.
Disinformation actors also increase the credibility of these fake personas by generating falsified articles or research papers and sharing them online. Sometimes, these personas and their associated publications are intentionally amplified by other actors. In some instances, these materials are also unwittingly shared by legitimate organizations and users. The creation or amplification of content from these fake personas makes it difficult for audiences to distinguish real experts from fake ones.
Adversaries have also demonstrated a “long game” approach with this tactic by building a following and credibility with seemingly innocuous content before switching their focus to creating and amplifying disinformation. This lends false credibility to campaigns.
Examples
Russia’s military intelligence agency, the GRU, utilized fake experts in their influence efforts around the 2016 U.S. Presidential election. GRU operatives created fake think tanks and news sites populated with articles by inauthentic personas. They established dozens of public Facebook pages to post and amplify the content. Content ranged from expressing support for Russian interests in the Syrian and 2014 Ukrainian conflicts to issues of racial justice in the United States.
The Iranian-aligned network of fake websites and personas known as “Endless Mayfly” impersonated legitimate media outlets to spread disinformation narratives, then used its fake personas to amplify that content on social media.
Step 2: Create Deepfakes and Synthetic Media
Synthetic media content may include photos, videos, and audio clips that have been digitally manipulated or entirely fabricated to mislead the viewer. ‘Cheapfakes’ are a less sophisticated form of manipulation involving real audio clips or videos that have been sped up, slowed down, or shown out of context to mislead. In contrast, deepfakes are developed by training artificial intelligence (AI) algorithms on reference content until they can produce media that is nearly indistinguishable from real life. Deepfake technology makes it possible to convincingly depict someone doing something they haven’t done or saying something they haven’t said. While synthetic media technology is not inherently malicious, it can be deployed as part of disinformation campaigns to share false information or manipulate audiences.
Disinformation actors can use deepfake photos to generate realistic profile pictures for large networks of inauthentic social media accounts. Deepfake videos often use AI technology to map one person’s face onto another person’s body. In the case of audio deepfakes, a “voice clone” can produce new sentences as audio alone or as part of a video deepfake, often from only a few hours (or even minutes) of reference audio clips. Finally, an emerging use of deepfake technology involves AI-generated text, which can produce realistic writing and presents a unique challenge due to its ease of production.
Examples
The pro-Chinese political spam network Spamouflage Dragon used AI-generated profile pictures to create a cluster of inauthentic accounts that spread its English-language cheapfake videos attacking U.S. policy in June 2020. Many videos featured selectively edited news coverage overlaid by awkward, automated voice-overs and captions.
In September 2020, Facebook took down thirteen accounts attributed to the Russian Internet Research Agency that used AI-generated profile pictures to appear more believable to unwitting audiences.
Step 3: Devise or Amplify Conspiracy Theories
Conspiracy theories attempt to explain important events as secret plots by powerful actors. Conspiracy theories not only impact an individual’s understanding of a particular topic; they can shape and influence their entire worldview. Conspiracy theories often present an attractive alternative to reality by explaining uncertain events in a simple and seemingly cohesive manner, especially during times of heightened uncertainty and anxiety.
Disinformation actors capitalize on conspiracy theories by generating disinformation narratives that align with the conspiracy worldview, increasing the likelihood that the narrative will resonate with the target audience. By repeating certain tropes across multiple narratives, malign actors increase the target audience’s familiarity with the narrative and therefore its believability. Conspiracy theories can also present a pathway for radicalization to violence among certain adherents. Conspiracy theories can alter a person’s fundamental worldview and can be very difficult to counter retroactively, so proactive resilience building is especially critical to prevent conspiratorial thinking from taking hold.
Examples
In 2020, Chinese state media channels and government officials claimed that COVID-19 had originated in the United States and had been brought to China by a member of the U.S. military. Such narratives were present earlier among conspiracy theory communities online, but China’s extensive information operations networks legitimized and amplified them broadly across traditional and social media in an effort to deflect criticism of China’s own handling of the outbreak and to discredit its geopolitical rivals.
Russia’s Defense Ministry deployed the disinformation narrative that the U.S. government is funding military programs in Ukraine to produce bioweapons. Further amplified by the Chinese Foreign Ministry, these narratives seek to justify Russia’s invasion as a mission to neutralize the alleged bioweapons and to provide grounds for blaming the U.S. or Ukraine in a potential false-flag operation.
Step 4: Astroturfing and Flooding the Information Environment
Disinformation campaigns will often post overwhelming amounts of content with the same or similar messaging from several inauthentic accounts, either created by automated programs known as bots or by professional disinformation groups known as troll farms. When an audience consistently sees the same narrative repeated, they perceive it as a popular and widespread message and are more likely to believe it. This practice, known as astroturfing, creates the impression of widespread grassroots support or opposition to a message while concealing its true origin.
A similar tactic, flooding, involves spamming social media posts and comment sections with the intention of shaping a narrative or drowning out opposing viewpoints, often using many fake and/or automated accounts. Flooding may also be referred to as “firehosing.” This tactic is used to stifle legitimate debate, such as the discussion of a new policy or initiative, and discourage people from participating in online spaces. Information manipulators use flooding to dull the sensitivity of targets through repetition and create a sense that nothing is true. Researchers call these tactics “censorship by noise,” where artificially amplified narratives are meant to drown out all other viewpoints. Artificial intelligence and other advanced technologies enable astroturfing and flooding to be deployed at speed and scale, more easily manipulating the information environment and influencing public opinion.
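To make the mechanics of this tactic more concrete, the short sketch below shows one simple signal an analyst might look for when assessing possible astroturfing: many distinct accounts posting near-identical wording. This is a minimal illustration in Python, not any platform’s actual detection pipeline; the sample posts, account names, and similarity threshold are hypothetical, and real coordination analysis weighs many additional signals such as timing, network links, and account age.

```python
from difflib import SequenceMatcher

# Hypothetical sample of (account, message) posts. A real analysis would pull
# posts from a platform API or a published research dataset.
posts = [
    ("user_a", "The new policy is a disaster and everyone knows it"),
    ("user_b", "The new policy is a disaster and everyone knows it!"),
    ("user_c", "the new policy is a disaster & everyone knows it"),
    ("user_d", "I had a great time at the farmers market today"),
]

def similarity(a: str, b: str) -> float:
    """Rough text similarity in [0, 1] using difflib's built-in matcher."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def coordinated_clusters(posts, threshold=0.85):
    """Group near-identical messages posted by different accounts.

    Many distinct accounts pushing nearly the same wording is one simple
    astroturfing signal; on its own it is a hint, not proof.
    """
    clusters = []
    for account, text in posts:
        for cluster in clusters:
            if similarity(text, cluster["text"]) >= threshold:
                cluster["accounts"].add(account)
                break
        else:
            clusters.append({"text": text, "accounts": {account}})
    # Only clusters echoed by more than one account are of interest.
    return [c for c in clusters if len(c["accounts"]) > 1]

for cluster in coordinated_clusters(posts):
    print(f"{len(cluster['accounts'])} accounts posted near-identical text: {cluster['text']!r}")
```

The point of the sketch is simply that “widespread grassroots support” manufactured through astroturfing tends to leave measurable fingerprints, such as repeated wording across supposedly unrelated accounts.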
Examples
In 2016, Russian agents, part of the Internet Research Agency, impersonated activists on both sides of the political spectrum to flood social media channels with inflammatory content, as well as to call for activists to attend events.
The Chinese government has been suspected of hiring as many as two million people, known as the “50 Cent Party,” to flood the web in China with pro-regime messaging. The 50 Cent Party drowns out critics and distracts from policy issues by sharing an overwhelming amount of positive news on online platforms.
Step 5: Abuse Alternative Platforms
Disinformation actors often seek opportunities for their narratives to gain traction among smaller audiences before attempting to go viral. While alternative social media platforms are not inherently malicious, disinformation actors may take advantage of less stringent platform policies to intensify belief in a disinformation narrative among specific user groups. These policies may include fewer user protections, less stringent content moderation policies, and fewer controls to detect and remove inauthentic content and accounts than some of the other social media platforms.
Alternative platforms often promote unmoderated chat and file sharing/storage capabilities, which are not inherently malicious but may be appealing to actors who want to share disinformation. While some alternative platforms forbid the promotion of violence on public channels, they may have less visibility into private channels or groups promoting violence. Disinformation actors will recruit followers to alternative platforms by promoting a sense of community, shared purpose, and the perception of fewer restrictions. Groups on alternative platforms may operate without the scrutiny or detection capabilities that other platforms have. Often, groups focus on specific issues or activities to build audience trust, and disinformation actors can, in turn, abuse this trust and status to establish credibility on other platforms.
Examples
The Russian government has encouraged users to turn to specific platforms for pro-Kremlin content from state-affiliated media, including Sputnik and RT News. These channels spread disinformation about Russia’s invasion of Ukraine disguised as reporting from fake “war correspondents” or as fake “fact-checking.”
Foreign terrorist organizations sometimes leverage disinformation tactics to abuse alternative platforms as well. Terrorist organizations like ISIS have used such platforms to spread malign content, recruit new followers, and coordinate activities. Research shows that communications by ISIS on alternative platforms played a role in the uptick in terrorist attacks in Europe between 2015 and 2016.
Step 6: Exploit Information Gaps
Data voids, or information gaps, occur when there is insufficient credible information to satisfy a search inquiry, such as when a term falls out of use or when an emerging topic or event first gains prominence (e.g., breaking news).
When a user searches for the term or phrase, the only results available may be false, misleading, or of low credibility. While search engines work to mitigate this problem, disinformation actors can exploit the gap by generating their own influence content and seeding the search term on social media to encourage people to look it up. Because the specific terms that create data voids are difficult to identify beforehand, credible sources of information are often unable to proactively mitigate their impacts with accurate information. Disinformation actors can exploit data voids so that a target encounters disinformation without accurate information for context, increasing the likelihood the content is seen as true or authoritative. Additionally, people often perceive information that they find themselves on search engines as more credible, and it can be challenging to reverse the effects of disinformation once accepted.
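As a rough illustration of what a data void can look like in practice, the sketch below measures how much of a results page comes from a single domain. It is a toy heuristic, not a real search integration: the URLs are invented placeholders, and a high concentration is only a hint, not proof, that a small set of coordinated sources has filled a void.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical top search results for an obscure, newly seeded term. In a real
# data void there is little credible coverage, so a handful of coordinated
# outlets can fill most of the first page. These URLs are invented placeholders.
search_results = [
    "https://newsfront-example.ru/report-on-term",
    "https://newsfront-example.ru/follow-up",
    "https://mirror-site-example.com/copy-of-report",
    "https://mirror-site-example.com/another-copy",
    "https://personalblog-example.net/repost",
    "https://localpaper-example.org/unrelated-story",
]

def domain_concentration(urls, top_n=10):
    """Return the most frequent domain and its share of the visible results.

    A high concentration on an unfamiliar topic is a hint (not proof) that a
    small, possibly coordinated, set of sources is filling a data void.
    """
    domains = [urlparse(u).netloc for u in urls[:top_n]]
    top_domain, hits = Counter(domains).most_common(1)[0]
    return top_domain, hits / len(domains)

domain, share = domain_concentration(search_results)
print(f"{domain} supplies {share:.0%} of the visible results")
```

The same idea scales up in real investigations: when credible coverage of a term is thin, researchers look at how many apparently independent outlets are echoing the same few sources.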
Example
In 2015, as part of its effort to undermine opponents in the Syrian Civil War, Russia exploited data voids to falsely associate a Syrian humanitarian organization with terrorism. A small number of Russia-backed sources, including state media outlets, generated hundreds of articles that were further amplified by Russian disinformation networks on social media, overwhelming search engines with influence content. Individuals searching for information about the organization were met with many narratives pushing Russia’s agenda, which overwhelmed accurate, authoritative information sources that appeared lower down in search results.
Step 7: Manipulate Unsuspecting Actors
Disinformation campaigns target prominent individuals and organizations to help amplify their narratives. These secondary spreaders of disinformation narratives add perceived credibility to the messaging and help seed these narratives at the grassroots level while disguising their original source. Targets are often unaware that they are repeating a disinformation actor’s narrative or that the narrative is intended to manipulate. The content is engineered to appeal to their emotions and to those of their followers, causing the influencers to become unwitting facilitators of disinformation campaigns.
Examples
In 2016, the Russian Internet Research Agency conducted a campaign to spread divisive content and covertly recruited U.S. persons across the political spectrum to unknowingly amplify this content. In 2020, the Internet Research Agency again deployed a campaign, this time covertly recruiting unwitting journalists to write freelance articles for fabricated news outlets.
In August 2021, Facebook removed several accounts connected to a UK marketing firm for running Russian-linked influence operations. Starting in 2020, several fake accounts were created and began posting memes and comments claiming the AstraZeneca COVID-19 vaccine would turn recipients into chimpanzees. The hashtags and petitions associated with these accounts were then shared by several health and wellbeing influencers. The UK firm allegedly also contacted influencers on YouTube, Instagram, and TikTok to ask them to push anti-vaccine content for payment.
Following the United States’ “diplomatic boycott” of the 2022 Winter Olympics in Beijing, China hired a U.S.-based public relations firm to discreetly recruit social media influencers in the U.S. to amplify positive messaging, including disinformation, about China and the competition. Influencers were chosen to reach target audience segments with content that deflects from allegations of human rights abuses in China. Many posts did not properly attribute their sponsorship, a violation of platform requirements that made the seemingly organic content appear more credible.
Step 8: Spread Targeted Content
Disinformation actors surveil a targeted online community to understand its worldview, interests, and key influencers and then attempt to infiltrate it by posting tailored influence content likely to resonate with its members. By starting with entertaining or non-controversial posts that are agreeable to targeted communities, disinformation actors gain “insider” status and grow an online following that can make future manipulation efforts more successful. This tactic may be used in combination with cultivating fake experts, who spread targeted content over time, taking a “long game” approach that lends false credibility to the campaign. Targeted content often takes highly shareable forms, like memes or videos, and can be made to reach very specific audiences by methods such as paid advertising and exploited social media algorithms.
Examples
In its effort to sow division within the United States during the 2016 presidential election, the Russian Internet Research Agency (IRA) deployed a vast network of inauthentic social media accounts, pages, and groups to target specific American communities, including racial and ethnic groups and adherents to specific political movements or ideologies. For example, the IRA attempted to discourage participation among Black Americans in the electoral process by creating an ecosystem of connected fake accounts posing as media outlets. The network of fake accounts pushed repetitive narratives and sometimes manipulated legitimate influencers into amplifying its content, lending it the appearance of insider status within the community.
An extensive, pro-China network of inauthentic online accounts has expanded efforts to target global audiences in recent years. The operation has spread to dozens of social media platforms and websites, including alternative forums catering to niche audiences, and has deployed disinformation content in at least seven languages, including Russian and Spanish. Like the IRA efforts, many of the accounts in the pro-China network shared the same content and linked to in-network accounts on other platforms. The targeted content often seeks to spur real-world action. For example, in April 2020, content targeting Asian Americans sought to mobilize protests within the U.S. against findings that COVID-19 originated in China.
Actions You Can Take Against Disinformation
Although disinformation tactics are designed to deceive and manipulate, critically evaluating content and verifying information with credible sources before deciding to share it can increase resilience against disinformation and slow its spread. Share these tips:
Recognize the risk. Understand how disinformation actors leverage these tactics to push their agenda. Be wary of manipulative content that tries to divide.
Question the source. Critically evaluate the content and its origin to determine whether it’s trustworthy. Research the author’s credentials, consider the outlet’s agenda, and verify the supporting facts.
Investigate the issue. Conduct a thorough, unbiased search into contentious issues by looking at what credible sources are saying and considering other perspectives. Rely on credible sources of information, such as government sites.
Think before you link. Slow down. Don’t immediately click to share content you see online. Check the facts first. Some of the most damaging disinformation spreads rapidly via shared posts that seek to elicit an emotional reaction that overpowers critical thinking.
Talk with your social circle. Engage in private, respectful conversations with friends and family when you see them sharing information that looks like disinformation. Be thoughtful about what you post on social media.