Disinformation Campaigns And Tactics – An Engine Of Deception All Around Us – 2024


Disinformation Campaigns And Tactics

Deception Through Disinformation – Understanding What It Is and How It Works!

How Scams Work – A SCARS Guide

Authors:
•  Tim McGuinness, Ph.D., DFin, MCPO, MAnth – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
•  U.S. Cybersecurity and Infrastructure Security Agency (CISA)

Article Abstract

Disinformation, particularly from state actors, aims to deceive and manipulate by spreading false information to achieve geopolitical goals and influence public opinion. It creates confusion, deepens polarization, and undermines trust in democratic institutions. State-sponsored campaigns are particularly dangerous because of their resources and sophisticated technologies, which enable them to exploit societal divisions and destabilize nations.

Disinformation is spread by various actors, including political groups, special interest organizations, businesses, cybercriminals, extremists, individuals, foreign propaganda outlets, and even reputable news media. Combating disinformation requires coordinated efforts, including promoting media literacy, strengthening cybersecurity, and fostering international cooperation.

Effective tactics used in disinformation campaigns include creating fake personas, using deepfakes, amplifying conspiracy theories, astroturfing, exploiting information gaps, manipulating unsuspecting actors, and spreading targeted content.

Understanding and recognizing these tactics is important for protecting democracy, safeguarding personal information, supporting informed decision-making, preserving social cohesion, strengthening media literacy, and bolstering national security.


The Methods & Tactics of Disinformation – The Online Engine of Deception

Disinformation poses significant dangers, particularly when propagated by state actors. Disinformation refers to deliberately false or misleading information spread to deceive people, disrupt societies, or manipulate public opinion. State actors often engage in disinformation campaigns to achieve geopolitical goals, influence elections, and undermine trust in democratic institutions. These campaigns can create confusion, polarize societies, and erode public confidence in the media and government, ultimately destabilizing the targeted nation.

The dangers of state-sponsored disinformation are profound because of the resources and sophistication these actors can deploy. They can leverage advanced technologies, cyber capabilities, and vast networks to amplify their messages, making it difficult for individuals to discern truth from falsehood. Additionally, state actors can tailor disinformation to exploit existing societal divisions, deepening polarization and conflict. This not only undermines social cohesion but also weakens a nation’s ability to respond to genuine crises effectively. Addressing the threat of disinformation requires coordinated efforts, including promoting media literacy, strengthening cybersecurity, and fostering international cooperation to hold perpetrators accountable.

Who Spreads Disinformation?

Disinformation can be spread by a variety of actors, each with their own motives and methods. Here are the main groups involved:

State Actors: Governments or government-affiliated entities often engage in disinformation campaigns to achieve political, economic, or military objectives. Examples include attempts to influence foreign elections, undermine trust in democratic institutions, or create instability in rival nations. State actors possess significant resources and sophisticated technologies to carry out these campaigns.

Political Groups: Political parties and movements might spread disinformation to gain an advantage over opponents, shape public opinion, or mobilize support. This can occur both within countries and on the international stage, affecting elections and policy debates.

Special Interest Groups: Organizations representing specific industries or causes may disseminate disinformation to protect their interests. This can include false or misleading claims about scientific research, health, environmental issues, or economic policies.

Businesses: Corporations and commercial entities might spread false information for competitive advantage, market manipulation, or to deflect negative publicity. This can include misleading advertising, financial misinformation, and corporate espionage.

Cybercriminals: Hackers and cybercriminals often use disinformation as part of broader cyber-attacks. Their goals can range from financial gain to disrupting organizations, stealing data, or causing reputational damage.

Extremist Groups: Ideologically driven groups, including terrorists and other extremist organizations, use disinformation to recruit members, spread propaganda, and incite violence. Their messages often aim to polarize societies and create chaos.

Individual Actors: Individuals, including social media influencers and ordinary users, can spread disinformation either intentionally or unintentionally. Personal biases, a desire for attention, or simply the sharing of unverified information can contribute to the spread of false narratives.

Foreign Propaganda Outlets: Media organizations controlled or influenced by foreign governments may disseminate disinformation to shape public perception and further the geopolitical aims of their sponsors.

News Media: Even reputable news organizations can sometimes spread disinformation, whether through sensationalism, biased reporting, or failure to properly fact-check. Additionally, some media outlets may have ideological leanings or financial incentives that lead them to publish misleading or false information.

Each of these actors leverages various platforms, including social media, traditional media, and online forums, to reach and influence their target audiences. The widespread nature of these activities makes combating disinformation a complex and ongoing challenge.

Why We Need to Understand Disinformation

Understanding disinformation and how it works is critical for several reasons:

Protecting Democracy: Disinformation can undermine democratic processes by spreading false information about candidates, policies, and elections. By recognizing disinformation, individuals can make informed decisions and support the integrity of democratic institutions.

Safeguarding Personal Information: Disinformation often accompanies scams and phishing attempts designed to steal personal information or money. Knowing how to identify and avoid disinformation can help protect against these threats.

Promoting Informed Decision-Making: Accurate information is essential for making informed decisions about health, finances, education, and other aspects of life. Understanding disinformation helps individuals critically evaluate the information they receive and make better choices.

Preventing Social Division: Disinformation can exacerbate social divisions by spreading falsehoods that fuel mistrust and conflict between different groups. By recognizing and rejecting disinformation, people can contribute to a more cohesive and harmonious society.

Enhancing Media Literacy: With the proliferation of digital media, it is important for individuals to develop media literacy skills. Understanding how disinformation works is a key component of media literacy, enabling people to critically assess the sources and content of the information they consume.

Resisting Manipulation: Disinformation is often used to manipulate public opinion and behavior for political, commercial, or ideological purposes. By understanding these tactics, individuals can better resist manipulation and maintain autonomy over their beliefs and actions.

Supporting Public Health: Disinformation about health issues, such as vaccines or treatments, can have serious consequences for public health. Understanding disinformation helps people make informed health choices and support evidence-based medical practices.

Strengthening National Security: State actors and other malicious entities use disinformation to weaken national security by creating confusion and distrust. Being aware of these tactics can help individuals support efforts to counteract these threats.

By understanding disinformation and its mechanisms, individuals can better navigate the complex information landscape, contribute to a more informed and resilient society, and protect themselves from the negative impacts of false information.

A Brief History of Disinformation: Pre-World War II to Present

Disinformation has a long history, evolving from simple deceptive tactics to sophisticated digital operations. Before World War II, disinformation was often used in military and political contexts. Ancient civilizations, such as the Greeks and Romans, used deceptive tactics to mislead enemies. In the 16th century, the printing press allowed for the spread of false information more widely, including during the Protestant Reformation when both sides used pamphlets to spread misinformation.

During World War I, disinformation became more systematic. The British government created the War Propaganda Bureau to influence public opinion and enemy morale. It spread stories to garner support for the war effort, such as reports of German atrocities in Belgium that were widely assumed to be fabrications – though ironically, they largely turned out to be true. Similarly, the Bolsheviks used disinformation to consolidate power during the Russian Revolution.

World War II saw further advancements in disinformation tactics. The Nazis used propaganda to justify their actions and demonize enemies, employing sophisticated techniques to control information. The British and Americans also engaged in disinformation, such as Operation Mincemeat, which successfully misled the Germans about the location of the Allied invasion of Southern Europe.

The Cold War era marked the beginning of state-sponsored disinformation on a global scale. The Soviet Union’s KGB was particularly active, spreading false information to discredit opponents and influence international opinion. The CIA also engaged in disinformation to counter Soviet influence, exemplified by the covert publication of books and articles to undermine communist ideologies.

The digital age has transformed disinformation. The internet and social media have provided new platforms for the rapid and widespread dissemination of false information. During the 2016 U.S. presidential election, Russian operatives used social media to spread disinformation, creating fake accounts and websites to influence public opinion and sow discord. Similar tactics have been observed in various other countries, as state and non-state actors exploit digital tools to advance their agendas.

In recent years, deepfakes and AI-generated content have emerged as new disinformation tools. These technologies allow for the creation of highly convincing false images, videos, and audio, making it increasingly difficult to discern truth from falsehood. Governments and tech companies are now grappling with the challenge of countering these sophisticated disinformation tactics.

Disinformation remains a significant threat to democratic institutions, public trust, and societal cohesion. Addressing it requires a combination of media literacy, robust cybersecurity measures, and international cooperation to hold perpetrators accountable.

HOW DISINFORMATION IS DONE!

Introduction

Disinformation actors use a variety of tactics to influence others, stir them to action, and cause harm. Understanding these tactics can increase preparedness and promote resilience when faced with disinformation.

Disinformation operations spread narratives that can pose a risk to critical infrastructure. Each of these tactics is designed to make disinformation actors' messages more credible or to manipulate their audience to a specific end. Disinformation actors often seek to polarize their target audience across contentious political or social divisions, making the audience more receptive to disinformation.

These methods can be, and have been, weaponized by disinformation actors to generate threats to U.S. critical infrastructure. The Tactics of Disinformation series helps organizations understand and manage the risks posed by disinformation by breaking down common tactics, sharing real-world examples, and providing concrete steps to counter these narratives with accurate information. Any organization and its staff can be targeted by disinformation campaigns, and all organizations have a role to play in building a resilient information environment. This product, and others available in the CISA MDM Resource Library, supports critical infrastructure organizations in assessing their risk posture and building resilience in their communities.

Disinformation Tactics Overview

Cultivate Fake or Misleading Personas and Websites: Disinformation actors create networks of fake personas and websites to increase the believability of their message with their target audience. Fake expert networks use inauthentic credentials (e.g., fake “experts”, journalists, think tanks, or academic institutions) to lend undue credibility to their influence content and make it more believable.

Create Deepfakes and Synthetic Media: Synthetic media content may include photos, videos, and audio clips that have been digitally manipulated or entirely fabricated to mislead the viewer. Artificial intelligence (AI) tools can make synthetic content nearly indistinguishable from real life. Synthetic media content may be deployed as part of disinformation campaigns to promote false information and manipulate audiences.

Devise or Amplify Conspiracy Theories: Conspiracy theories attempt to explain important events as secret plots by powerful actors. Conspiracy theories not only impact an individual’s understanding of a particular topic; they can shape and influence their entire worldview. Disinformation actors capitalize on conspiracy theories by generating disinformation narratives that align with the conspiracy worldview, increasing the likelihood that the narrative will resonate with the target audience.

Astroturfing and Flooding the Information Environment: Disinformation campaigns will often post overwhelming amounts of content with the same or similar messaging from several inauthentic accounts. This practice, known as astroturfing, creates the impression of widespread grassroots support or opposition to a message while concealing its true origin. A similar tactic, flooding, involves spamming social media posts and comment sections with the intention of shaping a narrative or drowning out opposing viewpoints.

Abuse Alternative Platforms: Disinformation actors may abuse alternative social media platforms to intensify belief in a disinformation narrative among specific user groups. Disinformation actors may seek to take advantage of platforms with fewer user protections, less stringent content moderation policies, and fewer controls to detect and remove inauthentic content and accounts than other social media platforms.

Exploit Information Gaps: Data voids, or information gaps, occur when there is insufficient credible information to satisfy a search inquiry. Disinformation actors can exploit these gaps by generating their own influence content and seeding the search term on social media to encourage people to look it up. This increases the likelihood that audiences will encounter disinformation content without any accurate or authoritative search results to refute it.

Manipulate Unsuspecting Actors: Disinformation actors target prominent individuals and organizations to help amplify their narratives. Targets are often unaware that they are repeating a disinformation actor’s narrative or that the narrative is intended to manipulate.

Spread Targeted Content: Disinformation actors produce tailored influence content likely to resonate with a specific audience based on their worldview and interests. These actors gain insider status and grow an online following that can make future manipulation efforts more successful. This tactic often takes a “long game” approach of spreading targeted content over time to build trust and credibility with the target audience.

Step 1: Cultivate Fake or Misleading Personas and Websites

Disinformation actors create networks of fake personas and websites to increase the believability of their message with their target audience. Such networks may include fake academic or professional “experts,” journalists, think tanks, and/or academic institutions. Some fake personas even manage to get their social media accounts verified (for example, with a blue or gray checkmark next to the username), further confusing audiences about their authenticity. Fake expert networks use these inauthentic credentials to make their content more believable.

Disinformation actors also increase the credibility of these fake personas by generating falsified articles or research papers and sharing them online. Sometimes, these personas and their associated publications are intentionally amplified by other actors. In some instances, these materials are also unwittingly shared by legitimate organizations and users. The creation or amplification of content from these fake personas makes it difficult for audiences to distinguish real experts from fake ones.

Adversaries have also demonstrated a “long game” approach with this tactic by building a following and credibility with seemingly innocuous content before switching their focus to creating and amplifying disinformation. This lends false credibility to campaigns.
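
This "long game" pattern leaves measurable traces, such as account age, posting rate, and follow patterns. As a purely illustrative sketch – the accounts, feature set, and thresholds below are invented for this example, not a SCARS or CISA method – a crude heuristic screen for suspicious personas might look like this in Python:

from dataclasses import dataclass

@dataclass
class Account:
    """A few profile features commonly cited in inauthentic-account research."""
    handle: str
    age_days: int         # days since the account was created
    posts_per_day: float  # average posting rate
    followers: int
    following: int

def suspicion_score(a: Account) -> int:
    """Count crude red flags; a higher score means the account merits a closer look.

    These are heuristics only. Real investigations weigh many more signals
    (content, timing, network structure), and no single flag is conclusive.
    """
    flags = 0
    if a.age_days < 90:                         # very new account
        flags += 1
    if a.posts_per_day > 50:                    # inhumanly high output
        flags += 1
    if a.following > 10 * max(a.followers, 1):  # mass-follows with few followers back
        flags += 1
    return flags

# Hypothetical accounts for illustration:
for acct in [Account("@insta_expert", 20, 80.0, 15, 4000),
             Account("@longtime_user", 2400, 3.0, 500, 450)]:
    print(acct.handle, "red flags:", suspicion_score(acct))

No single signal here is proof of inauthenticity; the point is that "long game" personas tend to look statistically different from organic accounts when several such signals are combined.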

Examples

Russia’s military intelligence agency, the GRU, utilized fake experts in its influence efforts around the 2016 U.S. presidential election. GRU operatives created fake think tanks and news sites populated with articles by inauthentic personas. They established dozens of public Facebook pages to post and amplify the content, which ranged from expressing support for Russian interests in the Syrian and 2014 Ukrainian conflicts to issues of racial justice in the United States.

The Iranian-aligned network of fake websites and personas known as “Endless Mayfly” impersonates legitimate media outlets to spread disinformation narratives. It then uses its fake personas to amplify the content on social media.

Step 2: Create Deepfakes and Synthetic Media

Synthetic media content may include photos, videos, and audio clips that have been digitally manipulated or entirely fabricated to mislead the viewer. ‘Cheapfakes’ are a less sophisticated form of manipulation involving real audio clips or videos that have been sped up, slowed down, or shown out of context to mislead. In contrast, deepfakes are developed by training artificial intelligence (AI) algorithms on reference content until they can produce media that is nearly indistinguishable from real life. Deepfake technology makes it possible to convincingly depict someone doing something they haven’t done or saying something they haven’t said. While synthetic media technology is not inherently malicious, it can be deployed as part of disinformation campaigns to share false information or manipulate audiences.

Disinformation actors can use deepfake photos to generate realistic profile pictures and build a large network of inauthentic social media accounts. Deepfake videos often use AI technology to map one person’s face onto another person’s body. In the case of audio deepfakes, a “voice clone” can produce new sentences as standalone audio or as part of a video deepfake, often from only a few hours (or even minutes) of reference audio. Finally, an emerging use of deepfake technology involves AI-generated text, which can produce realistic writing and presents a unique challenge due to its ease of production.
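
Cheapfakes, in particular, usually recycle existing imagery, which makes them detectable by comparison against a known original. As a minimal sketch – assuming the open-source Pillow and ImageHash Python libraries, with hypothetical file names – perceptual hashing can flag a resized, recompressed, or lightly edited copy of an image:

# pip install Pillow ImageHash   (third-party libraries assumed by this sketch)
from PIL import Image
import imagehash

def near_duplicate(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Flag two images as near-duplicates using perceptual hashing.

    A small Hamming distance between perceptual hashes suggests one image is a
    resized, recompressed, or lightly edited copy of the other - a common trait
    of recycled 'cheapfake' imagery. This will NOT catch wholly synthetic
    deepfakes, which have no original to compare against.
    """
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # subtraction yields Hamming distance

# Hypothetical file names for illustration:
print(near_duplicate("original_news_photo.jpg", "suspect_meme.jpg"))

This is essentially how reverse-image search helps debunk recycled footage; fully synthetic deepfakes require different forensic techniques.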

Examples

The pro-Chinese political spam network Spamouflage Dragon used AI-generated profile pictures to create a cluster of inauthentic accounts that spread its English-language cheapfake videos attacking U.S. policy in June 2020. Many videos featured selectively edited news coverage overlaid with awkward, automated voice-overs and captions.

In September 2020, Facebook took down thirteen accounts attributed to the Russian Internet Research Agency that used AI-generated profile pictures to appear more believable to unwitting audiences.

Step 3: Devise or Amplify Conspiracy Theories

Conspiracy theories attempt to explain important events as secret plots by powerful actors. Conspiracy theories not only impact an individual’s understanding of a particular topic; they can shape and influence their entire worldview. Conspiracy theories often present an attractive alternative to reality by explaining uncertain events in a simple and seemingly cohesive manner, especially during times of heightened uncertainty and anxiety.

Disinformation actors capitalize on conspiracy theories by generating disinformation narratives that align with the conspiracy worldview, increasing the likelihood that the narrative will resonate with the target audience. By repeating certain tropes across multiple narratives, malign actors increase the target audience’s familiarity with the narrative and therefore its believability. Conspiracy theories can also present a pathway for radicalization to violence among certain adherents. Conspiracy theories can alter a person’s fundamental worldview and can be very difficult to counter retroactively, so proactive resilience building is especially critical to prevent conspiratorial thinking from taking hold.

Examples

In 2020, Chinese state media channels and government officials claimed that COVID-19 had originated in the United States and had been brought to China by a member of the U.S. military. Such narratives were present earlier among conspiracy theory communities online, but China’s extensive information operations networks legitimized and amplified these narratives broadly across traditional and social media in an effort to redirect criticism from China’s own handling of the outbreak and to attempt to discredit its geopolitical rivals.

Russia’s Defense Ministry deployed the disinformation narrative that the U.S. government is funding military programs in Ukraine to produce bioweapons. Further amplified by the Chinese Foreign Ministry, these narratives seek to justify Russia’s invasion as a mission to neutralize the alleged bioweapons and to provide grounds for blaming the U.S. or Ukraine in a potential false-flag operation.

Step 4: Astroturfing and Flooding the Information Environment

Disinformation campaigns will often post overwhelming amounts of content with the same or similar messaging from several inauthentic accounts, either created by automated programs known as bots or by professional disinformation groups known as troll farms. By consistently seeing the same narrative repeated, the audience sees it as a popular and widespread message and is more likely to believe it. This practice, known as astroturfing, creates the impression of widespread grassroots support or opposition to a message while concealing its true origin.

A similar tactic, flooding, involves spamming social media posts and comment sections with the intention of shaping a narrative or drowning out opposing viewpoints, often using many fake and/or automated accounts. Flooding may also be referred to as “firehosing.” This tactic is used to stifle legitimate debate, such as the discussion of a new policy or initiative, and discourage people from participating in online spaces. Information manipulators use flooding to dull the sensitivity of targets through repetition and create a sense that nothing is true. Researchers call these tactics “censorship by noise,” where artificially amplified narratives are meant to drown out all other viewpoints. Artificial intelligence and other advanced technologies enable astroturfing and flooding to be deployed at speed and scale, more easily manipulating the information environment and influencing public opinion.
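
Because astroturfing depends on many accounts repeating near-identical text, researchers often look for unusually high textual overlap across supposedly independent accounts. The following is a minimal sketch of that idea in plain Python – the accounts and posts are invented, and real coordinated-behavior detection also weighs timing, networks, and scale:

from itertools import combinations

def shingles(text: str, k: int = 3) -> set:
    """Break a post into overlapping k-word shingles for fuzzy comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_coordinated(posts: dict, threshold: float = 0.6) -> list:
    """Return account pairs whose posts are suspiciously similar."""
    sigs = {account: shingles(text) for account, text in posts.items()}
    return [(a, b, round(jaccard(sigs[a], sigs[b]), 2))
            for a, b in combinations(sigs, 2)
            if jaccard(sigs[a], sigs[b]) >= threshold]

# Hypothetical posts; two accounts repeat near-identical talking points.
posts = {
    "@acct1": "The new policy is a disaster and everyone knows it wake up",
    "@acct2": "The new policy is a disaster and everyone knows it wake up people",
    "@acct3": "I had a great time at the farmers market this weekend",
}
print(flag_coordinated(posts))  # flags the @acct1/@acct2 pair only

Organic conversation produces varied wording; sustained word-for-word repetition across accounts is one of the simplest statistical fingerprints of an astroturfing or flooding operation.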

Examples

In 2016, Russian agents, part of the Internet Research Agency, impersonated activists on both sides of the political spectrum to flood social media channels with inflammatory content, as well as to call for activists to attend events.

The Chinese government has been suspected of hiring as many as two million people, known as the “50 Cent Party,” to flood the web in China with pro-regime messaging. The 50 Cent Party drowns out critics and distracts from policy issues by sharing an overwhelming amount of positive news on online platforms.

Step 5: Abuse Alternative Platforms

Disinformation actors often seek opportunities for their narratives to gain traction among smaller audiences before attempting to go viral. While alternative social media platforms are not inherently malicious, disinformation actors may take advantage of less stringent platform policies to intensify belief in a disinformation narrative among specific user groups. These policies may include fewer user protections, less stringent content moderation policies, and fewer controls to detect and remove inauthentic content and accounts than some of the other social media platforms.

Alternative platforms often promote unmoderated chat and file sharing/storage capabilities, which are not inherently malicious but may be appealing to actors who want to share disinformation. While some alternative platforms forbid the promotion of violence on public channels, they may have less visibility into private channels or groups promoting violence. Disinformation actors recruit followers to alternative platforms by promoting a sense of community, shared purpose, and the perception of fewer restrictions. Groups on alternative platforms may operate without the scrutiny or detection capabilities that other platforms have. Often, groups focus on specific issues or activities to build audience trust, and disinformation actors can, in turn, abuse this trust and status to establish credibility on other platforms.

Examples

The Russian government has encouraged users to turn to specific platforms for pro-Kremlin content from state-affiliated media, including Sputnik and RT News. These channels spread disinformation through fake “war correspondents” and fake “fact-checking” accounts covering Russia’s invasion of Ukraine.

Foreign terrorist organizations sometimes leverage disinformation tactics to abuse alternative platforms as well. Terrorist organizations like ISIS have leveraged the platforms to spread malign content, recruit new followers, and coordinate activities. Research shows that communications by ISIS on alternative platforms played a role in the uptick in terrorist attacks in Europe between 2015 and 2016.

Step 6: Exploit Information Gaps

Data voids, or information gaps, occur when there is insufficient credible information to satisfy a search inquiry, such as when a term falls out of use or when an emerging topic or event first gains prominence (e.g., breaking news).

When a user searches for the term or phrase, the only results available may be false, misleading, or of low credibility. While search engines work to mitigate this problem, disinformation actors can exploit these gaps by generating their own influence content and seeding the search term on social media to encourage people to look it up. Because the specific terms that create data voids are difficult to identify in advance, credible sources of information are often unable to proactively mitigate their impact with accurate information. Disinformation actors can exploit data voids to increase the likelihood a target will encounter disinformation without accurate information for context, increasing the likelihood the content is seen as true or authoritative. Additionally, people often perceive information they find themselves on search engines as more credible, and it can be challenging to reverse the effects of disinformation once it has been accepted.
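
One rough way to reason about a possible data void is to ask what fraction of the results returned for a query come from already-trusted sources. The sketch below is illustrative only – the allowlist is invented, the URLs are hypothetical, and the domain parsing is deliberately naive:

from urllib.parse import urlparse

# Illustrative allowlist; any real use would need a vetted, topic-specific list.
TRUSTED_DOMAINS = {"cdc.gov", "who.int", "reuters.com", "apnews.com"}

def credible_fraction(result_urls: list) -> float:
    """Crude data-void indicator: share of results from known credible domains.

    A low score for a query suggests authoritative sources have not yet covered
    the term, so whatever does rank may be seeded influence content.
    """
    def domain(url: str) -> str:
        host = urlparse(url).netloc.lower()
        return ".".join(host.split(".")[-2:])  # naive eTLD+1; fails on e.g. .co.uk

    if not result_urls:
        return 0.0
    hits = sum(domain(u) in TRUSTED_DOMAINS for u in result_urls)
    return hits / len(result_urls)

# Hypothetical search results for a freshly seeded term:
results = ["https://obscure-blog.example/secret-truth",
           "https://another-site.example/what-they-hide",
           "https://www.reuters.com/fact-check/some-claim"]
print(f"{credible_fraction(results):.0%} of results from trusted domains")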

Example

In 2015 as part of its effort to undermine opponents in the Syrian Civil War, Russia exploited data voids to falsely associate a Syrian humanitarian organization with terrorism. A small number of Russia-backed sources, including state media outlets, generated hundreds of articles that were further amplified by Russian disinformation networks on social media, overwhelming search engines with influence content. Individuals searching for information about the organization were met with many narratives pushing Russia’s agenda, which overwhelmed accurate authoritative information sources that appeared lower down in search results.

Step 7: Manipulate Unsuspecting Actors

Disinformation campaigns target prominent individuals and organizations to help amplify their narratives. These secondary spreaders of disinformation narratives add perceived credibility to the messaging and help seed the narratives at the grassroots level while disguising their original source. Targets are often unaware that they are repeating a disinformation actor’s narrative or that the narrative is intended to manipulate. The content is engineered to appeal to their and their followers’ emotions, turning the targets into unwitting facilitators of disinformation campaigns.

Examples

In 2016, the Russian Internet Research Agency conducted a campaign to spread divisive content and covertly recruited U.S. persons across the political spectrum to unknowingly amplify this content. Then again in 2020, the Russian Internet Research Agency deployed a campaign to covertly recruit unwitting journalists to write freelance for fabricated news outlets.

In August 2021, Facebook removed several accounts connected to a UK marketing firm for its Russian-linked operations. Starting in 2020, several fake accounts were created and began posting memes and comments claiming the AstraZeneca COVID-19 vaccine would turn recipients into chimpanzees. The hashtags and petitions associated with these accounts were then shared by several health and wellbeing influencers. The UK firm allegedly also contacted influencers on YouTube, Instagram, and TikTok to ask them to push anti-vaccine content for payment.

Following the United States’ “diplomatic boycott” of the 2022 Winter Olympics in Beijing, China hired a U.S.-based public relations firm to discreetly recruit social media influencers in the U.S. to amplify positive messaging, including disinformation, about China and the competition. Influencers were chosen to reach target audience segments with content that deflects from allegations of human rights abuses in China. Many posts did not properly attribute their sponsorship, a violation of platform requirements that increased the seemingly organic content’s credibility.

Step 8: Spread Targeted Content

Disinformation actors surveil a targeted online community to understand its worldview, interests, and key influencers, and then attempt to infiltrate it by posting tailored influence content likely to resonate with its members. By starting with entertaining or non-controversial posts that are agreeable to the targeted community, disinformation actors gain “insider” status and grow an online following that can make future manipulation efforts more successful. This tactic may be used in combination with cultivating fake experts, who spread targeted content over time, taking a “long game” approach that lends false credibility to the campaign. Targeted content often takes highly shareable forms, like memes or videos, and can be made to reach very specific audiences through methods such as paid advertising and the exploitation of social media algorithms.

Examples

In its effort to sow division within the United States during the 2016 presidential election, the Russian Internet Research Agency (IRA) deployed a vast network of inauthentic social media accounts, pages, and groups to target specific American communities, including racial and ethnic groups and adherents to specific political movements or ideologies. For example, the IRA attempted to discourage participation among Black Americans in the electoral process by creating an ecosystem of connected fake accounts posing as media outlets. The network of fake accounts pushed repetitive narratives and sometimes manipulated legitimate influencers into amplifying its content, lending it the appearance of insider status within the community.

An extensive, pro-China network of inauthentic online accounts has expanded efforts to target global audiences in recent years. The operation has spread to dozens of social media platforms and websites, including alternative forums catering to niche audiences, and has deployed disinformation content in at least seven languages, including Russian and Spanish. Like the IRA efforts, many of the accounts in the pro-China network shared the same content and linked to in-network accounts on other platforms. The targeted content often seeks to spur real-world action. For example, in April 2020, content targeting Asian Americans sought to mobilize protests within the U.S. against findings that COVID-19 originated in China.

Actions You Can Take Against Disinformation

Although disinformation tactics are designed to deceive and manipulate, critically evaluating content and verifying information with credible sources before deciding to share it can increase resilience against disinformation and slow its spread. Share these tips:

Recognize the risk. Understand how disinformation actors leverage these tactics to push their agenda. Be wary of manipulative content that tries to divide.

Question the source. Critically evaluate the content and its origin to determine whether it’s trustworthy. Research the author’s credentials, consider the outlet’s agenda, and verify the supporting facts.

Investigate the issue. Conduct a thorough, unbiased search into contentious issues by looking at what credible sources are saying and considering other perspectives. Rely on credible sources of information, such as government sites.

Think before you link. Slow down. Don’t immediately click to share content you see online. Check the facts first. Some of the most damaging disinformation spreads rapidly via shared posts that seek to elicit an emotional reaction that overpowers critical thinking.

Talk with your social circle. Engage in private, respectful conversations with friends and family when you see them sharing information that looks like disinformation. Be thoughtful about what you post on social media.



