AI Is Already Changing Everything in Scams and Fraud, but It Is Going to Get Worse!
Commentary about AI by Michael Atleson, Attorney, FTC Division of Advertising Practices
You may have heard of simulation theory, the notion that nothing is real and we’re all part of a giant AI computer simulation program. Let’s assume at least for the length of this blog post that this notion is untrue. Nonetheless, we may be heading for a future in which a substantial portion of what we see, hear, and read is a computer-generated simulation. We always keep it real here at the FTC, but what happens when none of us can tell real from fake?
In a recent blog post, we discussed how the term “AI” can be used as a deceptive selling point for new products and services. Let’s call that the fake AI problem. Today’s topic is the use of AI behind the screen to create or spread deception. Let’s call this the AI fake problem. The latter is a deeper, emerging threat that companies across the digital ecosystem need to address. Now.
AI Fake Problem
Most of us spend lots of time looking at things on a device. Thanks to AI tools that create “synthetic media” or otherwise generate content, a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference. And just as these AI tools are becoming more advanced, they’re also becoming easier to access and use. Some of these tools may have beneficial uses, but scammers can also use them to cause widespread harm.
Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.
The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose.
So consider:
Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all. It’s become a meme, but here we’ll paraphrase Dr. Ian Malcolm, the Jeff Goldblum character in “Jurassic Park,” who admonished executives for being so preoccupied with whether they could build something that they didn’t stop to think if they should.
Are you effectively mitigating the risks? If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features, not bug corrections or optional features that third parties can undermine via modification or removal (a minimal sketch of one such built-in measure appears after these considerations). If your tool is intended to help people, also ask yourself whether it really needs to emulate humans or whether it can be just as effective looking, speaking, or acting like a bot.
Are you over-relying on post-release detection? Researchers continue to improve on detection methods for AI-generated videos, images, and audio. Recognizing AI-generated text is more difficult. But these researchers are in an arms race with companies developing the generative AI tools, and the fraudsters using these tools will often have moved on by the time someone detects their fake content. The burden shouldn’t be on consumers, anyway, to figure out if a generative AI tool is being used to scam them.
Are you misleading people about what they’re seeing, hearing, or reading? If you’re an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We’ve previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result – and in fact have resulted – in FTC enforcement actions.
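To make the "durable, built-in features" point above concrete, here is a minimal sketch of one such measure: a signed provenance manifest that travels with the generated content, so that stripping or editing the disclosure breaks verification instead of silently removing it. The Python below is illustrative only; the provider key, model identifier, and manifest fields are hypothetical stand-ins, and a real deployment would use asymmetric signatures and a standardized scheme such as C2PA content credentials rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the generative-AI provider.
# In practice this would be an asymmetric key managed securely;
# HMAC with a shared secret is used here only to keep the sketch short.
PROVIDER_KEY = b"example-provider-signing-key"

def attach_provenance(content: bytes) -> dict:
    """Wrap generated content in a signed provenance manifest.

    The manifest travels with the content, so stripping or editing it
    breaks verification -- a built-in feature rather than an optional
    disclosure a third party can simply remove.
    """
    manifest = {
        "generator": "example-model-v1",  # hypothetical model identifier
        "created": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content.hex(), "manifest": manifest}

def verify_provenance(package: dict) -> bool:
    """Return True only if the content and manifest are intact and signed."""
    manifest = dict(package["manifest"])
    signature = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest altered or signed by someone else
    content = bytes.fromhex(package["content"])
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

if __name__ == "__main__":
    package = attach_provenance(b"AI-generated image bytes ...")
    print(verify_provenance(package))       # True: content and manifest intact
    package["content"] = b"tampered bytes".hex()
    print(verify_provenance(package))       # False: content no longer matches
```

The key property is that the signature covers both the manifest fields and a hash of the content itself, so the disclosure cannot be detached from the media it describes, nor the media altered, without verification failing.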
While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.
Source: U.S. Federal Trade Commission
This article is by:
SCARS™ – Society of Citizens Against Relationship Scams Inc.
A nonprofit organization providing scam and financial fraud victim assistance and crime prevention, headquartered in Miami, Florida, USA & Monterrey, NL, Mexico, with partners in more than 60 countries worldwide
If you are a victim, sign up for our FREE SCARS Support Groups at support.AgainstScams.org
To learn more about SCARS, or to Join, Volunteer, or Donate visit: AgainstScams.org
To see Scammer Photos visit www.ScammerPhotos.com
Contact Us: Contact@AgainstScams.org
-/ 30 /-