
Scammers Are Using AI Right Now!

Europol Releases A Report On The Dangers Of Generative AI

How Scams Work – A SCARS Insight

The Criminal Use Of ChatGPT – A Cautionary Tale About Large Language Models

What Are Large Language Models?

A large language model (LLM) is a type of AI system that can process, manipulate, and generate text.

Training an LLM involves feeding it large amounts of text, such as books, articles, and websites, so that it learns the patterns and connections between words and can use them to generate new content.
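To make the idea of "learning patterns between words" concrete, here is a toy sketch in Python. It is not how real LLMs are built (they use neural networks trained on billions of words), and the tiny example corpus is invented for illustration, but it shows the core loop: learn which words tend to follow which, then sample new text from those patterns.

```python
import random
from collections import defaultdict

# Invented toy corpus; real models train on books, articles, and websites.
corpus = "the scammer sends a message and the victim reads the message".split()

# Learn simple word-to-next-word patterns from the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate new content by repeatedly sampling a plausible next word.
word = "the"
generated = [word]
for _ in range(6):
    followers = transitions.get(word)
    if not followers:
        break  # no learned continuation for this word
    word = random.choice(followers)
    generated.append(word)

print(" ".join(generated))
```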

ChatGPT is an LLM that was developed by OpenAI and released to the wider public as part of a research preview in November 2022.

The current publicly accessible model underlying ChatGPT is capable of processing and generating human-like text in response to user prompts. Specifically, the model can answer questions on a wide variety of topics, translate text, engage in conversational exchanges (‘chatting’), generate new content, and produce functional code.
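For readers curious what "responding to user prompts" looks like in practice, here is a minimal sketch that sends a prompt to an OpenAI model over the API. This is illustrative only: it assumes the 2023-era OpenAI Python library, the model name and placeholder API key are examples, and the exact interface may have changed since this article was written.

```python
import openai  # pip install openai (2023-era interface shown here)

# Placeholder credential; replace with your own API key.
openai.api_key = "YOUR_API_KEY"

# Send a single user prompt to the model and print its reply.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Translate 'Where is the train station?' into Spanish."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

The same few lines can carry any other prompt just as easily, which is precisely why the abuse scenarios described below scale so well.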

The Dark Side Of Large Language Models

As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals (scammers) provides a grim outlook.

The following three crime areas are among the many areas of concern recently identified by Europol’s experts:

  • Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
  • Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
  • Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.

As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.

Criminal Use Cases

The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output. Workshops that Europol held with subject matter experts from across its areas of expertise identified a diverse range of criminal use cases in GPT-3.5. A subsequent check of GPT-4, however, showed that all of them still worked; in some cases, the potentially harmful responses from GPT-4 were even more advanced.

ChatGPT excels at providing the user with ready-to-use information in response to a wide range of prompts. If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be explored further in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime, and child sexual abuse. The identified use cases that emerged from the workshops Europol carried out with its experts are by no means exhaustive. Rather, the aim is to give an idea of just how diverse and potentially dangerous LLMs such as ChatGPT can be in the hands of malicious actors.

While all of the information ChatGPT provides is freely available on the internet, the ability to ask the model contextual follow-up questions and receive specific steps in return makes it significantly easier for malicious actors to understand and subsequently carry out various types of crime.

Fraud, impersonation, and social engineering

ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes. Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organization or individual in a highly realistic manner even with only a basic grasp of the English language.

Critically, the context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business email compromise and CEO fraud. ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance, to promote a fraudulent investment offer.

To date, these types of deceptive communications have been something criminals would have to produce on their own. In the case of mass-produced campaigns, targets of these types of crime would often be able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically, and at a significantly increased scale.

The ability of LLMs to detect and reproduce language patterns not only facilitates phishing and online fraud, but can also be used more generally to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.

In addition to the criminal activities outlined above, the capabilities of ChatGPT lend themselves to a number of potential abuse cases in the area of terrorism, propaganda, and disinformation. The model can be used to gather general information that may facilitate terrorist activities, such as terrorism financing or anonymous file sharing.

ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort. For instance, ChatGPT can be used to generate online propaganda on behalf of other actors to promote or defend certain views that have been debunked as disinformation or fake news.

These examples provide merely a glimpse of what is possible. While ChatGPT refuses to provide answers to prompts it considers obviously malicious, it is possible, as with the other use cases detailed in this report, to circumvent these restrictions. Not only would this type of application facilitate the perpetration of disinformation, hate speech, and terrorist content online, it would also lend such content misplaced credibility: having been generated by a machine, it may appear more objective to some than if it had been produced by a human.

Cybercrime

In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. As with the other use cases, it is possible to generate a range of practical outputs in a matter of minutes by entering the right prompts. One of the crime areas for which this could have a significant impact is cybercrime. With the current version of ChatGPT, it is already possible to create basic tools for a variety of malicious purposes. Although these tools are only basic (e.g., producing phishing pages or malicious VBA scripts), this provides a start for cybercrime, as it enables someone without technical knowledge to exploit an attack vector on a victim’s system.

This type of automated code generation is particularly useful for those criminal actors with little to no knowledge of coding and development. Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures.

While the tools produced by ChatGPT are still quite simple, their active exploitation by threat actors provides a grim outlook in view of the inevitable improvement of such tools in the coming years. In fact, ChatGPT’s ability to transform natural language prompts into working code was quickly exploited by malicious actors to create malware. Shortly after the public release of ChatGPT, a Check Point Research blog post of December 2022 demonstrated how ChatGPT can be used to create a full infection flow, from spear-phishing to running a reverse shell that accepts commands in English.

The capability of generative models such as ChatGPT to assist with the development of code is expected to improve further over time. GPT-4, the latest release, has already made improvements over its previous versions and can, as a result, provide even more effective assistance for cybercriminal purposes. The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi.

Europol Tech Watch Flash – The Impact of Large Language Models on Law Enforcement

Read the full report (PDF): Europol Tech Watch Flash – The Impact of Large Language Models on Law Enforcement – https://romancescamsnow.com/wp-content/uploads/2023/03/Tech-Watch-Flash-The-Impact-of-Large-Language-Models-on-Law-Enforcement.pdf

Always Report All Scams – Anywhere In The World To:

Go to reporting.AgainstScams.org to learn how

U.S. FTC at https://reportfraud.ftc.gov/#/?orgcode=SCARS and SCARS at www.Anyscams.com

-/ 30 /-

What do you think about this?
Please share your thoughts in a comment below!

ARTICLE META

Published On: March 27th, 2023 | Last Updated: March 27th, 2023 | Categories: Cybersecurity, 2023, AI - Artificial Intelligence, Cybercrime, Insights | 1,560 words | 7.8 min read

Important Information for New Scam Victims

If you are looking for local trauma counselors, please visit counseling.AgainstScams.org, or join SCARS for our counseling/therapy benefit: membership.AgainstScams.org

If you need to speak with someone now, you can dial 988 or find phone numbers for crisis hotlines all around the world here: www.opencounseling.com/suicide-hotlines

A Note About Labeling!

We often use the term ‘scam victim’ in our articles, but this is a convenience to help those searching for information in search engines like Google; it has no deeper meaning. If you have come through such an experience, YOU are a Survivor! It was not your fault. You are not alone! Axios!

A Question of Trust

At the SCARS Institute, we invite you to do your own research on the topics we speak about and publish. Our team investigates the subjects being discussed, especially when it comes to understanding the scam victim/survivor experience. You can do Google searches, but in many cases you will have to wade through scientific papers and studies. However, remember that biases and perspectives matter and influence the outcome. Regardless, we encourage you to explore these topics as thoroughly as you can for your own awareness.

Statement About Victim Blaming

Some of our articles discuss various aspects of victims. This is about better understanding victims (the science of victimology), including their behaviors and psychology. It helps us educate victims/survivors about why these crimes happened so they do not blame themselves, better develop recovery programs, and help victims avoid scams in the future. At times this may sound like blaming the victim, but it is not; we are simply explaining the hows and whys of the experience victims have.

These articles, about the Psychology of Scams or Victim Psychology (meaning that all humans have psychological or cognitive characteristics in common that can either be exploited or work against us), help us all understand the unique challenges victims face before, during, and after scams, fraud, or cybercrimes. They sometimes discuss the vulnerabilities that scammers exploit. Victims rarely have control over these vulnerabilities, or are even aware of them, until something like a scam happens; then they can learn how their minds work and how to overcome these mechanisms.

Articles like these help victims and others understand these processes, avoid being exploited again, and recover more easily by understanding their post-scam behaviors. Learn more about the Psychology of Scams at www.ScamPsychology.org

Psychology Disclaimer:

All articles about psychology and the human brain on this website are for information & education only

The information provided in this article is intended for educational and self-help purposes only and should not be construed as a substitute for professional therapy or counseling.

While any self-help techniques outlined herein may be beneficial for scam victims seeking to recover from their experience and move towards recovery, it is important to consult with a qualified mental health professional before initiating any course of action. Each individual’s experience and needs are unique, and what works for one person may not be suitable for another.

Additionally, any approach may not be appropriate for individuals with certain pre-existing mental health conditions or trauma histories. It is advisable to seek guidance from a licensed therapist or counselor who can provide personalized support, guidance, and treatment tailored to your specific needs.

If you are experiencing significant distress or emotional difficulties related to a scam or other traumatic event, please consult your doctor or mental health provider for appropriate care and support.

Also read our SCARS Institute Statement about Professional Care for Scam Victims on our ScamsNOW.com website.

If you are in crisis, feeling desperate, or in despair please call 988 or your local crisis hotline.