The Criminal Use Of ChatGPT – A Cautionary Tale About Large Language Models
What Are Large Language Models?
A large language model is a type of AI system that can process, manipulate, and generate text.
Training a large language model (LLM) involves feeding it large amounts of data, such as books, articles, and websites, so that it can learn the patterns and connections between words and use them to generate new content.
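As a rough intuition for what "learning the patterns between words" means, consider the toy sketch below. It is not how ChatGPT works internally; real LLMs are neural networks trained on vast corpora, but the basic idea of predicting a plausible next word from observed patterns is the same. The tiny corpus and starting word here are placeholders.

```python
# A toy illustration of the statistical idea behind language-model training:
# count which word follows which, then sample likely continuations.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# "Training": record which words follow each word in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# "Generation": start from a word and repeatedly sample a plausible successor.
word = "the"
output = [word]
for _ in range(5):
    if word not in follows:  # no observed continuation; stop generating
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat"
```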
ChatGPT is an LLM that was developed by OpenAI and released to the wider public as part of a research preview in November 2022.
The current publicly accessible model underlying ChatGPT is capable of processing and generating human-like text in response to user prompts. Specifically, the model can answer questions on a vast range of topics, translate text, engage in conversational exchanges (‘chatting’), generate new content, and produce functional code.
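To make this prompt-and-response pattern concrete, here is a minimal sketch of how a program might send a prompt to a hosted LLM over HTTP, assuming OpenAI's public chat completions endpoint; the API key, model name, and prompt are placeholders.

```python
# A minimal sketch of sending a user prompt to a hosted LLM over HTTP.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set in the environment

payload = {
    "model": "gpt-3.5-turbo",  # the model family underlying ChatGPT at release
    "messages": [
        {"role": "user", "content": "Translate 'good morning' into Dutch."}
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()

# The generated text comes back in the first choice's message.
print(response.json()["choices"][0]["message"]["content"])
```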
The Dark Side Of Large Language Models
As the capabilities of LLMs such as ChatGPT continue to improve, the potential exploitation of these AI systems by criminals, including scammers, paints a grim picture.
The following three crime areas are amongst the many areas of concern recently identified by Europol’s experts:
- Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
- Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.
- Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.
As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.
Criminal Use Cases
The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output. Workshops involving subject matter experts from across Europol’s areas of expertise identified a diverse range of criminal use cases in GPT-3.5. A subsequent check of GPT-4, however, showed that all of them still worked. In some cases, the potentially harmful responses from GPT-4 were even more advanced.
ChatGPT excels at providing the user with ready-to-use information in response to a wide range of prompts. If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be explored further in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home to terrorism, cybercrime, and child sexual abuse. The use cases that emerged from the workshops Europol carried out with its experts are by no means exhaustive. Rather, the aim is to give an idea of just how diverse and potentially dangerous LLMs such as ChatGPT can be in the hands of malicious actors.
While all of the information ChatGPT provides is freely available on the internet, the ability to ask contextual follow-up questions and receive specific, step-by-step answers makes it significantly easier for malicious actors to understand and subsequently carry out various types of crime.
Fraud, impersonation, and social engineering
ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes. Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organization or individual in a highly realistic manner even with only a basic grasp of the English language.
Critically, the context of the phishing email can be adapted easily depending on the needs of the threat actor, ranging from fraudulent investment opportunities to business email compromise and CEO fraud. ChatGPT may therefore offer criminals new opportunities, especially for crimes involving social engineering, given its abilities to respond to messages in context and adopt a specific writing style. Additionally, various types of online fraud can be given added legitimacy by using ChatGPT to generate fake social media engagement, for instance, to promote a fraudulent investment offer.
Until recently, criminals had to produce these types of deceptive communications on their own. In mass-produced campaigns, targets of these types of crime were often able to identify the inauthentic nature of a message due to obvious spelling or grammar mistakes or its vague or inaccurate content. With the help of LLMs, these types of phishing and online fraud can be created faster, much more authentically, and at a significantly increased scale.
The ability of LLMs to detect and reproduce language patterns does not only facilitate phishing and online fraud; it can also be used more generally to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.
Terrorism, propaganda, and disinformation
In addition to the criminal activities outlined above, the capabilities of ChatGPT lend themselves to a number of potential abuse cases in the area of terrorism, propaganda, and disinformation. The model can, for instance, be used to gather information that may facilitate terrorist activities, such as terrorism financing or anonymous file sharing.
ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort. For instance, ChatGPT can be used to generate online propaganda on behalf of other actors to promote or defend certain views that have been debunked as disinformation or fake news.
These examples provide merely a glimpse of what is possible. While ChatGPT refuses to answer prompts it considers obviously malicious, it is possible, as with the other use cases detailed in this report, to circumvent these restrictions. This type of application would not only facilitate the perpetration of disinformation, hate speech, and terrorist content online; it could also lend such content misplaced credibility, since text generated by a machine may appear more objective to some than text produced by a human.
Cybercrime
In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. As with the other use cases, it is possible to generate a range of practical outputs in a matter of minutes by entering the right prompts. One of the crime areas for which this could have a significant impact is cybercrime. With the current version of ChatGPT, it is already possible to create basic tools for a variety of malicious purposes. While these tools are only basic (e.g. phishing pages or malicious VBA scripts), this provides a start for cybercrime, as it enables someone without technical knowledge to exploit an attack vector on a victim’s system.
This type of automated code generation is particularly useful for those criminal actors with little to no knowledge of coding and development. Critically, the safeguards preventing ChatGPT from providing potentially malicious code only work if the model understands what it is doing. If prompts are broken down into individual steps, it is trivial to bypass these safety measures.
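To illustrate what such a safeguard looks like from the provider's side, here is a minimal sketch that screens prompts with OpenAI's public moderation endpoint; the API key and example prompts are placeholders. Because each prompt is checked in isolation, a harmful task split into innocuous-looking steps can pass every individual check, which is precisely the weakness described above.

```python
# A minimal sketch of a per-prompt safeguard, assuming OpenAI's public
# moderation endpoint. Note that screening each prompt in isolation
# cannot see intent that is spread across several harmless-looking steps.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set in the environment

def prompt_is_flagged(prompt: str) -> bool:
    """Return True if the moderation model flags the prompt."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"][0]["flagged"]

# Each step of a decomposed request is checked on its own, with no
# memory of the steps that came before it.
for step in ["Write a function that opens a network socket.",
             "Now make it listen for incoming commands."]:
    print(step, "->", "flagged" if prompt_is_flagged(step) else "allowed")
```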
While the tools produced by ChatGPT are still quite simple, their active exploitation by threat actors provides a grim outlook in view of the inevitable improvement of such tools in the coming years. In fact, ChatGPT’s ability to transform natural language prompts into working code was quickly exploited by malicious actors to create malware. Shortly after the public release of ChatGPT, a Check Point Research blog post from December 2022 demonstrated how ChatGPT can be used to create a full infection flow, from spear-phishing to running a reverse shell that accepts commands in English.
The capabilities of generative models such as ChatGPT to assist with the development of code are expected to improve further over time. GPT-4, the latest release, has already improved on previous versions and can, as a result, provide even more effective assistance for cybercriminal purposes. The newer model is better at understanding the context of the code, as well as at correcting error messages and fixing programming mistakes. For a potential criminal with little technical knowledge, this is an invaluable resource. At the same time, a more advanced user can exploit these improved capabilities to further refine or even automate sophisticated cybercriminal modi operandi.