
SCARS Institute’s Encyclopedia of Scams™ Published Continuously for 25 Years


The Danger of Using AI in Law Enforcement – The Grandmother Wrongly Arrested as a Fraudster
The AI’s Accusation: How a Tennessee Grandmother Lost Everything to an Artificial Intelligence Mistake
A True Victim’s Story – A SCARS Institute Insight
Author:
• Tim McGuinness, Ph.D., DFin, MCPO, MAnth – Anthropologist, Scientist, Director of the Society of Citizens Against Relationship Scams Inc.
Article Abstract
A Tennessee grandmother, Angela Lipps, was wrongfully arrested and jailed for nearly six months after facial recognition software incorrectly identified her as a suspect in a North Dakota bank fraud case. Law enforcement relied heavily on the AI-generated match without verifying her identity or confirming her presence in the state. Despite her insistence that she was innocent, she was extradited, and the charges stood until her attorney proved she had been in Tennessee at the time of the crimes. The case highlights the limitations of facial recognition technology, the risks of confirmation bias, and the lack of accountability that follows such errors. Lipps lost her home, car, and personal stability, underscoring the serious consequences of unverified AI use in law enforcement.

“You Have The Wrong Woman!”
In an era where artificial intelligence is increasingly trusted to guide critical decisions, the assumption of accuracy can create a dangerous illusion of certainty. The case of Angela Lipps, a Tennessee grandmother wrongfully identified by AI facial recognition software and subjected to wrongful arrest, incarceration, and devastating personal loss, reveals how quickly that illusion can collapse. Her experience illustrates what happens when technology is treated as evidence rather than a tool requiring human verification. It also exposes the human cost of overreliance on automated systems in law enforcement, where a single unverified match can override common sense, due process, and basic investigative diligence. This case does not simply reflect a technological failure. It highlights a systemic vulnerability where human judgment is replaced by algorithmic authority, placing innocent lives at risk.
Angela Lipps’s Story
Angela Lipps was living a quiet life in north-central Tennessee, a mother of three and grandmother of five who had built her world around family and community. She had never been on an airplane, never ventured far from her Tennessee roots, and certainly never imagined that her face would become the subject of a criminal investigation more than 1,200 miles away in North Dakota. Yet in the summer of 2025, that is exactly what happened, setting in motion a nightmare that would cost her nearly six months of freedom, her home, her car, her beloved dog, and her sense of security in a world increasingly governed by artificial intelligence.
The trouble began in April and May of 2025, when detectives with the Fargo Police Department in North Dakota were investigating a series of bank fraud incidents. Surveillance footage showed a woman using a fake US Army ID to withdraw tens of thousands of dollars from local banks. The investigators needed to identify this suspect, and they turned to a tool that has become increasingly common in law enforcement: facial recognition software. The algorithm analyzed the surveillance images and produced a match. According to court documents reviewed by local news outlets, a detective concluded that the suspect appeared to match Angela Lipps based on facial features, body type, and hairstyle.
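To make concrete what such a "match" is, and is not, consider the similarity scoring that typical facial recognition pipelines perform. The sketch below is a minimal illustration under stated assumptions: the embedding size, the simulated vectors, and the threshold are all hypothetical, and nothing here represents the specific software used in this case.

```python
# Minimal sketch of how a typical facial recognition match is scored.
# Illustration only; not any vendor's system or the one used in this case.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real pipeline, a neural network converts each face image into a
# fixed-length embedding vector. Here we simulate two such vectors:
rng = np.random.default_rng(0)
probe = rng.normal(size=512)                          # face from surveillance footage
candidate = probe + rng.normal(scale=0.6, size=512)   # a lookalike in the database

score = cosine_similarity(probe, candidate)
MATCH_THRESHOLD = 0.5  # hypothetical cutoff chosen by the operator

# The system reports only that a score cleared a threshold. It knows nothing
# about where the candidate lives or whether she could have been at the
# scene; that context can only come from human investigators.
print(f"similarity = {score:.2f} -> {'MATCH' if score >= MATCH_THRESHOLD else 'no match'}")
```

The key point of the sketch is that the output is a numeric similarity, not a finding of fact: a sufficiently similar stranger can clear the same threshold as the true suspect.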
What happened next reveals the dangerous power that AI identification can wield when human verification is absent. No one from the Fargo Police Department contacted Lipps to ask questions or verify her identity. There was no phone call, no email, no attempt to confirm whether this Tennessee grandmother had ever set foot in North Dakota. Instead, the department moved forward with an arrest warrant based almost entirely on the AI’s conclusion.
In July 2025, Lipps’s life was shattered in an instant. While she was babysitting four children at her Tennessee home, US marshals arrived at her door. They took her away at gunpoint, booking her into a county jail as a fugitive from justice in North Dakota. The shock and confusion must have been overwhelming. “I’ve never been to North Dakota, I don’t know anyone from North Dakota,” Lipps would later tell reporters, her words carrying the weight of disbelief and desperation.
The legal machinery that had been set in motion by the AI’s identification showed no regard for her protestations of innocence. Lipps was charged with four counts of unauthorized use of personal identifying information and four counts of theft. She remained in a Tennessee jail for nearly four months without bail while awaiting extradition to North Dakota. Each day that passed represented another layer of loss: time with her grandchildren, the ability to manage her affairs, and the simple freedom of walking out her front door.
The extradition process itself was agonizingly slow. Authorities in North Dakota did not transport Lipps from Tennessee until the end of October, 108 days after her initial arrest. She appeared in a North Dakota courtroom the next day, still trying to comprehend how her life had unraveled so completely. Her attorney, Jay Greenwood, would later point out the fundamental flaw in the case against her: “If the only thing you have is facial recognition, I might want to dig a little deeper.”
That deeper digging would ultimately prove Lipps’s innocence, but not before she had endured months of wrongful incarceration and its devastating consequences. Greenwood obtained her bank records and presented them to investigators, showing definitively that Lipps was more than 1,200 miles away in Tennessee at the time the fraud occurred in Fargo. The evidence was clear and irrefutable. On Christmas Eve 2025, Lipps was finally released from custody.
The timing of her release, while symbolic, offered little comfort. The Fargo Police Department did not pay for her trip home, leaving her stranded in North Dakota during the holiday season. Local defense attorneys helped cover the cost of a hotel room and food on Christmas Eve and Christmas Day. A non-profit organization called the F5 Project eventually stepped in to help her return to Tennessee. The irony was profound: the same system that had wrongly imprisoned her offered no assistance in helping her rebuild her life.
Back home in Tennessee, Lipps confronted the full extent of what had been lost. While she was jailed and unable to pay her bills, she lost her house, her car, and even her dog. The material losses were devastating, but the emotional toll was perhaps even greater. She told reporters that no one from the Fargo Police Department had apologized for the catastrophic error that had upended her life. The absence of an acknowledgment of wrongdoing or an expression of regret spoke volumes about the institutional indifference and injustice that can accompany technological mistakes.
Lipps’s case is not an isolated incident but part of a growing pattern of AI errors with serious consequences for ordinary people. The same technology that can enhance law enforcement capabilities can also produce life-altering mistakes when deployed without adequate safeguards. In October 2025, an AI system apparently mistook a Baltimore high school student’s bag of Doritos for a firearm, leading to a terrifying encounter with armed police. Earlier this year, police in the UK arrested a man for a burglary in a city he had never visited after facial recognition software confused him with another person of South Asian heritage.
These cases reveal the fundamental limitations of current AI systems, particularly facial recognition technology. The algorithms are trained on datasets that may not adequately represent the full diversity of human appearances, leading to higher error rates for certain demographic groups. The technology can struggle with variations in lighting, camera angles, and facial expressions. Perhaps most importantly, AI lacks the contextual understanding that humans use to evaluate information: factors like geographic impossibility, lack of motive, or absence of opportunity.
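One way to see why even an accurate-sounding system produces wrong matches is simple base-rate arithmetic. The figures below are assumptions chosen for illustration, not measured rates of any deployed system.

```python
# Back-of-the-envelope illustration of why "high accuracy" can still
# produce wrong matches when one probe image is searched against a
# large database. Both numbers are hypothetical.
database_size = 10_000_000   # faces compared against one surveillance image
false_match_rate = 0.0001    # assumed 0.01% false-positive rate per comparison

expected_false_matches = database_size * false_match_rate
print(f"Expected false matches per search: {expected_false_matches:.0f}")
# -> 1000. A system that errs only 0.01% of the time can still put a
# thousand innocent faces on a candidate list, which is why a match
# alone can never serve as proof of identity.
```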
The reliance on AI identification in law enforcement raises troubling questions about due process and the presumption of innocence. When an algorithm produces a match, it can create a powerful confirmation bias that leads investigators to overlook contradictory evidence. The technology’s aura of scientific objectivity may give undue weight to its conclusions, even when those conclusions are based on flawed or incomplete data. In Lipps’s case, the simple question of whether she had ever been to North Dakota should have prompted further investigation before an arrest was made.
The aftermath of Lipps’s ordeal highlights another critical issue: the lack of accountability and support for victims of technological errors. The financial and emotional damages she suffered were not compensated. The time she lost with her family cannot be reclaimed. The betrayal trauma of being arrested at gunpoint and incarcerated for months cannot be easily healed. Without institutional acknowledgment of responsibility or mechanisms for restitution, victims are left to piece together their lives on their own.
As society increasingly delegates decision-making to artificial intelligence systems, the case of Angela Lipps serves as a cautionary tale about the human cost of technological errors. The convenience and efficiency that AI promises must be balanced against the potential for catastrophic mistakes. Law enforcement agencies need clear protocols for verifying AI-generated identifications before taking action that can irreversibly damage lives. There must be transparency about how these systems work, their known limitations, and their error rates.
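What a "lead, not proof" protocol could look like, in skeleton form, is sketched below. The checks named are hypothetical examples of human verification steps, not any agency’s actual procedure.

```python
# Minimal sketch of the "lead, not proof" principle as a decision gate.
# The field names and checks are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Lead:
    ai_match_score: float
    subject_contacted: bool     # was the person asked to account for themselves?
    independent_evidence: bool  # records, witnesses, forensics beyond the match
    presence_plausible: bool    # could the person have been at the scene at all?

def warrant_justified(lead: Lead) -> bool:
    """An AI match alone never suffices; every human check must also pass."""
    return (lead.independent_evidence
            and lead.subject_contacted
            and lead.presence_plausible)

# The Lipps case, roughly: a confident match score with no verification.
lipps = Lead(ai_match_score=0.92, subject_contacted=False,
             independent_evidence=False, presence_plausible=False)
print(warrant_justified(lipps))  # False: the match remains a lead, not proof
```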
For Angela Lipps, the journey of rebuilding continues. The grandmother who once cared for children in her Tennessee home now faces the daunting task of starting over without the home, car, or companion dog that gave her comfort and stability. The trust she placed in the justice system has been broken, replaced by the knowledge that a computer algorithm can strip away everything she held dear without so much as an apology.
Her story stands as a reminder that behind every data point and algorithmic match lies a human life with relationships, responsibilities, and dreams. When we deploy artificial intelligence in contexts where mistakes can destroy lives, we must proceed with caution, humility, and a deep commitment to protecting the innocent from the unintended consequences of our technological ambitions. The price of getting it wrong, as Angela Lipps learned in the most painful way possible, is one that no algorithm can calculate and no innocent person should be forced to pay.
Conclusion
The experience of Angela Lipps demonstrates that artificial intelligence, when used without proper safeguards, can become a source of harm rather than a tool for justice. Her wrongful arrest and prolonged detention were not caused by technology alone, but by the absence of critical thinking, verification, and accountability in how that technology was applied. The failure to ask simple questions, such as whether she had ever been in the state where the crime occurred, reflects a breakdown in investigative responsibility.
This case underscores the urgent need for clear standards governing the use of AI in law enforcement. Algorithms must never replace human judgment, and any AI-generated identification should be treated as a lead, not as proof. Independent verification, transparency about error rates, and procedural safeguards are essential to prevent similar outcomes.
Equally important is the need for accountability when errors occur. Without acknowledgment, restitution, and institutional reform, individuals harmed by these failures are left to rebuild their lives alone. The damage extends far beyond financial loss, affecting trust, safety, and psychological well-being.
As artificial intelligence continues to expand into high-stakes domains, this case serves as a warning. Systems designed to enhance efficiency must not erode fundamental rights. Protecting innocent individuals requires a deliberate balance between innovation and responsibility, where human dignity remains the priority over technological convenience.
Author’s Note: What This Means for Scam Victims
For scam victims, this case highlights a risk that is often overlooked. When financial transactions, communication records, or identity elements become entangled in a scam, those same data points can later be analyzed by automated systems without context. If a victim does not report the crime, the available data may resemble participation rather than victimization. AI systems used by banks, law enforcement, or fraud detection platforms may flag patterns such as money transfers, account activity, or identity use and interpret them as intentional involvement. Without a documented record showing that the individual reported the scam and sought help, there may be nothing to distinguish a victim from a willing participant in the eyes of an algorithm or an investigator relying on it.
This is why timely reporting is not just about seeking justice. It is also a form of protection. Filing reports with law enforcement, financial institutions, and appropriate agencies creates an official record that establishes victim status early. That documentation can provide critical context if questions arise later, especially in systems that rely on pattern recognition rather than lived reality. Reporting helps anchor the truth in verifiable records, making it far less likely that a victim will be misinterpreted, flagged, or even accused based on incomplete or misleading data. In a world where AI increasingly evaluates behavior without understanding intent, clear documentation becomes one of the most effective safeguards a victim can have.
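A toy scoring function can make this concrete. The fields and weights below are hypothetical inventions for illustration; real fraud-detection models are far more complex, but the role of a documented report as context is the same.

```python
# Simplified sketch of how a pattern-based fraud filter might score the
# same account activity with and without a scam report on file.
# Field names and weights are hypothetical, for illustration only.
def involvement_risk(wire_transfers: int, new_payees: int,
                     scam_report_on_file: bool) -> float:
    """Higher scores mean the activity looks more like willing participation."""
    score = 0.2 * wire_transfers + 0.3 * new_payees
    if scam_report_on_file:
        # A filed report supplies the context the raw pattern lacks:
        # the same transfers now read as victimization, not complicity.
        score *= 0.1
    return score

# Identical activity, different documentation:
print(involvement_risk(5, 3, scam_report_on_file=False))  # 1.9  -> likely flagged
print(involvement_risk(5, 3, scam_report_on_file=True))   # 0.19 -> context preserved
```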

Glossary
- AI-generated identification — AI-generated identification refers to a match produced by software that compares images or data patterns to suggest a person’s identity. It can appear authoritative, yet it requires human verification because it may lack context and can produce incorrect conclusions that affect innocent individuals.
- Algorithmic authority — Algorithmic authority describes the tendency to treat outputs from artificial intelligence systems as inherently accurate or trustworthy. It can lead decision-makers to rely on machine conclusions without sufficient questioning, increasing the risk of serious errors when systems are flawed or incomplete.
- Algorithmic match — Algorithmic match refers to the result produced when a system identifies similarities between input data and stored datasets. It may suggest a connection between a person and a suspected activity, but it does not confirm identity or intent without further investigation.
- Artificial intelligence systems — Artificial intelligence systems are computational tools designed to analyze data, recognize patterns, and support decision-making processes. They can improve efficiency, yet they lack human judgment and may misinterpret situations when used without safeguards.
- Automated decision-making — Automated decision-making involves relying on computer systems to make or influence outcomes without full human review. It can streamline processes, but it may also introduce risk when complex human factors are reduced to data points.
- Bank fraud investigation — Bank fraud investigation refers to the process of identifying and examining suspicious financial activity involving theft or deception. It often uses surveillance and data analysis, which may include AI tools that require careful validation.
- Confirmation bias in AI use — Confirmation bias in AI use occurs when investigators accept AI-generated results that support initial assumptions while overlooking conflicting evidence. It can reinforce errors and prevent objective evaluation of all available facts.
- Contextual understanding limitation — Contextual understanding limitation refers to the inability of AI systems to interpret real-world circumstances such as location, intent, or personal history. This limitation can lead to conclusions that ignore obvious contradictions in a case.
- Data pattern interpretation — Data pattern interpretation involves analyzing trends or similarities within datasets to draw conclusions. It can be useful for identifying risks, but it may misclassify individuals when patterns are incomplete or misleading.
- Dataset representation gaps — Dataset representation gaps occur when training data does not adequately reflect the diversity of real populations. These gaps can increase error rates and lead to biased or inaccurate identification outcomes.
- Due process concerns — Due process concerns arise when individuals are subjected to legal action without proper verification, fairness, or opportunity to challenge evidence. Reliance on AI without checks can undermine these fundamental protections.
- Error rate variability — Error rate variability refers to differences in how often AI systems produce incorrect results across conditions or populations. These variations can make outcomes unreliable, especially when systems are used without transparency about limitations.
- Evidence versus investigative lead — Evidence versus investigative lead distinguishes between verified proof and preliminary information that requires further validation. AI outputs should be treated as leads rather than conclusive evidence to prevent wrongful actions.
- Extradition delay — Extradition delay refers to the prolonged process of transferring a person between jurisdictions for legal proceedings. Such delays can extend wrongful detention and increase the emotional and financial impact on individuals.
- Facial recognition software — Facial recognition software is a type of AI that compares facial features from images to identify individuals. It can assist investigations, but it is prone to errors influenced by lighting, angles, and dataset limitations.
- False identification — False identification occurs when a system or investigator incorrectly identifies a person as involved in an activity or crime. It can lead to severe consequences, especially when decisions are made without verification.
- Financial loss during detention — Financial loss during detention refers to the economic harm experienced when an individual cannot manage obligations while incarcerated. It can include loss of housing, assets, and long-term financial stability.
- Geographic impossibility factor — Geographic impossibility factor refers to evidence showing that a person could not have been present at a location where a crime occurred. It is a critical contextual element that AI systems may fail to consider.
- Human verification requirement — Human verification requirement emphasizes the need for people to review and confirm AI-generated results. It serves as a safeguard against errors by incorporating judgment, reasoning, and contextual awareness.
- Institutional accountability gap — Institutional accountability gap describes the absence of responsibility or corrective action when systems cause harm. It can leave affected individuals without support, acknowledgment, or compensation.
- Investigative diligence breakdown — Investigative diligence breakdown occurs when basic steps such as verification, questioning, and evidence review are skipped. It can result from overreliance on technology or pressure to resolve cases quickly.
- Law enforcement AI reliance — Law enforcement AI reliance refers to the increasing use of artificial intelligence tools in policing and investigations. While useful, it can create risks when used without oversight or balanced judgment.
- Lack of contextual inquiry — Lack of contextual inquiry occurs when investigators fail to consider surrounding facts that could clarify or contradict evidence. It can allow incorrect conclusions to proceed unchecked.
- Machine-generated suspicion — Machine-generated suspicion refers to the identification of potential wrongdoing based on algorithmic analysis. It can initiate investigations but should not replace thorough human evaluation.
- Material loss consequences — Material loss consequences describe the tangible impacts of wrongful actions, including loss of property, income, and possessions. These losses can have long-term effects on stability and recovery.
- Non-compensated damages — Non-compensated damages refer to losses that are not reimbursed or addressed by responsible institutions. Individuals may be left to recover without financial or systemic support.
- Pattern recognition error — Pattern recognition error occurs when an AI system incorrectly interprets similarities in data as meaningful connections. It can lead to false conclusions about identity or behavior.
- Presumption of innocence erosion — Presumption of innocence erosion occurs when individuals are treated as guilty based on unverified or automated findings. It undermines fundamental legal protections and fairness.
- Procedural safeguard absence — Procedural safeguard absence refers to missing checks and balances designed to prevent errors in decision-making. Without these safeguards, incorrect actions may proceed without interruption.
- Reporting documentation importance — Reporting documentation importance highlights the value of creating official records when a person is involved in or affected by suspicious activity. It can help establish accurate context in future evaluations.
- Safeguard protocol necessity — Safeguard protocol necessity refers to the requirement for structured procedures that verify and validate AI outputs. These protocols reduce the risk of harm by ensuring multiple levels of review.
- Surveillance image analysis — Surveillance image analysis involves reviewing visual recordings to identify individuals or actions. When combined with AI, it can accelerate identification but requires careful human oversight.
- Systemic vulnerability — Systemic vulnerability describes weaknesses within systems that allow errors to occur and persist. In this context, it reflects reliance on technology without adequate human checks.
- Technological overreliance — Technological overreliance refers to excessive trust in automated systems to make decisions. It can reduce critical thinking and increase the likelihood of unchecked errors.
- Transparency requirement — Transparency requirement emphasizes the need for clear information about how AI systems function, including limitations and error rates. It supports informed decision-making and accountability.
- Unverified arrest warrant — Unverified arrest warrant refers to a legal order issued based on insufficiently confirmed information. It can result from reliance on incomplete or inaccurate data sources.
- Verification failure — Verification failure occurs when information is not adequately checked before action is taken. It can allow incorrect assumptions to drive serious decisions with lasting consequences.
- Victim misclassification risk — Victim misclassification risk refers to the possibility that individuals affected by crimes may be incorrectly identified as participants. This risk increases when data is analyzed without context or documentation.
- Wrongful incarceration impact — Wrongful incarceration impact describes the emotional, financial, and social harm caused by being detained without cause. It can affect long-term well-being and trust in institutions.
- AI-assisted policing limitations — AI-assisted policing limitations refer to the constraints and weaknesses of using artificial intelligence in law enforcement. These limitations highlight the need for human oversight and careful application.
- Identity verification lapse — An identity verification lapse occurs when authorities fail to confirm a person’s identity through multiple reliable methods. It can lead to incorrect accusations and legal consequences.
- Investigative shortcut reliance — Investigative shortcut reliance describes the use of faster, less thorough methods in place of comprehensive investigation. It can increase efficiency but raises the risk of serious errors.
- Data without context problem — Data without context problem refers to interpreting information without understanding surrounding circumstances. It can lead to conclusions that do not reflect reality or intent.
- Emotional harm from institutional error — Emotional harm from institutional error refers to psychological distress caused by wrongful actions taken by authorities or systems. It may include anxiety, loss of trust, and long-term trauma effects.
-/ 30 /-