AI Scams Targeting the Elderly (Part 2): Scammers Impersonating Celebrities and Law Enforcement

Scammers use AI face-swapping technology to replace their own faces with those of a victim’s relatives and even synthesize those relatives’ voices. By leveraging deepfake video and audio, they convincingly mimic the tone and expressions of loved ones, easily gaining the victim’s trust and committing fraud. Impersonating law enforcement officers is another common tactic: scammers use AI to fake identities, don police uniforms, and pose as officers conducting "case investigations," creating a false sense of urgency and coercing victims into compliance.

Even more alarmingly, scammers also impersonate celebrities, especially well-known entrepreneurs and public figures, using video calls to interact with victims, building false trust to carry out financial scams.

Using AI to Impersonate Celebrities

Case 1: AI Face-Swapping Scam in Fengtai District, Beijing
In November 2023, a community in Fangzhuang Street, Fengtai District, Beijing, successfully intercepted a scam involving AI face-swapping technology. Criminals cleverly impersonated Jack Ma, attempting to persuade an elderly person to purchase health products.


The scam began with a phone call to the elderly person, during which the scammers, posing as a "disciple of Jack Ma," promoted a poverty alleviation project and promised high investment returns. To enhance credibility, the scammers suggested a video call with "Jack Ma" and used AI face-swapping technology to disguise themselves as him. Due to the high level of realism, the elderly victim did not suspect anything and genuinely believed they were conversing with Jack Ma. The scammers then asked the victim to download QQ software to proceed with the investment.

Fortunately, community staff and police detected the scam in time. They arranged another video call with the elderly person to expose the scammer’s true identity; when the police appeared on screen, the scammer panicked and ended the call, and the elderly person suffered no financial loss.

Case 2: AI-Synthesized Celebrity Scam in Yichun, Jiangxi
The Public Security Bureau of Yichun City, Jiangxi Province, received a call for help from a bank reporting that an elderly person insisted on taking out a loan and refused to leave without doing so. Upon questioning, the police learned that the elderly person wanted to borrow 2 million yuan to support her so-called “boyfriend,” Jin Dong, in producing a film.
The elderly victim claimed she had met “Jin Dong” on a short video platform and developed a romantic relationship with him. The scammer even posted photos and videos of them together, claiming that the funds were necessary for filming.

After careful investigation, the police discovered that “Jin Dong” was entirely fabricated, with the photos and videos being AI-generated. Despite this, the elderly victim remained convinced she was in a relationship with the celebrity. Police patiently explained the principles of AI synthesis and demonstrated how the technology worked, ultimately making the elderly person realize she was being scammed and deciding to abandon the loan application.

These cases highlight the use of AI face-swapping and synthesis technology in scams, especially in emotionally manipulating the elderly. Scammers, disguised as celebrities, exploit the elderly's trust in family and idols to lure them into fraudulent activities.

Using AI to Impersonate Law Enforcement

Case 1: AI-Impersonated Police Scam in Wuxi
In May 2024, 75-year-old Ms. Hu received a call from someone claiming to be a police officer from the Wuxi Public Security Bureau in Jiangsu Province. The scammer alleged that her bank card was involved in a fraud case and requested a financial review.
Following the scammer’s instructions, Ms. Hu installed software and joined an online meeting. Unbeknownst to her, she had shared her screen with the scammer, who remotely controlled her phone. Throughout the process, Ms. Hu remained unaware until she checked her account and discovered her funds had disappeared. She then realized she had been scammed and reported the incident to the police.

By using remote control and screen-sharing instead of a traditional phone-call script, the scammers kept the victim entirely unaware of what was happening. Posing as law enforcement personnel, they leveraged authority and urgency to secure her compliance and steal her funds.

Case 2: AI Face-Swapping Scam at Hunan Provincial Museum
Recently, the “Lei Feng Volunteer Police” on patrol at the Hunan Provincial Museum noticed an elderly man looking distressed while standing outside a mobile service store, engaged in a video call. The volunteer immediately suspected it was a scam and alerted the police.

When the police arrived, the elderly man was in a video call with a “police officer.” The scammer, dressed in a police uniform with a stern expression, demanded the man cooperate in a financial review and take notes.

Upon speaking with the elderly man, the police learned the scammer had contacted him via FaceTime, claiming the elderly man was involved in criminal activities and needed to cooperate with an investigation. The scammer used AI face-swapping technology to replace their face with that of a uniformed “police officer,” increasing credibility. The scammer’s serious tone and fake uniform convinced the man to comply. The police immediately exposed the scam, prompting the scammer to end the call in a hurry.

These cases underscore how scammers use advanced AI face-swapping and video call technology to convincingly impersonate law enforcement officers. By creating highly realistic videos and disguising their identities, scammers exploit the elderly’s fear of legal authority, manipulating their actions and causing financial losses. Fortunately, timely community intervention and police actions prevented more significant damages.

Why Are the Elderly Particularly Prone to Falling Victim to Scams?

The elderly are especially vulnerable to scams due to a combination of factors, including a lack of understanding about AI technology, trust in authority, emotional dependence, and anxiety-driven decision-making. These factors are outlined below:

  1. Lack of Awareness About AI Technology and Its Risks
    Many elderly people are unaware of the existence of artificial intelligence technologies, let alone how they are misused in daily life. The advent of AI face-swapping, voice synthesis, and deepfake technology has significantly enhanced the believability of fake information, making it particularly difficult for older individuals to discern between real and fake. Due to their lack of technological literacy, many elderly individuals fail to recognize the risks and potential fraud associated with these technologies. They often assume that videos, voices, or images are genuine, leaving them highly susceptible to scammers' deception.

  2. Trust in Authority and Fear That Overrides Rational Judgment
    Elderly individuals typically have strong traditional values and a high level of trust in authority. Scammers often disguise themselves as public security officers, prosecutors, or court officials, creating a sense of fear and urgency. Upon hearing accusations of alleged legal violations, the elderly are often too afraid to question the scammer’s identity and are easily overwhelmed by the authoritative language used.
    During such moments, fear dominates their decision-making, leading them to comply with the scammer's instructions, which often results in financial losses. For those eager to resolve perceived issues or avoid trouble, scammers exploit their anxiety to create a false sense of urgency, forcing them to make poor decisions under pressure.

  3. Social Isolation and Emotional Dependence
    As individuals age, their social circles often shrink, leaving many elderly people with limited companionship. This social isolation makes them emotionally vulnerable and more likely to trust people who feign concern or affection. Scammers impersonate family members, particularly children or grandchildren, to exploit this emotional reliance.
    Additionally, by mimicking well-known public figures such as celebrities or business leaders, scammers enhance the credibility of their schemes. Since celebrities are often associated with trustworthiness and familiarity, elderly individuals may believe the impersonated identity without question, making them prime targets for fraudulent schemes.

  4. Emotional Decision-Making and Reduced Cognitive Agility
    With age, elderly individuals often experience declines in cognitive function and reaction speed, making them more prone to emotional decision-making during unexpected situations. When faced with unexpected calls or video messages, they may react hastily due to anxiety, failing to verify information thoroughly. In such situations, they may not have the time or capacity to recognize they are being manipulated by scammers.

How Can the Elderly Protect Themselves from AI Scams?

In these types of scams, which leverage highly realistic AI-generated media, victims often struggle to detect fraud visually or audibly. Therefore, even if video or phone calls are used to "verify" a person's identity, elderly individuals must remain vigilant when confronted with urgent financial requests. Verifying the identity of the person requesting funds through multiple channels is essential to avoid hasty decisions. The Dingxiang Business Security Intelligence Center recommends that individuals enhance their awareness and adopt multi-layered verification measures.

  1. Call Back to Verify Identity
    Hang up and reconnect using a known, trusted number (such as one saved in your phone for a family member or a number obtained from an official website). Avoid using any phone numbers provided by the caller, as scammers often use spoofed numbers.

  2. Consult Trusted Friends or Family
    If you receive a call or message requesting donations or transfers, consult with close family or friends to verify whether others have received similar requests or if such incidents are plausible. If unsure, ask for help in verifying the situation.

  3. Increase Awareness of AI Forgery Technologies
    Even if the voice on the other end sounds familiar, pay attention to inconsistencies in tone, language, or emotional expressions. AI-generated voices may lack subtle details, such as natural fluctuations in tone or authentic emotional expressions. Any irregularities should raise suspicion.

  4. Report to Authorities for Assistance
    Dial your local emergency number (e.g., 110 in China) to report any suspected scams to the police in detail.

These measures not only help individuals defend against AI-related scams but also serve as a reminder to stay calm and vigilant when dealing with sudden financial requests to avoid falling into fraudsters' traps.

Platforms Should Strengthen Security Alerts

Scammers are combining advanced technologies with psychological manipulation to target victims with precision. In response to AI-related scams, collective societal efforts are required to build a robust defense through technology, education, and emotional support. Short video platforms, in particular, need to adopt multi-pronged measures to identify and mitigate fraudulent activities at the source effectively.

  1. Identifying Suspicious Devices
    Using Dingxiang Device Fingerprinting, platforms can track and identify potential fraudulent devices. By assigning unique identifiers to each device, this technology can detect maliciously manipulated devices, such as virtual machines, proxy servers, and emulators. It also analyzes irregular behaviors, such as multiple account logins, frequent IP changes, or unusual device attributes, to trace and identify fraudulent activities.

  2. Detecting Suspicious Account Activity
    Monitor for unusual account behaviors, such as remote logins, device changes, phone number changes, or sudden activity from dormant accounts. Continuous identity verification during sessions is crucial to maintain user authentication. Dingxiang atbCAPTCHA can distinguish between human and machine operators with speed and accuracy, identifying and blocking fraudulent activity in real-time.
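
As a rough illustration of the two checks above (flagging emulator-like devices and spotting unusual account activity), the logic might look like the sketch below. All attribute names, emulator markers, and thresholds here are assumptions for illustration; they are not Dingxiang's actual product API.

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical sketch of device fingerprinting and login-anomaly checks.
# Names, markers, and thresholds are illustrative assumptions only.

EMULATOR_MARKERS = {"goldfish", "ranchu", "vbox86"}  # common emulator board names

def fingerprint(attrs: dict) -> str:
    """Derive a stable device identifier by hashing sorted attributes."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def device_risks(attrs: dict, recent_ips: list) -> list:
    """Flag emulator-like hardware and rapid IP churn."""
    risks = []
    if attrs.get("board", "").lower() in EMULATOR_MARKERS:
        risks.append("possible_emulator")
    if len(set(recent_ips)) > 5:  # many distinct IPs in a short window
        risks.append("frequent_ip_changes")
    return risks

def login_risks(profile: dict, event: dict) -> list:
    """Flag remote logins, device changes, and dormant-account activity."""
    risks = []
    if event["device_id"] != profile["usual_device"]:
        risks.append("new_device")
    if event["geo"] != profile["usual_geo"]:
        risks.append("remote_login")
    if event["time"] - profile["last_active"] > timedelta(days=180):
        risks.append("dormant_account_reactivated")
    return risks
```

In practice, a production system would feed signals like these into a scoring model rather than applying hard rules in isolation.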


  3. Detecting Deepfake Videos
    The Dingxiang Full-Chain Facial Security Threat Perception Solution leverages multi-dimensional data—device environment, facial information, image forgery detection, user behavior, and interaction states—to conduct intelligent verification. It can identify injection attacks, spoofing attempts, image manipulation, camera hijacking, debugging risks, memory tampering, rooting/jailbreaking, malicious ROMs, and more than 30 other malicious behaviors. By identifying fake videos, fabricated facial images, and abnormal interactions, the system can automatically block fraudulent activities. It also supports flexible video verification configurations, dynamically adjusting verification intensity based on user risk profiles.

  4. Identifying Potential Fraud Threats
    Dingxiang Dinsight helps businesses conduct risk assessments, anti-fraud analysis, and real-time monitoring to enhance the efficiency and accuracy of risk control. With an average processing speed of under 100 milliseconds for daily risk control strategies, Dinsight supports multi-party data integration and risk strategy refinement. Combined with the Xintell Smart Model Platform, it automatically optimizes risk control strategies based on logs and data mining, uncovering potential threats. Leveraging network association and deep learning technologies, the platform standardizes the complex processes of data processing, feature engineering, and machine learning, providing end-to-end modeling services from data preparation to deployment.
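
The idea of fusing multiple detection signals into a verification decision whose intensity adjusts to the user's risk profile, as described above, can be sketched roughly as follows. The signal names and weights are hypothetical assumptions, not the internals of any actual solution.

```python
# Hypothetical fusion of facial-security signals into a single decision.
# Signal names and weights are assumptions for illustration only.
WEIGHTS = {
    "injection_attack": 0.9,
    "camera_hijack": 0.9,
    "image_forgery": 0.8,
    "abnormal_interaction": 0.6,
    "rooted_device": 0.4,
}

def face_verification_decision(signals: dict) -> str:
    """Score the detected signals and choose: pass, step up, or block."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    if score >= 0.9:
        return "block"
    if score >= 0.4:
        return "step_up_verification"  # e.g., require a stronger video check
    return "pass"
```

The stepped outcome mirrors the "dynamically adjusting verification intensity" behavior described above: low-risk users pass silently, ambiguous cases get stronger checks, and clear attacks are blocked outright.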

2025-01-13
Copyright © 2024 AISECURIUS, Inc. All rights reserved