AI Impersonates Grandson, 82-Year-Old Man Scammed Out of 326,000 Yuan

Recently, an 82-year-old man, Mr. Wang from Hantai County, Shandong, received a phone call from abroad and was scammed out of 326,000 yuan. The fraudster used AI technology to synthesize the voice of Mr. Wang's grandson and impersonated him to ask for tuition fees, ultimately leading Mr. Wang to transfer a large sum of money to the fraudster’s account without verifying the request.

Using AI Synthesis to Deceive the Victim

The incident occurred a few days ago when Mr. Wang received a phone call from abroad. The voice on the other end sounded just like his grandson, and Mr. Wang immediately recognized the familiar tone. However, the “grandson” on the phone seemed to urgently need funds, claiming that due to tuition issues, the school had threatened to expel him, and if the money was not sent quickly, he would not be able to continue his studies.


To make Mr. Wang believe the situation, the fraudster not only mimicked the grandson's voice but also cleverly incorporated the name of the school his grandson attended, the tuition fee amount, and some family details. These pieces of information convinced Mr. Wang that everything was real, and he did not suspect anything unusual about the caller’s identity.

The fraudster deliberately created a sense of urgency during the conversation, repeatedly stressing that time was running out, and if the money was not sent immediately, the grandson would face serious consequences. Mr. Wang's emotions were stirred, and eager to help his grandson, he transferred 326,000 yuan to the specified account under the fraudster’s instructions.

Why Did Mr. Wang Fall for the Scam So Easily?

According to the details disclosed in the case, the reasons why Mr. Wang was successfully deceived can be summarized as follows:

  1. Emotional Dependence and Trust: Mr. Wang had a very close relationship with his grandson, who held an important place in his heart. The fraudster exploited this emotional bond by synthesizing the grandson's voice, which made it difficult for Mr. Wang to detect anything amiss. His emotional reliance on his grandson led him to quickly respond with a desire to help, without questioning the content of the phone call.

  2. Advanced AI Forgery: The fraudster employed sophisticated AI voice-cloning to produce a "deepfake" of the grandson's voice that was nearly indistinguishable from the real thing, greatly enhancing the credibility of the call. Hearing his grandson's voice, Mr. Wang suspected nothing and took the call as a genuine request for help.


  3. Precise Social Engineering Tactics: In addition to mimicking the voice, the fraudster used social engineering techniques to gather detailed information about the family, such as the name of the school the grandson attended abroad, the amount of tuition fees, and even some private family conversations. These specific and accurate details made Mr. Wang trust the caller even more, believing the request was genuine.

  4. Sense of Urgency and Emotional Manipulation: The fraudster created a huge sense of urgency over the phone, repeatedly stressing that if the money was not sent quickly, the grandson would face expulsion. In the face of the "urgent need" to help his grandson, Mr. Wang's emotions were strongly manipulated. His frantic thinking in the moment clouded his judgment, leading him to hastily decide to transfer the money.

  5. Lack of Awareness of New Fraud Techniques: Mr. Wang did not realize how sophisticated this new type of phone scam had become, especially the use of AI technology in fraud. Having never encountered similar scams and lacking relevant preventive knowledge and education, he failed to recognize that he was facing a high-tech scam.

How to Avoid Similar Scams?

This case highlights the high-tech and covert nature of modern scams, especially the use of AI-generated deepfake technology, which makes it difficult for many people, particularly the elderly, to discern the truth. To help elderly individuals effectively prevent such new types of scams, the Dingxiang Defense Cloud Business Security Intelligence Center provides the following practical preventive measures, reminding everyone to stay alert when facing similar urgent requests for money and avoid being manipulated by fraudsters.

  1. Call Back to Confirm the Caller's Identity
    When receiving a suspicious or unfamiliar phone call, the first step is to stay calm and avoid hastily responding to any requests. Especially when faced with urgent requests for money, use the "call back" strategy: hang up (citing a poor signal if necessary) rather than making an impulsive decision in a state of panic, then call back on a known number for the family member or friend (mobile phone, home phone, or another private contact channel) to verify whether they really asked for help.

  2. Set Up a Family Safe Word to Verify Identity
    To further ensure identity security, it is recommended that elderly individuals and their family members agree on a "safe word" or "challenge question" in advance as a unique way of confirming identity. When a suspicious call comes in, they can ask for the safe word or pose the question to determine whether the caller really is a relative or friend. If the caller cannot answer correctly or tries to dodge the question, the call can be treated as a scam and hung up immediately. This simple, effective verification step greatly increases the chances of stopping fraud.

  3. Consult Family and Friends, Don't Decide Alone
    Fraudsters often use urgent, threatening language to create panic and push the victim into deciding quickly. In such cases, elderly individuals should consult other family members or friends before making any transfer, so someone can help analyze the situation calmly. This reduces the pressure and allows for a more rational decision.

  4. Report to the Authorities Promptly
    If identity verification fails or a scam is confirmed, the first step should be to dial the local emergency number (110) and report the situation to the police, so they can begin tracking the fraudster; the police have technical means of investigation that individuals do not. Prompt reporting can reduce losses and improve the chances of recovering the stolen funds. When reporting, provide as much detail as possible, including phone numbers, the payment information given by the fraudster, and bank account details, all of which help the police investigate.

With the development of AI technology, scammers' tactics have become increasingly sophisticated, especially in areas such as voice synthesis and video forgery, significantly increasing the success rate of scams. The elderly are particularly vulnerable to these new types of online fraud. Therefore, family members and society as a whole should strengthen anti-fraud education for elderly individuals, helping them raise awareness and avoid making impulsive decisions in sudden emergencies, which could lead to irreparable losses.

How Can Platforms Strengthen Prevention?

Fraudsters are using a combination of technology and psychology to precisely manipulate their victims. In the face of AI-driven scams, society must work together through technological means, education, and emotional support to build a protective wall and prevent more people from falling victim. Short video platforms, in particular, need to adopt multiple technical measures to identify fraudulent accounts at the source.

  1. Identify Abnormal Devices
    Dingxiang Device Fingerprinting records and compares device characteristics to distinguish legitimate users from potentially fraudulent ones. It assigns each device a unique identifier, flags maliciously controlled devices such as virtual machines, proxy servers, and emulators, and analyzes whether a device behaves abnormally or inconsistently, for example logging in with multiple accounts, frequently changing IP addresses, or altering device attributes, helping to track and identify fraudulent activity. A simplified sketch of this idea appears after this list.

  2. Identify Accounts with Abnormal Operations
    Continuous identity verification during a session is crucial to ensuring that the user's identity remains consistent throughout. Dingxiang atbCAPTCHA quickly and accurately distinguishes whether the operator is a human or a machine, precisely identifying fraudulent activity, monitoring in real time, and intercepting abnormal behavior (see the second sketch after this list).

  3. Prevent Face-Swapped Fake Videos
    Dingxiang's full-link, panoramic face security threat perception solution performs intelligent verification across multiple dimensions, including device environment, facial information, image authentication, user behavior, and interaction state. It quickly identifies more than 30 types of malicious attacks, such as injection attacks, liveness forgery, image forgery, camera hijacking, debugging risks, memory tampering, root/jailbreak, malicious ROMs, and emulators. Once fake videos, forged facial images, or abnormal interactions are detected, the system automatically blocks the operation. It also allows flexible configuration of verification strength versus user friendliness, enabling a dynamic mechanism that strengthens verification for abnormal users while keeping regular atbCAPTCHA for normal users (see the third sketch after this list).

  4. Identify Potential Fraud Threats
    Dingxiang Dinsight helps enterprises with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight's everyday risk-control strategies are processed in under 100 milliseconds on average, and the platform supports configurable multi-party data access and accumulation. Based on mature indicators, strategies, models, and deep learning, it enables self-monitoring and self-iteration of risk-control performance. Paired with the Xintell intelligent model platform, Dinsight optimizes security strategies for known risks, analyzes potential risks through data mining, and lets operators configure risk-control strategies for different scenarios with one click. Using association networks and deep learning, the platform standardizes complex data processing, mining, and machine learning workflows, offering a one-stop modeling service from data processing and feature derivation to model construction and deployment. The fourth sketch after this list illustrates the shape of such real-time rule evaluation.
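
To make the device-fingerprinting idea in item 1 above more concrete, here is a minimal sketch of device-level anomaly tracking. The `DeviceEvent` fields, class names, and thresholds are assumptions for illustration only, not Dingxiang's actual API or detection logic.

```python
# Illustrative sketch only (hypothetical fields and thresholds), in the spirit
# of item 1: track which accounts and IPs each fingerprinted device uses and
# flag emulators, proxies, and devices that behave inconsistently.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class DeviceEvent:
    device_id: str        # stable fingerprint derived from device attributes
    account_id: str
    ip_address: str
    is_emulator: bool = False
    is_proxy: bool = False

class DeviceRiskTracker:
    def __init__(self, max_accounts_per_device=3, max_ips_per_device=5):
        self.max_accounts = max_accounts_per_device
        self.max_ips = max_ips_per_device
        self.accounts = defaultdict(set)   # device_id -> accounts seen on it
        self.ips = defaultdict(set)        # device_id -> IP addresses seen

    def assess(self, ev: DeviceEvent) -> list:
        """Record the event and return the anomaly flags it triggers."""
        flags = []
        if ev.is_emulator:
            flags.append("emulator_or_vm")
        if ev.is_proxy:
            flags.append("proxy_or_vpn")
        self.accounts[ev.device_id].add(ev.account_id)
        self.ips[ev.device_id].add(ev.ip_address)
        if len(self.accounts[ev.device_id]) > self.max_accounts:
            flags.append("too_many_accounts_on_device")
        if len(self.ips[ev.device_id]) > self.max_ips:
            flags.append("frequent_ip_changes")
        return flags

# Usage: feed each login event to the tracker and act on the returned flags.
tracker = DeviceRiskTracker()
print(tracker.assess(DeviceEvent("fp-01", "user-A", "203.0.113.7", is_emulator=True)))
```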
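
For item 2, the sketch below illustrates just one toy signal that human-versus-machine checks can draw on: how regular the intervals between a session's actions are. Real systems such as atbCAPTCHA rely on far richer signals; the function name and jitter threshold here are assumptions.

```python
# Illustrative sketch only: scripts tend to act at suspiciously regular
# intervals, while humans are "noisy". The threshold below is arbitrary.
import statistics

def looks_automated(event_timestamps, min_events=5, min_jitter_seconds=0.05):
    """Flag a session whose actions arrive at near-constant intervals."""
    if len(event_timestamps) < min_events:
        return False                                  # not enough evidence
    gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
    return statistics.pstdev(gaps) < min_jitter_seconds

# Usage: timestamps (in seconds) of clicks or requests within one session.
print(looks_automated([0.0, 1.3, 2.9, 3.4, 5.8, 6.1]))   # False: human-like jitter
print(looks_automated([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))   # True: metronomic script
```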
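
For item 3, this sketch shows how several face-verification signals might be combined into a pass / step-up / block decision. The signal names and the liveness threshold are illustrative assumptions rather than the product's actual rules.

```python
# Illustrative sketch only: hard-block on clear attack evidence, step up
# verification on a risky environment or weak liveness, otherwise pass.
from dataclasses import dataclass

@dataclass
class FaceCheckSignals:
    liveness_score: float        # 0..1, higher = more likely a live person
    injection_detected: bool     # virtual camera / video feed injection
    device_compromised: bool     # root/jailbreak, malicious ROM, emulator
    image_tampered: bool         # forged or replayed facial image

def decide(s: FaceCheckSignals) -> str:
    if s.injection_detected or s.image_tampered:
        return "block"           # clear evidence of attack tooling
    if s.device_compromised or s.liveness_score < 0.8:
        return "step_up"         # e.g. extra liveness actions or manual review
    return "pass"

print(decide(FaceCheckSignals(0.95, False, False, False)))  # pass
print(decide(FaceCheckSignals(0.95, True,  False, False)))  # block
print(decide(FaceCheckSignals(0.60, False, False, False)))  # step_up
```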
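
For item 4, the sketch below shows the general shape of a real-time rule engine that scores a transfer event against configurable rules. The rule names, event fields, and thresholds are hypothetical; a platform like Dinsight combines such rules with models and far richer data.

```python
# Illustrative sketch only: each rule is (name, predicate, score); an event
# whose total score crosses the threshold is held for review instead of paid.
import time

RULES = [
    ("large_transfer",     lambda e: e["amount_cny"] >= 100_000,      40),
    ("new_payee",          lambda e: e["payee_age_days"] < 1,         30),
    ("elderly_account",    lambda e: e["account_holder_age"] >= 70,   15),
    ("overseas_initiated", lambda e: e["initiated_from_overseas_ip"], 25),
]

def score_event(event, review_threshold=60):
    """Evaluate every rule and return the score, fired rules, and decision."""
    start = time.perf_counter()
    fired = [name for name, pred, _ in RULES if pred(event)]
    score = sum(pts for name, _, pts in RULES if name in fired)
    decision = "hold_for_review" if score >= review_threshold else "allow"
    return {
        "score": score,
        "fired": fired,
        "decision": decision,
        "latency_ms": (time.perf_counter() - start) * 1000,
    }

# Usage: an event resembling the transfer described in this case.
print(score_event({
    "amount_cny": 326_000,
    "payee_age_days": 0,
    "account_holder_age": 82,
    "initiated_from_overseas_ip": True,
}))
```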

2025-01-20