Advisory on Detecting and Responding to Deepfake Scams

Published on 21 Mar 2024

  Artificial Intelligence (AI) is being used to produce increasingly convincing deepfakes that can be indistinguishable even to the trained eye. ‘Deepfake’ refers to multimedia (images, video, and audio) that has been synthetically created or manipulated. Deepfakes can be used for entertainment but have also been used by malicious parties to create highly realistic and convincing content that seeks to deceive users, such as in scams or phishing attacks.

2.  Deepfakes undermine our ability to differentiate between fact and fiction. This will have far-reaching implications for our economy and society.

3.  In December 2023, a deepfake video of Prime Minister Lee Hsien Loong promoting a cryptocurrency scam was published. In the same month, another deepfake video showed Deputy Prime Minister Lawrence Wong endorsing an investment scam. More recently, in February 2024, a finance worker at a multinational firm in the Asia Pacific region was tricked into paying US$25 million to scammers. The scammers had used deepfake technology to pose as the company’s chief financial officer in a video conference call, to convince the worker to make the fraudulent wire transfer.

How to Detect Deepfakes – the 3A Approach

4.  Deepfakes can seem complex to detect, especially as they get more realistic by the day. However, you can use a simple ‘3A’ approach (Figure 1) to discern if what you are looking at is a deepfake.


Figure 1. The 3A approach for detecting deepfakes

Assess the Message

5.  Assess the message by checking its source, context, and aim, especially if the message is unsolicited.

  1. Source: Is the source trustworthy and can you verify that it is truly who it claims to be? A trustworthy source can be an official organisation, institution, or individual that directly owns or knows the content.

  2. Context: Does the content read or behave as expected? For example, Singapore public officials are unlikely to ask the public to invest in third-party investment schemes.

  3. Aim of content: Does the message ask you to do something urgent, unsafe, or unusual? Exercise caution if the content asks you to download unknown third-party applications, click on suspicious links, provide personal information, or make purchases that sound too good to be true.
Do ask around and check with your friends and family to see if they find the message suspicious. If the message appears to be from someone you know personally, take additional steps to verify its authenticity. This can include reaching out to the person directly through another communication channel or asking them about something known only between the two of you.

Analyse Audio-Visual Elements  

6.  Look out for tell-tale signs that the audio or video has been manipulated. The table below describes elements that you can look out for to decide if it has been altered. 

Multimedia type: Images and videos

  Facial features
    • Blurring around the edges of the face, facial features, or the side profile
    • Uneven resolution and unnatural shadows around facial features
    • Unnatural edges around facial features

  Expression and eye movement
    • Unnatural blinking, or a lack of blinking
    • Inconsistent light reflection in the eyes
    • Unnatural facial expressions

  Skin texture and skin tone
    • Unnatural or inconsistent skin tone
    • Differences in resolution and skin texture

  Background consistency
    • Blurred, out-of-focus, or distorted areas in the background

Multimedia type: Audio and videos

  Audio-video consistency
    • Lips not synchronised with speech
    • Limited variance in tone and inflection
    • Incongruent background noise

Figure 2. Analysis of audio-visual elements
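To illustrate one sign from the table above in concrete terms, the sketch below flags blur inconsistency: heavy smoothing around a face while the background stays sharp (or vice versa). It uses Laplacian variance, a standard sharpness proxy, on two regions of the same greyscale image. The function names and the threshold are illustrative assumptions, not part of any official detection tool; real detectors tune such thresholds on labelled data.

```python
# Illustrative sketch only: flag when one image region is far sharper
# than another, a possible sign of localised smoothing around a face.

def laplacian_variance(gray: list[list[float]]) -> float:
    """Sharpness proxy: variance of the 4-neighbour Laplacian of a
    greyscale region given as a 2D list of pixel intensities."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] + gray[y][x - 1]
                   + gray[y][x + 1] - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def sharpness_mismatch(region_a, region_b, ratio: float = 4.0) -> bool:
    """Flag a large sharpness gap between two regions of one photo.
    The ratio threshold is a hypothetical value for illustration."""
    va, vb = laplacian_variance(region_a), laplacian_variance(region_b)
    hi, lo = max(va, vb), max(min(va, vb), 1e-9)
    return hi / lo > ratio
```

A mismatch is only one weak signal: genuine photos can have shallow depth of field, so such checks complement, rather than replace, the 3A approach.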

Authenticate Content Using Tools 

7.  Internationally, there are ongoing efforts to build detection tools that use sophisticated AI techniques to analyse content and assess whether it has been generated or edited by AI. For example, pixel analysis is a technique used to detect inconsistencies in facial expressions and lighting. However, deepfake detection tools for general consumer use are still nascent1. Nonetheless, techniques such as content provenance, which verify the origin and authenticity of data, are starting to mature, with companies such as Meta and OpenAI having announced their intention to include metadata tags or ‘watermarks’ indicating whether content is AI-generated. Users can look out for these tags and labels to better understand the nature of the content.
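As a rough illustration of how such provenance tags can be surfaced, the sketch below scans a media file's raw bytes for marker strings that commonly accompany content credentials. The specific markers are assumptions drawn from public conventions (the C2PA content-credential container and the IPTC ‘trainedAlgorithmicMedia’ digital source type), and this is in no way an official verification tool; dedicated verifiers parse and cryptographically validate the metadata rather than just searching for strings.

```python
# Illustrative sketch: look for common provenance/AI-label markers in a
# file's raw bytes. Marker strings are assumptions based on public
# C2PA and IPTC conventions, not an exhaustive or official list.
from pathlib import Path

MARKERS = [
    b"c2pa",                     # C2PA content-credential manifests
    b"jumb",                     # JUMBF box that carries C2PA data
    b"trainedAlgorithmicMedia",  # IPTC digital source type for AI media
]

def find_provenance_markers(path: str) -> list[str]:
    """Return provenance-related markers found in the file's bytes.

    A hit suggests the file may carry content credentials or an
    AI-generated label. Absence proves nothing, since metadata is
    easily stripped, so treat this as one signal among the 3A checks.
    """
    data = Path(path).read_bytes()
    return [m.decode() for m in MARKERS if m in data]
```

Because these tags survive only if intermediaries preserve them, a clean result never certifies authenticity; it simply means no label was found.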

Responding to Deepfake Scams

8.  The 3A approach can be used as part of the CHECK phase in the ADD, CHECK, TELL (ACT) framework for responding to scams. Should you encounter a possible scam, false news, or advertisements on social media, you can TELL by reporting it to the platform’s administrator. If you are unsure whether a situation you are facing is a scam, call the National Crime Prevention Council (NCPC) Anti-Scam Helpline for advice at 1800-722-6688, or visit the ScamAlert website (scamalert.sg).

9.  Combating deepfake scams is a community effort. If we observe family or friends sharing potential deepfake content, we should intervene and encourage them to first assess the message, analyse the audio-visual elements, and authenticate the content with official sources and available tools.

10.  Anyone can be vulnerable to deepfake scams. Deepfakes employ advanced machine learning algorithms that analyse and mimic the likeness of individuals from photos, audio, or videos of them. To safeguard your digital identity and prevent your images and videos from being abused in a deepfake scam, consider minimising the personal information and images you share online, and update your privacy settings on social media platforms.

Conclusion

11.  Deepfakes represent a sophisticated challenge in our digital age. They blur the lines between reality and fiction, so we should educate ourselves on how to detect deepfake content, especially as it is increasingly used in scams to convince victims to send money or sensitive information. The simple 3A approach of Assessing the message, Analysing audio-visual elements, and Authenticating content using tools will help you to better assess the likelihood of multimedia content being a deepfake.
 
12.  When in doubt, verify and cross-reference with trusted sources and refrain from sharing the dubious content with others. If we work together, we can build resilience and ACT against the threat of malicious use of deepfakes.


1As deepfake detection tools are still at the early stages, CSA will continue to monitor these developments to address deepfake scams and will provide more information on new resources when they become available.