Deepfakes emerge as real cybersecurity threat


Criminals are using doctored video and audio to fool organizations and steal data, money or both. Here’s what firms can do to protect themselves.

Criminals are increasingly deploying deepfakes as a tool in cyberattacks.

Survey results released in August 2022 found that 66% of the cybersecurity professionals polled had seen deepfakes used as part of a cyberattack, 13 percentage points higher than the year before, VMware said in its eighth annual Global Incident Response Threat Report. Email was the delivery method in 78% of those attacks.

The VMware report echoed warnings issued over the past couple of years by the FBI and its Internet Crime Complaint Center (IC3), which have urged organizations to learn about deepfake attack methods. As more companies use virtual solutions to embrace remote work, all of us become more susceptible to these types of crimes.

Deepfake refers to the use of advanced artificial intelligence and machine learning technologies to create video, audio, images, or textual data (SMS or written content). The technology can produce media that emulate someone’s appearance and voice. Deepfakes can look and sound like real people, which makes them effective in “social engineering” operations designed to convince individuals to reveal protected information, unknowingly participate in financial theft, or grant criminals access to firm networks.

A natural extension of spear-phishing and business email compromise scams, deepfake attacks are part of a new category of crime that the FBI calls “business identity compromise.”

Recent stories of criminals using deepfakes include:

  • Fraudsters created a deepfake hologram of a cryptocurrency company's chief communications officer and used it on Zoom calls to trick cryptocurrency executives into disclosing confidential information.

  • Real-time voice cloning allowed criminals to emulate the voice of a Dubai bank director and fool a Hong Kong bank manager into transferring $35 million to the criminals’ organization. The fake voice sounded so real that the manager “recognized it” on the phone.

  • An organization thought it had hired a remote employee to provide technical support. Instead, it hired a criminal who had created a false persona using deepfake technology and stolen personally identifiable information with the intent of gaining access to the company's network and data.

  • Hackers sent fake voicemails emulating the voice of a CEO, requesting that company employees and external suppliers contribute to charitable/disaster relief causes or make investments via fake websites that instead funneled funds to offshore accounts.

Other avenues of attack have included third-party phishing (hacking into or setting up fake marketing email services or social media profiles to send phishing emails) and search engine optimization (SEO) phishing (creating fake websites and then manipulating key search terms to rank the fake site above the authentic site on search results).

Whatever its form, a deepfake attack on your firm could have significant financial or operational consequences. So what can you do to address these new risks? The FBI outlines several steps, beginning with proactively educating firm personnel and clients about these new threats. For example, targeted spear-phishing attacks may come not only via emails with fake video or audio attachments (vishing) but could include live cloned voice or video conversations with employees.
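Training can be reinforced with basic technical controls on the email channel, since email is the most common delivery method for these attacks. Below is a minimal, hedged sketch in Python that flags incoming messages whose Authentication-Results header (RFC 8601) shows SPF, DKIM, or DMARC results other than "pass". It assumes your inbound mail gateway stamps that header; the function name `auth_failures` and the sample addresses are illustrative, not from any particular product.

```python
import email
from email import policy


def auth_failures(raw_message: bytes) -> list:
    """Return SPF/DKIM/DMARC results other than 'pass' found in the
    Authentication-Results headers of a raw RFC 5322 message.

    Assumes the receiving mail gateway adds an Authentication-Results
    header (RFC 8601); messages without one return an empty list.
    """
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    failures = []
    for header in msg.get_all("Authentication-Results", []):
        # The first ';'-separated field is the authserv-id; skip it.
        for clause in header.split(";")[1:]:
            clause = clause.strip()
            for method in ("spf", "dkim", "dmarc"):
                if clause.startswith(method + "="):
                    # e.g. "spf=fail smtp.mailfrom=evil.test" -> "fail"
                    result = clause.split("=", 1)[1].split()[0]
                    if result.lower() != "pass":
                        failures.append(f"{method}={result}")
    return failures
```

A result such as `["spf=fail", "dkim=none"]` would not prove the message is a deepfake lure, but it is exactly the kind of red flag that should trigger the secondary verification steps described below.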

All personnel should be trained to recognize suspicious or uncharacteristic mannerisms that would prompt them to immediately stop providing information in such situations. A recent report from the U.S. Department of Homeland Security suggests looking or listening for the following when trying to determine whether an image, video, or audio recording is fake.

In video and images:

  • Blurring evident in the face but not elsewhere in the image or video (or vice versa)

  • A change of skin tone near the edge of the face

  • Double chins, double eyebrows, or double edges to the face

  • A face that gets blurry when it is partially obscured by a hand or another object

  • Lower-quality sections within the same video

  • Box-like shapes and cropped effects around the mouth, eyes, and neck

  • Unnatural blinking (or a lack of blinking) and unnatural movements

  • Changes in the background and/or lighting

  • Contextual clues – Is the background scene consistent with the foreground and subject?

In audio:

  • Choppy sentences

  • Varying tone and inflection in speech

  • Phrasing – Would the speaker say it that way?

  • Context of the message – Is it relevant to a recent discussion, and can the speaker answer related questions?

  • Contextual clues – Are background sounds consistent with the speaker’s presumed location?

Source: “The Increasing Threat of DeepFake Identities” from the U.S. Department of Homeland Security

If you spot any of these red flags, or simply suspect something is off, you should immediately authenticate the person via a secondary channel. For example, if you are on the phone, send an email or Teams chat message, or require confirmation via multi-factor authentication.

To deal with suspected fake incursions or content/media, the FBI suggests a SIFT response: Stop, Investigate the source, Find trusted coverage through multiple sources, and Trace the original content when consuming information online.

The FBI also noted the importance of keeping technology up to date to thwart deepfake and other cyberattacks. Organizations should install software updates and patches to their network appliances and applications immediately upon release, and use multi-factor authentication to access websites and social media accounts to minimize the risk of these resources being used to compromise employees or clients.

It is also recommended that companies adopt a "zero trust" security approach, which permits individuals to access only the networking resources essential to their work and requires that all internet-connected devices be authenticated before being allowed access. For example, require your people to authenticate with multi-factor authentication every time they connect to your network and data. The FBI also encourages employers to develop a cyber-incident response plan and train employees on it.

Deepfakes will continue to complicate the rapidly changing cybersecurity landscape, but firms can mitigate their risk by keeping their technology up to date and educating their personnel on what deepfakes are and how to spot the various methods of attack.

Roman H. Kepczyk, CPA.CITP, PAFM

Roman H. Kepczyk, CPA.CITP, CGMA is Director of Firm Technology Strategy for Right Networks and partners exclusively with accounting firms on production automation, application optimization and practice transformation. He has been consistently listed as one of INSIDE Public Accounting’s Most Recommended Consultants, Accounting Today’s Top 100 Most Influential People, and CPA Practice Advisor’s Top Thought Leaders.
