Deepfake technology is an escalating cyber security threat to organisations. Cyber criminals are investing in artificial intelligence (AI) and machine learning to create synthetic or manipulated digital content (including images, video, audio and text) for use in cyber attacks and fraud.
This content can realistically replicate or alter a person's appearance, voice, mannerisms or vocabulary, with the aim of tricking targets, both human and machine, into believing that what they see, hear or read is authentic and trustworthy.
In March 2021, the FBI warned of a growing trend of malicious actors leveraging synthetic or manipulated digital content for cyber and foreign influence operations, as extensions of existing spear phishing and social engineering campaigns.
These attacks can have a more severe and widespread impact due to the sophistication of the synthetic media used, it added. Organisations must therefore be aware of the growing deepfake cyber threat and take steps to defend against deepfake-enhanced cyber attacks and scams.
Cyber criminals adopting deepfake technology
“It’s often been said that pornography drives technology adoption, and that was certainly true of deepfakes when they first appeared. Now the technology is catching on in other, less salacious circles, notably with organised cyber crime groups,” Mark Ward, senior research analyst at the Information Security Forum, tells CSO.
Deepfake-derived attacks are currently few and far between, with only a handful of documented successful uses, executed by specialist gangs or groups with the weight of a state behind them, Ward says. “However, it will spread as all such technologies do when the tools, techniques and potential rewards become well known.”
This is already proving to be the case on underground and dark web forums, where criminals are sharing deepfake know-how and expertise.
VMware researchers discovered dark web tutorials illustrating deepfake tools and techniques, something that Rick McElroy, principal cyber security strategist, describes as one of the latest examples of threat actors collaborating for the purpose of compromising organisations.
“Threat actors have turned to the dark web to offer customised services and tutorials that incorporate visual and audio deepfake technologies designed to bypass and defeat security measures.”
Deepfake-enhanced social engineering
Ward cites evidence, including dark web chatter, that deepfake technology is of growing interest to crime groups that specialise in sophisticated social engineering.
“These groups run the BEC [business email compromise] campaigns that trick finance and accounting staff in large organisations into sending cash to accounts controlled by scammers.” The tools currently being discussed in criminal chat rooms exploit the public profiles of senior executives, scraping video, audio and blog posts to create convincing simulacra, he says.
“These will then be used to lend weight to demands to move cash or make payments quickly, making this scam easier to perpetrate and harder to defend against.” Harman Singh, managing consultant at Cyphere, agrees, adding that deepfake audio impersonations can be particularly effective in social engineering attacks that go after corporate data and system access.
“Impersonating an executive who is travelling or away from the office to reset a password or perform an action that allows them access into a company’s assets is one trick,” Singh says. Such content adds an extra layer of believability because it can recreate recognisable features such as a person’s accent and speech patterns.
There has been a particular rise in this kind of attack as cyber criminals take advantage of the move to a distributed workforce, says McElroy.
“CISOs and their security teams are now witnessing deepfakes being used in phishing attacks or to compromise business email and platforms such as Slack and Microsoft Teams. Phishing campaigns via business communication platforms provide an ideal delivery mechanism for deepfakes as organisations and their users trust them implicitly.”
Deepfakes designed to bypass biometric authentication
Another risky deepfake trend is content created and used to bypass biometric verification. Biometrics such as face and voice recognition provide additional layers of security and can be used to automatically authenticate someone’s identity based on their unique features.
However, technology that can accurately recreate a person’s appearance or voice to circumvent such authentication poses a significant risk to businesses that rely on it as part of their identity and access management strategy, and it’s something that criminals are investing in amid widespread remote working.
“The onset of the pandemic and mass shift to the era of the ‘anywhere workforce’ has resulted in the creation of a wealth of audio and video data that can be fed into a machine learning system to create a compelling duplicate,” says McElroy.
Albert Roux, vice president of fraud at AI-based identity technology provider Onfido, concedes that deepfakes pose a notable risk to biometric-based authentication.
“Any organisation that leverages identity verification to conduct their business and protect themselves from cyber criminals can be susceptible to deepfake attacks. Fraudsters have taken note of recent viral videos, such as the Tom Cruise deepfake, and popular YouTube creators like Corridor Digital, realising these deepfake tools and code libraries could be leveraged to bypass identity verification checks online.”
Several free open-source applications make it easier for fraudsters, even those with limited technical knowledge, to generate deepfake videos and photos, he adds.
Defending against deepfake cyber threats
Whether it’s through text, voice, or video manipulation, fraudsters invest in deepfake technology to distort digital reality for malicious gain, thriving on confusion and uncertainty. “We’re at a turning point of entering a new reality of distrust and distortion at the hands of attackers,” McElroy says.
While the threats posed by deepfake-assisted attacks may seem stark, there are several defences businesses can bring to bear against them. These range from training and education to advanced technology and threat intelligence, all designed to counter the heightened risks of malicious deepfake activity.
For Ward, teaching staff about deepfake social engineering attacks is an important element of mitigating the risk, and he advises focusing on the employees most likely to be targeted, particularly those in finance roles.
“Alerting them to the possibility and giving them permission to slow down the payment process if they get suspicious can help a lot. Overhauling the procedures for pushing through payments can thwart attackers if they are not up to speed with the latest changes an organisation has adopted.”
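Ward's advice, slowing down payments and overhauling approval procedures, can be sketched as a simple policy check. The threshold, channel names and class below are illustrative assumptions for the sketch, not any organisation's actual workflow:

```python
from dataclasses import dataclass

# Illustrative sketch: any high-value payment request, or one arriving over
# a channel that deepfakes can impersonate, is held until it is confirmed
# through a separate, pre-agreed channel.

@dataclass
class PaymentRequest:
    amount: float
    requester: str
    channel: str                       # e.g. "email", "voice", "video-call"
    confirmed_out_of_band: bool = False

REVIEW_THRESHOLD = 10_000              # illustrative value

def approve(request: PaymentRequest) -> str:
    # Large payments are paused unless confirmed out of band, giving staff
    # explicit "permission to slow down" when something feels off.
    if request.amount >= REVIEW_THRESHOLD and not request.confirmed_out_of_band:
        return "hold: confirm via known phone number or in person"
    # Voice and video requests are deepfake-prone, so they also need a
    # second channel regardless of amount.
    if request.channel in {"voice", "video-call"} and not request.confirmed_out_of_band:
        return "hold: deepfake-prone channel, verify out of band"
    return "approved"

print(approve(PaymentRequest(50_000, "cfo@example.com", "email")))
# → hold: confirm via known phone number or in person
```

The point of such a control is that a convincing voice or video alone can never complete a transfer; confirmation must arrive through a channel the attacker does not control.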
From a tech standpoint, Ward champions a growing number of analysis systems that can spot when content exhibits signs of manipulation that would otherwise be missed.
“Likewise, threat intelligence can also help as it can show if an organisation is being targeted, a sector is coming under scrutiny, or a particular group is getting active in this sphere. Deepfake scams take time to set up and execute. That can give potential victims ample time to spot the warning signs and act.”
Roux says effective defence can also be achieved by randomising the instructions users must follow to authenticate themselves when using verification technology.
“There are thousands of possible requests that deepfake creators simply can’t predict, such as looking in different directions or reading a phrase. Users who repeatedly respond incorrectly can be flagged for additional investigation, and while deepfakes can be manipulated in real time, the quality of the video deteriorates significantly, as the heavy processing power required doesn’t lend itself to quick reactions.”
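The randomised-challenge defence Roux describes can be sketched as follows. The challenge list, class names and failure threshold are illustrative assumptions, not any vendor's actual implementation:

```python
import random

# Hypothetical sketch of a randomised liveness check: the verifier issues
# challenges that pre-rendered deepfake footage cannot anticipate, and
# repeated wrong responses flag the user for further review.

CHALLENGES = [
    "turn your head to the left",
    "turn your head to the right",
    "look up",
    "blink twice",
    "read this phrase aloud: 'seven green apples'",
]

class LivenessSession:
    """Tracks one verification attempt; the threshold is illustrative."""

    def __init__(self, max_failures: int = 2):
        self.failures = 0
        self.max_failures = max_failures
        self.flagged = False

    def next_challenge(self) -> str:
        # Random selection over a session yields far too many combinations
        # for an attacker to pre-record responses.
        return random.choice(CHALLENGES)

    def record_response(self, passed: bool) -> None:
        if not passed:
            self.failures += 1
            if self.failures >= self.max_failures:
                # Route the user to additional (e.g. manual) investigation.
                self.flagged = True

session = LivenessSession()
challenge = session.next_challenge()
session.record_response(passed=False)
session.record_response(passed=False)
print(session.flagged)  # repeated wrong responses flag the session
```

Real-time deepfake tooling struggles here for the reason Roux gives: responding to an unpredictable prompt on the spot demands heavy processing, and the resulting video quality degrades visibly.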