Deepfake Fraud: What Your Board and Your C-Suite Should Know

Nicholas J. Price
Just when it seems that cybersecurity is catching up to the sophistication of cybercrime, businesses are having to contend with crimes of an artificial intelligence kind. Alongside the news that a top executive was scammed out of a large sum of money came reports of less sensational, but equally harmful, incidents.

These events demonstrate how easily apps designed for entertainment can be turned to fraudulent schemes, and just how dangerous they can be. These incidents, along with others, raise new questions for boards to ponder and to be concerned about:
  • What laws and regulations can be passed to reduce or eliminate deepfake fraud?
  • Can experts create technology to prevent deepfake fraud?
  • Can researchers create tools that distinguish deepfakes from real images?
  • What other crimes will criminals devise using artificial intelligence?
  • How can boards protect their own companies from similar schemes?

Top Energy Executive Scammed in a Deepfake Fraud Scheme

Recent news broke of an executive at an energy firm in the United Kingdom who was scammed out of €200,000 by cybercriminals who used artificial intelligence to imitate his superior's voice.

The incident occurred when the executive received a phone call that he believed came from his superior at the parent company in Germany. The caller instructed him to transfer €200,000 immediately to a Hungarian bank account. Unbeknownst to the executive, he was actually speaking with a software program that convincingly imitated his boss's voice, tonality, cadence and German accent. The executive was told that the money would be replaced the following day.

The following day, the funds hadn't been reimbursed, and the executive grew suspicious. The criminals behind the scam called again, this time from an Austrian number, asking for a second transfer. The executive was on his office phone verifying the earlier request with his real superior, Johannes, when another call came in on his mobile phone from the scammers, again posing as Johannes. As soon as the executive asked who was calling, the caller hung up.

The criminals used deepfake voice technology to execute the scam. Insurance firm Euler Hermes paid out the claim to the energy firm, and the criminals have yet to be identified.

Is Artificial Intelligence the New Cyber Weapon of Choice?

At first, the insurance firm believed this scam to be the first of its type, but Symantec says it is aware of at least three other cases of deepfake voice fraud in which companies were fooled into sending money to a criminal-owned account. Philipp Amann, head of strategy for the European Cybercrime Centre at Europol, the European Union's police agency, stated that it's not clear whether this was the first reported attack using artificial intelligence, or whether companies simply haven't reported such incidents or haven't been able to detect this particular type of technology in other crimes.

Symantec reported to the Washington Post that one company lost millions of dollars to such a scam. The case shines a spotlight on the danger posed by hackers who can use artificial intelligence to carry out scams that are both automated and effective.

The concern is real when you consider that anyone with a Chinese phone number, a photo of themselves and an app called Zao can now replace the face of someone in a video with their own, complete with realistic movement and sound. The transformation creates a genuine likeness, and a video can be produced within seconds. For video aficionados, it's the best thing since Photoshop.

Although the intent of the app may have been to entertain the public by letting users appear in clips of their favorite movies, the same technology could be used for deepfake fraud that harms the reputations of politicians, celebrities and others. In recent months, a doctored video of U.S. House Speaker Nancy Pelosi went viral on Facebook and other social media platforms. The video had been slowed down to make Pelosi appear drunk or sluggish and thus harm her character and reputation. The incident sparked outrage and spurred calls for tech firms to crack down on technology that creates fake or duplicate personas. After the Pelosi video spread on Facebook, the social media giant invested $10 million in a project to help it detect fraudulent videos.

Crackdowns on Deepfake Technology Could Be Swift

The Wall Street Journal reported that serious legal or financial ramifications could arrive soon, possibly before any further incidents of deepfake fraud occur.

Tech companies have been researching the capabilities of artificial intelligence for legitimate purposes, and new regulations could hamper that work. Some companies in Silicon Valley have been working diligently on artificial intelligence and audio technology, and competition among them is strong, so new laws or stricter regulations could easily compromise new developments.

In 2018, Google launched its Duplex service, which mimics the voice of a real person to make calls on a user's behalf, such as reserving a table at a restaurant. For various reasons, the service hasn't taken off as expected. Several small startups in China have developed similar services for smartphones at no cost, although their privacy and data collection terms are sometimes questionable.

Getting Ahead of Deepfake Technology

To combat deepfake technology, companies first need to be able to recognize when they're dealing with it. Researchers in industry and academia are working diligently to develop software that can detect deepfake content; the initial goal is detection with 90% accuracy. Other researchers are investigating how deepfakes can be produced from even smaller amounts of data.
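To make the detection idea concrete, here is a minimal sketch of how a frame-level video screening tool might be wired together. It is illustrative only: no real detection model is shown (the per-frame classifier is passed in as a placeholder), and the function names, sampling rate and 0.9 threshold are hypothetical, with the threshold simply echoing the 90% figure cited above.

```python
# Minimal sketch of frame-level deepfake screening. Illustrative only:
# the per-frame classifier is supplied by the caller (a real detector
# model would be plugged in here), and all names and thresholds are
# hypothetical placeholders.
import cv2          # pip install opencv-python
import numpy as np  # pip install numpy

def frame_scores(video_path, score_frame, sample_every=30):
    """Sample every Nth frame of a video and score each one.

    score_frame: callable taking an RGB frame (H x W x 3, uint8) and
    returning a probability in [0, 1] that the frame is synthetic.
    """
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            # OpenCV decodes frames as BGR; convert before scoring.
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            scores.append(score_frame(rgb))
        idx += 1
    cap.release()
    return np.array(scores)

def flag_if_likely_fake(video_path, score_frame, threshold=0.9):
    # Flag the clip when the mean per-frame "synthetic" probability
    # crosses the threshold; 0.9 mirrors the 90% goal purely as an
    # illustrative cut-off.
    scores = frame_scores(video_path, score_frame)
    return bool(scores.size) and float(scores.mean()) >= threshold
```

The point for boards is less the code than the pipeline it outlines: any such tool samples evidence (video frames, audio windows), scores it against a trained model, and applies a policy threshold that the organization itself must choose and tune.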

Today's board directors must add deepfake fraud to their risk management plans, given the seriousness of the risks it poses. The recent incident at the UK energy firm should motivate companies to develop new policies to detect and prevent deepfake fraud. It's also wise for boards to stay current on technologies that could help them detect and prevent similar incidents in their own businesses. Finally, boards are their own best advocates when it comes to working with lawmakers and regulators to establish laws that are fair and effective for everyone.
Nicholas J. Price
Nicholas J. Price is a former Manager at Diligent. He has worked extensively in the governance space, particularly on the key governance technologies that can support leadership with the visibility, data and operating capabilities for more effective decision-making.