Deepfake imposters are increasingly targeting unsuspecting victims, raising alarm across the payment industry. How do you protect your organisation and clients from highly realistic AI-generated masks and voices?
Rapid technological development brings numerous benefits. Generative artificial intelligence (AI) alone has vast untapped potential to improve efficiency in financial services, real estate management, healthcare, retail, e-commerce, and other industries.
At the same time, increased automation, faster data processing, and advanced analytics come at a price. Along with undeniable advantages, generative AI creates risks to security and authenticity verification. One of the main problems for the financial industry today is deepfake imposter scams, which are driving a new wave of fraud powered by advanced video manipulation technology.
What Is a Deepfake?
Deepfake video technology uses AI tools to create synthetic portrayals of a real person, convincingly replicating their image, voice, and gestures. Such manipulations can be legitimately used to create entertaining media content. However, they are also increasingly used to spread misinformation and lure people into financial scams.
The examples are numerous. When the full-scale war in Ukraine started last year, Russian offenders created a range of deepfake videos of the Ukrainian President and top officials to stage their capitulation and spread panic. In recent months, social media has seen a flood of deepfake videos impersonating famous YouTuber MrBeast, BBC presenters Matthew Amroliwala and Sally Bundock, Elon Musk, top Bollywood actors and actresses, and other public figures. While some of these videos were made just for fun, other deepfake imposters promoted investment scams and fake giveaways.
Unlike pure computer-generated imagery (CGI), deepfakes rely on a suite of AI tools such as deep neural networks, encoder-decoder algorithms, and Generative Adversarial Networks (GANs). Together, these tools let scammers superimpose a target's likeness and voice onto a base video.
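The classic face-swap architecture behind many deepfakes pairs one shared encoder with a per-person decoder: the encoder compresses any face into a compact latent code, and decoding a source face with the target person's decoder produces the swap. The toy sketch below shows only that data flow, not real training; the "faces" are short number lists and every function name is illustrative, not part of any actual deepfake library.

```python
# Toy illustration of the shared-encoder / per-person-decoder face-swap idea.
# Real deepfake tools train deep neural networks on thousands of frames; here
# "faces" are 8-value lists and the encoder/decoders are simple linear maps.

def encode(face):
    """Shared encoder: compress an 8-value 'face' to a 4-value latent code."""
    return [(face[2 * i] + face[2 * i + 1]) / 2 for i in range(len(face) // 2)]

def make_decoder(identity_offsets):
    """Per-person decoder: reconstruct a face in that person's 'style'.
    The offsets stand in for learned identity features."""
    def decode(latent):
        out = []
        for value, offset in zip(latent, identity_offsets):
            out.extend([value + offset, value - offset])  # expand back to 8 values
        return out
    return decode

# Decoders "trained" on person A and person B respectively.
decode_a = make_decoder([1.0, 2.0, 1.5, 0.5])
decode_b = make_decoder([3.0, 0.5, 2.5, 1.0])

source_face_a = [10, 12, 8, 6, 5, 9, 7, 7]

# The swap: encode A's expression, decode it with B's identity.
latent = encode(source_face_a)
swapped = decode_b(latent)
print(latent)   # [11.0, 7.0, 7.0, 7.0]
print(swapped)  # [14.0, 8.0, 7.5, 6.5, 9.5, 4.5, 8.0, 6.0]
```

The key property criminals exploit is exactly this separation: one encoder captures the expression and pose, while the target-specific decoder supplies the identity.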
Types of Deepfake Scams
Deepfakes have brought additional sophistication to cybercrime schemes. Here are some common ways criminals use deepfake technology to deceive victims and trick them out of their hard-earned money.
Social Engineering Scams
Scams of this type typically involve a call or message from a criminal pretending to be someone the victim knows, claiming they need money urgently. Often, criminals say that a loved one is in trouble and urge the victim to send funds to help them out.
With deepfake technology, criminals can now actually clone a person's voice or entire likeness. They may even initiate a video call to make the deception more believable. This is easier when the person in question is a social media creator or influencer: a scammer can download a short voice sample from their content and use AI voice-synthesising tools to manipulate the individual's friends, family, or subscribers.
Bank Impersonation Fraud
Bank impersonation fraud occurs when a criminal contacts a bank client claiming to be from the customer support or security department and tricks the victim into transferring money to a fraudulent account under false pretences. The fraudsters may also ask for sensitive banking details such as the CVV/CVC code, passwords, or account numbers. With deepfakes, a criminal might even impersonate an employee from your local branch whom you know in person, to gain more trust.
At the same time, recent reports show that criminals also use deepfakes to impersonate bank customers and ask bank employees to move funds into fraudulent accounts. Many banks now use voice biometrics to authenticate their clients, and deepfakes pose a serious challenge to this stage of the verification process.
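One common way to harden voice authentication against cloned voices is a challenge-response liveness check: the caller must repeat a freshly generated random phrase within a short time window, so a pre-recorded deepfake clip cannot answer correctly and real-time synthesis is put under time pressure. The sketch below is a generic illustration under those assumptions, not any bank's actual system; a real deployment would also run speaker verification on the audio itself.

```python
import random

# Minimal sketch of a challenge-response liveness layer on top of voice
# biometrics. Word list, thresholds, and function names are illustrative
# assumptions, not a real banking API.

WORDS = ["amber", "falcon", "harbor", "nickel", "orchid", "quartz", "tundra", "velvet"]
MAX_RESPONSE_SECONDS = 8.0  # a slow answer may indicate live voice synthesis

def issue_challenge(rng=random):
    """Ask the caller to repeat a random phrase that cannot be pre-recorded."""
    return " ".join(rng.sample(WORDS, 3))

def verify_response(challenge, transcript, elapsed_seconds):
    """Accept only a prompt, exact repetition of the challenge phrase."""
    if elapsed_seconds > MAX_RESPONSE_SECONDS:
        return False
    return transcript.strip().lower() == challenge.lower()

challenge = issue_challenge()
print(challenge)                                            # e.g. "quartz amber tundra"
print(verify_response(challenge, challenge, 3.2))           # True
print(verify_response(challenge, "wrong words here", 3.2))  # False
print(verify_response(challenge, challenge, 20.0))          # False: too slow
```

Because the phrase is random per call, a fraudster cannot prepare the right audio in advance, which shifts the attack from "replay a clip" to the much harder "synthesise a specific sentence live".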
Investment Scams
Typical Ponzi, pump-and-dump, or financial pyramid schemes involve an investment opportunity that sounds just too good to be true. Criminals make misleading positive statements, then cash out the gains and leave thousands of people empty-handed.
Many modern investment schemes involve cryptocurrencies. The novelty of this investment method plays into fraudsters' hands, as the market is highly volatile and many customers lack a proper understanding of how crypto trading or staking works.
These types of schemes are often promoted via social media, false press releases, and other public announcements. With deepfakes, it is easier to make it look like a celebrity or a person well-known in financial circles is supporting the new project. Convincing video impersonations may trick hundreds of people into investing money in a scheme.
The Impact of Deepfake Scams on Fintech
Obviously, the arrival of new deception techniques threatens all payment industry players. The range of negative effects deepfakes can cause is wide.
Fraudulent Transactions
Back in 2019, when AI technology was far less advanced and accessible than it is today, fraudsters cloned a CEO's voice to defraud a British energy company out of $243,000. Today, criminals can use real-time deepfake programs to mimic public figures and company executives on video calls, potentially leading to fraudulent transactions.
Brand Integrity and Market Value
Another use case for deepfake fraudsters is impersonating CEOs of all types of institutions, including fintech startups, on social media to spread disinformation about the company. Even if such a malicious campaign is quickly debunked, fake information going viral may affect corporate shares for a while. That gives criminals a chance to profit from short sales, while the company would bear some losses.
Customer Losses and Liability
Finally, when deepfake scammers target a financial institution's clients en masse, someone may have to refund the losses. In many cases, a fintech provider can also face legal fines if proven to have failed to introduce adequate cybersecurity measures to protect its customers.
To illustrate the threat, consider the authorised push payment fraud spreading on P2P applications such as Zelle. Although the company is not technically at fault, America's largest banks were forced to develop a compensation plan for customers falling victim to such scams as the scale of customer losses grew abnormally.
Need to Improve Cybersecurity
As cybercriminals' schemes evolve, fintech cybersecurity teams must be proactive in addressing these challenges. Companies should take the rising threat of deepfake scams into account and develop efficient cybersecurity policies and procedures to prevent potential harm.
Relying on regulatory measures alone to limit the use of deepfakes won't work. The issue is complicated because laws differ from country to country, so the legal boundaries in the home countries of the company, the media platform, the AI software vendor, and the defrauded consumer involved in a deepfake scam may contradict one another. Moreover, deepfakes can never be prohibited altogether, since the technology is also used for legitimate creative purposes.
What Can You Do to Avoid Deepfake Scams?
In the world of ever-evolving technology, deepfakes are bound to become part of our reality. Therefore, both customers and organisations need to acknowledge the threat and be able to take countermeasures.
Awareness Is Your First Defence Layer
A person who is aware of existing deepfake scams is better prepared to spot one. Deepfakes are not perfect (at least for now), so a trained eye can pick out irregularities even in a well-crafted impersonation video.
For instance, in a recent investment scheme using deepfakes of Elon Musk and BBC presenters, journalists found both verbal and visual mistakes the criminals had made. The first rule: when something looks too good to be true, look for red flags before acting.
In the corporate world, tricking a live employee is much easier than tricking a virtual chatbot, for instance. Hence, customer service representatives, along with other specialists, must undergo dedicated training in discerning and reporting suspicious activity.
AI vs AI: Let Technology Back Your Efforts
In the age of sophisticated impersonations, you can’t rely only on human intelligence to oppose cyber scams. What can better detect computer-manipulated content than AI itself?
A few commercial software providers have already developed AI solutions trained on millions of real and fake videos, claiming they can detect deepfakes and other AI manipulations with 96–99% accuracy. By comparison, a recent study found that humans can detect deepfake speech only 73% of the time.
Collaborate With Social Media and Regulators on Deepfake Prevention
Social media platforms are among the most powerful tools for quickly spreading disinformation that can damage a company's market position and financial standing. Flagging and reporting fake accounts and content is an important preventive measure against deepfake scammers.
Therefore, fintech players and other brands must monitor social media mentions of their firms to spot irregular activity and contact the platforms' representatives as soon as possible. When a scam is detected, it is also important to warn your customers about the danger before they fall victim to it.
On the regulatory level, authorities and industry players must unite to develop effective laws preventing deepfake scammers’ activity.
Conclusion: The Payment Industry Must Be Vigilant and Proactive When It Comes to Deepfakes
Deepfakes are highly convincing synthetic replicas of a real person's image and voice, created using advanced artificial intelligence. In criminals' hands, they can make social engineering scams, investment schemes, and bank impersonation fraud far more effective.
Legislation regarding deepfakes is not homogeneous across countries and industries. Thus, preventing the negative impact of deepfakes on your reputation, finances, and market position is chiefly a matter of stronger cybersecurity.
Raising awareness about the issue is a critical first step towards deepfake prevention. Using AI detection tools will make your efforts much more effective. Finally, fintech players cannot act alone in this fight. Close collaboration with social media and regulators will help address deepfake scams on a global scale.