
Online Payment Fraud Evolution: What to Expect in 2024?

Not only online payments but also the associated fraud types have been evolving over recent years. Billions of dollars are lost to fraud annually. To protect businesses and consumers from the risks of online payment fraud, one must be well aware of the evolving market challenges.


The first online payments were registered in the early 1990s. Over the last three decades, they have become faster, more convenient, and, of course, much more secure. Yet online payment fraud has not stood still either. Its evolution became especially evident during the 2020 pandemic, when e-commerce volumes dramatically increased.

Alarming forecasts suggest that cumulative global losses to online payment fraud between 2023 and 2027 will exceed $343 billion, with the volume of lost funds growing at a projected CAGR of over 40% between 2023 and 2028. Currently, North America and Europe lead the negative statistics on online fraud instances. However, by 2025, Asia Pacific is expected to take the lead.

Brief History of Online Payment Fraud

During the last thirty years, online payment fraud has significantly evolved, adapting tactics to changes in technology, consumer behaviour, and security measures.

A few important milestones in the history of online payment fraud include the emergence of phishing scams in the early 2000s and the rise of social media scams in the mid-2000s. In recent years, as crypto gained popularity, cryptocurrency scams have also come to the fore, including ICO scams and cryptojacking. Although cryptocurrencies are not currently a mainstream online payment method, they may well become one in the near future.


Phishing Scams

In the early 2000s, phishing became a dominant online fraud method. This type of fraud involves fraudulent emails or messages that mimic those from legitimate sources (e.g. banks or e-commerce sites). They are designed convincingly enough to trick users into providing their personal or financial information.

Until the late 2000s and early 2010s, email was the main platform for phishing attacks. However, the rise of mobile devices created new avenues for phishing fraud. That triggered the popularity of “premium SMS” scams, which charged premium fees for ordinary messages, and “smishing” – phishing scams carried out via text messages.

To this day, the threat of phishing has not been eliminated. Moreover, fraudsters now use next-gen technologies like generative AI to scale their phishing attacks. In addition to fake text links, one may encounter highly realistic manipulative videos created from the recorded voices and images of company executives, fake IDs and identity cards generated by AI, unsolicited calls performed by a deepfake audio bot, and other content.

Another novel type of phishing scam is search engine phishing. It occurs when cybercriminals manipulate search engine results on Google, Bing, and other engines to place fake websites or fraudulent phone support numbers at the top of search results.

Another recent scheme, known as pharming, is a type of phishing attack where scammers redirect website traffic to fake websites without the user’s knowledge or consent. If users then enter their credentials on these fake resources, their identity and bank details are stolen. Many fake websites mimic the names of well-known companies, with slight changes visible only in their URLs.
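One common defence against such lookalike domains is fuzzy matching against a brand list. The sketch below is a minimal illustration of the idea, not production code: the brand list, threshold, and function names are illustrative assumptions, and real systems use far richer signals (homoglyph tables, registration age, certificate data).

```python
# Minimal sketch: flag domains that are a near-miss of a known brand.
# KNOWN_BRANDS and the distance threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN_BRANDS = {"paypal.com", "amazon.com"}  # illustrative list

def looks_spoofed(domain: str, max_distance: int = 2) -> bool:
    """True if domain is close to, but not exactly, a known brand,
    e.g. 'paypa1.com' with the letter l swapped for the digit 1."""
    return any(0 < edit_distance(domain, brand) <= max_distance
               for brand in KNOWN_BRANDS)
```

For example, `looks_spoofed("paypa1.com")` returns True, while the genuine `"paypal.com"` (distance zero) and an unrelated domain both return False.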

Employing the evil twin phishing method, cybercriminals even create entire fraudulent Wi-Fi networks. Those who connect to these networks risk losing their sensitive data, and criminals can also harvest their IP addresses.

Card Not Present (CNP) Fraud

Card fraud is one of the most common types of fraud retailers face. As more fraud cases began to occur with physical bank cards, most countries transitioned to EMV chip cards in the 2010s. In response, fraudsters shifted their focus to online and card-not-present (CNP) fraud as a more attractive target. CNP fraud happens when fraudsters use stolen credit card information to make purchases online.

Having been around for a while, CNP fraud has also evolved. Now, cybercriminals do not only target card payments but also compromise e-wallets, account-to-account (A2A) payments, and Buy Now, Pay Later options.

One of the common ways to obtain a victim’s card details is an account takeover. To increase their chances of gaining access to user accounts, fraudsters increasingly use AI. In the wrong hands, this promising technology can help cybercriminals generate personalised messages to deceive users, automate social engineering attacks, or create more compelling content for highly convincing fake websites or social media profiles.

Another novel type of card-not-present attack is triangulation fraud. It is a sophisticated scheme in which a fraudster sets up a fake online marketplace with significant discounts to lure customers. A consumer seeking good deals places an order. The fraudster then collects the payment details and places an order with a legitimate marketplace, using another cardholder’s payment details and the delivery address of the original consumer. When that cardholder realises an unauthorised payment was made, the merchant faces a chargeback. It is estimated that retail chargeback volumes will grow by 42% between 2023 and 2026, reaching 337 million globally.


What Types of Online Payment Fraud Present the Most Risks in 2024?

Payment processors and financial institutions are continuously implementing ever more sophisticated fraud detection systems to identify and prevent fraudulent transactions. However, they should always stay vigilant against emerging threats and evolving fraudulent techniques.

AI-based Fraudulent Techniques

ChatGPT, the fastest-growing app of all time, has brought new opportunities for businesses. At the same time, hackers who leverage innovative AI tools and chatbots for nefarious purposes have created their own versions of ChatGPT-like generative AI solutions. Tools like WormGPT and PoisonGPT are unethical AI tools based on open-source large language models (LLMs).

WormGPT & PoisonGPT

For instance, WormGPT is used specifically to craft Business Email Compromise (BEC) phishing attacks aimed at exploiting large businesses. Based on the 2021 GPT-J language model, it is devoid of any anti-abuse restrictions. Thus, WormGPT can produce abusive language, hate speech, malware code, and whatever else hackers ask of it.

PoisonGPT is a technique for “poisoning” a trustworthy LLM supply chain with a malicious model. A malicious AI model integrated into an otherwise reliable LLM can perform various tasks, from spreading false information to stealing sensitive data. For instance, the model might behave normally most of the time but intentionally provide false information in response to specific requests. The technique mimics legitimate AI solutions so well that users find it hard to distinguish between legitimate and malicious tools. Although PoisonGPT itself was designed by security researchers specifically to illustrate vulnerabilities in AI systems, similar tools can easily be designed and employed by cybercriminals as well.

The Variety of AI Tools Facilitating Fraud Is Enormous

Several 2023 reports identified at least 50 fake AI apps scamming unsuspecting users via phishing attacks that capture their personal and payment data. Besides, the California DFPI noticed an increase in investment scams that leverage the hype around AI. Criminals claim to use AI to make money for investors, e.g. claiming that their AI tools can trade crypto on behalf of investors and generate too-good-to-be-true profits.

Legitimate AI chatbots like ChatGPT may inadvertently help malefactors generate clean, fluent text for phishing messages, which have historically been given away by spelling and grammar errors untypical of official companies’ customer communications.

Besides, there are social media scams that use sponsored posts featuring fake Facebook pages advertising “enhanced” versions of AI tools. Instead of downloading the advertised AI software, unsuspecting users have their passwords and other sensitive information compromised, which can then be used for online payment fraud.

Finally, AI voice scams use audio data from a target’s social media account to generate new audio content suggesting the target is in desperate need of money. Such audio files are sent via voicemail or voice note to the target’s family members to initiate a fraudulent money transfer.

Spoofing Data and Biometric Hacks

Among the innovative fraudulent techniques used in online payment schemes, there are many cases where advanced technologies facilitate data theft.

One example is fraudsters spoofing their digital fingerprint data points, including elements like browser details, IP addresses, media devices, MIME-type data, geographical location, and time zones. Such spoofing can mislead risk detection systems and preserve the criminal’s anonymity.

Such manipulations can help fraudsters bypass geographical restrictions imposed by e-commerce sites, for example, when certain locations are blocked due to imposed sanctions or being sources of high-value or high-volume fraud attempts. Fraudsters often use lesser-known, illegal VPN services and proxies that are hard to spot. 
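Risk engines counter such spoofing with consistency checks across fingerprint signals. The sketch below illustrates just one such check under simplifying assumptions: the country-to-offset table is a tiny illustrative stub, and the function name is hypothetical; real systems use full geolocation and timezone databases plus many more signals.

```python
# Hedged sketch of one fingerprint-consistency check: does the
# browser-reported UTC offset fit the country the IP geolocates to?
# COUNTRY_UTC_OFFSETS is an illustrative stub, not real reference data.

COUNTRY_UTC_OFFSETS = {            # plausible offsets in hours from UTC
    "US": {-10, -9, -8, -7, -6, -5, -4},
    "DE": {1, 2},
    "JP": {9},
}

def fingerprint_mismatch(ip_country: str, reported_offset_hours: int) -> bool:
    """True if the reported timezone is implausible for the IP's country.
    A mismatch is a risk signal, not proof of fraud on its own."""
    plausible = COUNTRY_UTC_OFFSETS.get(ip_country)
    if plausible is None:
        return False  # unknown country: don't flag on missing data
    return reported_offset_hours not in plausible
```

For instance, a German IP address paired with a browser clock set to UTC+9 would be flagged, while a Japanese IP with the same offset would not. In practice, such signals are combined and scored rather than used as hard blocks, precisely because legitimate travellers and VPN users also trigger them.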

Biometric data may be hard to imitate, but it is also vulnerable to hacking attacks. There are increasing reports of AI and other advanced technologies being leveraged to overcome the biometric defences deployed by banks. Thus, cybercriminals are using generative AI tools to spoof human images and voiceprints used for identification. In such cases, only additional security layers like behavioural biometrics can distinguish real users from bots and fraudsters.

Moreover, there are novel malware programs that steal stored biometric trait data. Over the past few years, biometrics hacking has become an increasingly popular way for criminals to access sensitive data. The attack can be carried out either by intercepting the data during transmission or by stealing it from a database. For e-commerce vendors and financial institutions, this means biometric verification can no longer be the ultimate authentication method.


Online payment fraud has been around for a few decades. It has evolved greatly in recent years, as advanced technologies like artificial intelligence (AI) facilitate phishing attacks, account takeovers, and card-not-present fraud. This year, as well as in the years to come, we can expect to witness a surge in cybercriminal activity boosted by deepfake technology, data spoofing, biometric hacks, and sophisticated triangulation fraud schemes. This is driving increased investment in cybersecurity systems globally and the development of “smarter” multi-level authentication methods.

Nina Bobro


Nina is passionate about financial technologies and environmental issues, reporting on the industry news and the most exciting projects that build their offerings around the intersection of fintech and sustainability.