
Scammers Use GenAI to Scale Phishing Attacks

More than 25% of companies in the United States currently prohibit their employees from using generative artificial intelligence. These measures, adopted partly for security reasons, are not an effective tool against cybercriminals who use AI to trick victims into handing over confidential information or paying fraudulent invoices.

Hackers who have adopted ChatGPT, or its dark-web counterpart FraudGPT, can produce highly realistic videos that are difficult to identify as fake. In some cases, such content is padded with fabricated profit and loss reports, and the videos may contain AI-generated IDs and identity cards. Sometimes cybercriminals feed recordings of company executives' voices and images into generative machine intelligence to create content designed to manipulate employees into handing over money.

Statistics indicate that the criminal use of AI is a widespread problem, and ignoring it amounts to denying objective facts. According to a survey by the Association for Financial Professionals, based in Rockville, Maryland, 65% of respondents said their organizations faced attempted or actual payments fraud in 2022. Among those who lost money to criminals, 71% were compromised through email.

The same report notes that large organizations with annual revenue of $1 billion are the most susceptible to email fraud. In such cases, cybercriminals most often send phishing emails. These messages are crafted to look formally authentic, borrowing the visual style and tone of trusted companies such as Chase or eBay, and ask the recipient to click on a link. Following the link takes the victim to a website that convincingly copies the web page of the impersonated company. There, the user is asked to log in and provide confidential data. If the victim completes all these steps, the criminals gain access to bank accounts or can steal personal data.
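To make the mechanics concrete, here is a minimal Python sketch of the core trick such emails rely on: the visible link text names a trusted brand while the underlying address points somewhere else. The brand list, function name, and example URLs are illustrative assumptions, not details of any real mail filter.

```python
from urllib.parse import urlparse

# Purely illustrative brand-to-domain map; real filters use large curated lists.
KNOWN_BRANDS = {"chase": "chase.com", "ebay": "ebay.com"}

def looks_like_phishing(link_text: str, href: str) -> bool:
    """Flag a link whose visible text names a trusted brand while the
    actual destination is a different domain."""
    host = (urlparse(href).hostname or "").lower()
    for brand, real_domain in KNOWN_BRANDS.items():
        if brand in link_text.lower():
            # Allow the brand's real domain and its subdomains only.
            if host == real_domain or host.endswith("." + real_domain):
                return False
            return True  # brand named, but the link goes elsewhere
    return False  # no known brand mentioned; this check stays silent

print(looks_like_phishing("Sign in to Chase", "https://chase-login.example.net"))  # True
print(looks_like_phishing("Sign in to Chase", "https://www.chase.com/login"))      # False
```

Real mail filters combine many such signals, but this single mismatch check already captures the pattern described above.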

Spear phishing follows a similar algorithm but is more targeted. Instead of mass mailings, emails are addressed to a specific individual or organization. To increase their persuasiveness, criminals first gather information about officials and the names of employees, data that is in most cases publicly available.

Sometimes victims of cybercriminals do not take any action at all; in these cases, attackers gain unauthorized access to the device directly. Information about the signs that hackers are interacting with a victim's gadget is publicly available: a user simply needs to enter a search query such as "how to know if my camera is hacked." In the information age, such knowledge has a special power and helps people avoid trouble.

The listed fraud schemes are not new and contain no revolutionary criminal ideas, but generative artificial intelligence makes the perpetrators much harder to identify, because the technology imitates reality convincingly in many of its forms. Even five years ago, scammers could be spotted by wonky fonts, odd wording, or grammar mistakes. ChatGPT or FraudGPT lets criminals create competent, highly plausible phishing messages that no longer betray their artificial origin. In some cases, generative artificial intelligence impersonates a company's CEO or a manager, using recordings of their voice to make a fake phone call. With access to the necessary data, the AI can even generate a video call.

In Hong Kong, an employee of a company headquartered in the United Kingdom received a message, allegedly from the CFO, requesting a money transfer of $25.6 million. The employee initially suspected that the message could be a phishing email. Those concerns, which turned out to be well founded, disappeared after a video call with the CFO and other colleagues. The employee transferred the money and contacted the head office only after completing the transaction. It turned out that the video call had been a deepfake.

Christopher Budd, a director at the cybersecurity firm Sophos, said that the work done to make AI-generated content look trustworthy is impressive in its results.

High-profile scandals involving digital copies of famous public figures reflect and confirm the rapid development of deepfake technology. Last summer, a fake Elon Musk appeared in a bogus investment advertisement promoting a non-existent platform, one of many examples of AI-generated content used for manipulation and deception. Similar videos are posted on popular social media platforms, including TikTok, Facebook, and YouTube.

Andrew Davies, global head of regulatory affairs at ComplyAdvantage, says that creating synthetic identity documents is becoming easier. According to the expert, criminals combine either stolen information or entirely fictitious data with generative artificial intelligence.

Cyril Noel-Tagoe, principal security researcher at Netacea, says that vast arrays of information available on the Internet can be used by criminals to create realistic phishing emails. In this context, the expert separately noted that large language models are trained on that same Internet data.

Generative artificial intelligence increases the authenticity of fakes, a serious problem that is spreading as automation grows and the number of websites processing financial transactions rises. Andrew Davies says that one catalyst for the evolution of fraud and financial crime is the transformation of the environment in which money moves.

Ten years ago, there were only a few ways to move money around electronically, and most of them ran through traditional banks. The rapid proliferation of payment solutions such as PayPal, Zelle, Venmo, and Wise has expanded the playing field, providing criminals with new opportunities. Traditional financial institutions increasingly rely on APIs that connect apps and platforms. These are efficient, high-tech solutions, but that does not negate the fact that every API is a potential point of attack for criminals.
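The defensive counterpart is simple in outline. The sketch below shows one standard way to harden an API endpoint against tampered or replayed payment requests: signing each request with HMAC-SHA256 over a timestamp and body. The secret, the payload, and the five-minute window are illustrative assumptions, not any specific provider's scheme.

```python
import hashlib
import hmac
import time

# Shared secret between the platform and the API client; in a real
# deployment this would come from a secrets manager, not source code.
API_SECRET = b"example-secret"

def sign_request(body: bytes, timestamp: int) -> str:
    """Compute an HMAC-SHA256 signature over timestamp + body so a
    tampered or replayed payment request can be rejected server-side."""
    return hmac.new(API_SECRET, str(timestamp).encode() + body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, timestamp: int, signature: str, max_age_s: int = 300) -> bool:
    if abs(time.time() - timestamp) > max_age_s:  # reject stale or replayed calls
        return False
    expected = sign_request(body, timestamp)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

ts = int(time.time())
sig = sign_request(b'{"amount": 100}', ts)
print(verify_request(b'{"amount": 100}', ts, sig))  # True
print(verify_request(b'{"amount": 999}', ts, sig))  # False: body was tampered with
```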

Fraudsters using generative artificial intelligence can produce, in the shortest possible time, messages whose reliability seems beyond doubt, even though that reliability is nothing more than a set of fabricated signals.

Netacea surveyed firms and found that 22% of companies had been attacked by bots designed to create fake accounts; in financial services, the figure is around 27%. Among companies that recorded an automated bot attack, 99% reported an increase in the intensity of such attacks in 2022. Among firms with annual revenue of $5 billion or more, 66% saw a moderate or significant increase in automated attacks in 2022.

Almost every sphere of activity faces the problem of fake accounts, but in the financial industry, this challenge from advanced technologies in the service of fraudsters is especially acute. In the United States, 30% of financial services companies were attacked in 2022. These firms also reported that between 6% and 10% of new accounts on their platforms are fake.

The financial services industry is fighting fraud based on generative artificial intelligence by building AI models of its own. Mastercard has developed a machine intelligence model that can detect fraudulent transactions and identify fake accounts used by criminals.
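Mastercard has not published its model, but the general idea of scoring transactions against historical behavior can be sketched with a generic anomaly detector. The features, data, and thresholds below are invented for illustration; the point is only that an unusually large, off-hours transfer to a never-seen payee stands out from learned normal patterns.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount_usd, hour_of_day, payee_seen_before]
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(80, 30, 500),   # typical amounts
    rng.integers(8, 20, 500),  # business hours
    np.ones(500),              # known payees
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A huge off-hours transfer to a never-seen payee scores as anomalous.
print(model.predict([[25_600_000, 3, 0]]))  # [-1] means flagged as an outlier
print(model.predict([[75, 12, 1]]))         # [1] means consistent with history
```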

There is currently a rise in so-called impersonation tactics, which convince victims that a financial transaction to a real person or company is legal and reliable. Ajay Bhalla, president of cyber and intelligence at Mastercard, says such actions are extremely difficult to detect: the victims pass all the necessary checks and send the money themselves, so within this tactic, the criminals never have to commit any technical violations.

Some scammers may even have inside information, one more sign that fraud is becoming more sophisticated.

Cyril Noel-Tagoe says that if money transfer requests normally arrive through an invoicing platform rather than email or Slack, requests arriving through other channels should be subjected to additional verification methods.

Real identification data can also be distinguished from fakes by making the authentication process more detailed. Some companies already request an identity document and a real-time selfie as part of the procedure. In the foreseeable future, companies may ask a person to blink, say a certain word, or perform another action that distinguishes live interaction with a real client from a pre-recorded fake video.
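A liveness check of this kind is essentially a challenge-response protocol: the server issues an unpredictable instruction that a pre-recorded deepfake cannot anticipate. The sketch below captures only the control flow; the challenge list and token handling are illustrative assumptions, and the actual video analysis is stubbed out as a plain string comparison.

```python
import secrets

# Illustrative pool of liveness challenges like those described above.
CHALLENGES = ["blink twice", "turn your head left", "say the word 'seven'"]

def issue_challenge() -> tuple[str, str]:
    """Pick an unpredictable challenge plus a one-time session token;
    a pre-recorded fake video cannot anticipate either."""
    return secrets.choice(CHALLENGES), secrets.token_hex(8)

def verify_session(expected_action: str, observed_action: str,
                   issued_token: str, echoed_token: str) -> bool:
    # In production, observed_action would come from a video-analysis
    # model; here it is a plain string purely for illustration.
    return (observed_action == expected_action
            and secrets.compare_digest(issued_token, echoed_token))

action, token = issue_challenge()
print(verify_session(action, action, token, token))        # True: live response
print(verify_session(action, "stay still", token, token))  # False: wrong action
```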

Firms will need some time to start responding effectively to modern cyber threats. Christopher Budd says that adding artificial intelligence to existing scams is like putting jet fuel on the fire.

Serhii Mikhailov

Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of various aspects of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He constantly keeps a close eye on the latest developments and innovations in these fields, as he believes they will have a significant impact on the future direction of the economy as a whole.