
Experts Weigh In on AI Chatbots' Potential as a Cybercrime Tool

HP Wolf Security has recorded the first known case in which criminals used generative artificial intelligence to write malicious code that distributes a remote access Trojan.


The case is telling: it points to the beginning of a likely structural shift in which the means of developing complex malware become more widely accessible, effectively creating a platform for the growth of cybercrime.

Recently, developers have increasingly relied on chatbots built on generative artificial intelligence, such as ChatGPT. These tools are used, among other things, to generate code and translate it between programming languages. There is now a reasonable case for treating chatbots as full-fledged members of development teams, and the productivity gains they offer are impressive.

At the same time, excessive use of chatbots, bordering on dependence on these virtual assistants, carries risks.

Lou Steinberg, founder and managing partner at CTM Insights and former CTO of TD Ameritrade, says that these artificial intelligence tools learn from a huge volume of open-source software that may contain design errors, bugs, or even intentionally planted malware. In his view, letting open-source code train AI tools is like letting a driver who once robbed a bank teach driving in high school.

Morey Haber, chief security adviser at BeyondTrust, says that criminals use AI-based chatbots to automate the creation of malicious software. According to the expert, attack components can be generated this way with minimal technical expertise. Haber noted that criminals can ask a chatbot to create scripts, such as a PowerShell script that disables mailboxes, without understanding the underlying code.

Lou Steinberg says that to counter these threats, companies should carefully review and scan any code written by generative artificial intelligence before it enters their codebases.
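
To illustrate what such a check might look like in practice, the following is a minimal sketch of a gate that scans AI-generated code with the open-source Bandit static analyzer before it is merged. The `generated/` staging directory and the severity threshold are assumptions chosen for the example, not part of any recommendation cited in the article.

```python
"""Minimal sketch: gate AI-generated Python code behind a static security scan.

Assumes the Bandit scanner (pip install bandit) is installed and that
AI-generated code is collected in a hypothetical 'generated/' directory
before being allowed into the main codebase.
"""
import json
import subprocess
import sys

GENERATED_DIR = "generated"  # hypothetical staging area for AI-written code


def scan_generated_code(target_dir: str) -> list[dict]:
    """Run Bandit recursively over target_dir and return the reported issues."""
    result = subprocess.run(
        ["bandit", "-r", target_dir, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    return report.get("results", [])


def main() -> None:
    issues = scan_generated_code(GENERATED_DIR)
    for issue in issues:
        print(f"{issue['filename']}:{issue['line_number']} "
              f"[{issue['issue_severity']}] {issue['issue_text']}")
    # Fail the pipeline if anything of medium or high severity was found
    # (an example policy; teams would tune this threshold themselves).
    if any(i["issue_severity"] in ("MEDIUM", "HIGH") for i in issues):
        sys.exit(1)


if __name__ == "__main__":
    main()
```

A script like this could run as a CI step, so that code suggested by a chatbot receives at least the same automated scrutiny as human-written contributions.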

It is worth noting that, against the background of the rapid development of AI, cybersecurity has become an increasingly pressing issue. One of the tools for countering such challenges in cyberspace is personal user awareness. For example, a search engine query such as "how to know if my camera is hacked" lets anyone find information about the signs of unauthorized access to the device.

Serhii Mikhailov


Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services. He keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.