ChatGPT is already being used by hackers to create malware

It didn’t take long for cybercriminals to hijack ChatGPT. The artificial intelligence is already being used to create malicious tools, warned Check Point Research, a company specializing in cybersecurity, on Wednesday, January 11.

A genuine web phenomenon since last December, ChatGPT is not only being used to answer more or less specific questions. Cybercriminals are using it to “craft” malicious code or to resell products on the dark web. Check Point Research warned on Wednesday about this new misuse of the famous artificial intelligence.

“On December 29, 2022, a thread titled ‘ChatGPT – Benefits of Malware’ appeared on a popular underground hacking forum. The thread’s author disclosed that he was using ChatGPT to recreate malware strains and techniques described in research publications and articles about common malware,” the cybersecurity researchers at Check Point Research warn. The point of the forum thread is to show other hackers how to prompt ChatGPT into producing malicious code.

An ease of use that raises concerns

More worryingly, ChatGPT’s ease of use means it can be exploited even by cybercriminals with little technical knowledge. “For example, [a hacker] can potentially turn the code into ransomware if the scripting and syntax issues are fixed,” said the company, which is also concerned about what more skilled hackers could do with the tool.

“While the tools we analyze in this report are fairly basic, it’s only a matter of time before more experienced cybercriminals improve their use of AI,” said Sergey Shykevich, head of the Threat Intelligence Group at Check Point.

Some malicious actors are going even further with the artificial intelligence, using it to write scripts for illegal dark web marketplaces where fraudulently acquired data or products are resold.
