WormGPT: Unleashing the Dark Side of Generative AI for Cybercrime
In the rapidly evolving landscape of artificial intelligence (AI), the misuse of generative AI technology by malicious actors has given rise to an alarming surge in cybercrime activities. According to a recent report by SlashNext, a new cybercrime tool called WormGPT has surfaced in underground forums, offering adversaries a sophisticated means to launch phishing and business email compromise (BEC) attacks.
Security researcher Daniel Kelley describes WormGPT as a blackhat alternative to well-known AI models like ChatGPT, specifically designed to facilitate malicious activities. With this tool, cybercriminals can automate the creation of highly convincing fake emails tailored to individual recipients, significantly enhancing the success rate of their attacks.
The author behind WormGPT claims it is the ultimate adversary of popular AI models like ChatGPT and boasts of its ability to facilitate various illicit activities. In the wrong hands, tools like WormGPT can become formidable weapons, particularly as organizations such as OpenAI and Google intensify their efforts to combat the exploitation of large language models (LLMs) for fabricating phishing emails and generating malicious code.
In a recent report, Check Point highlights that Google Bard's anti-abuse restrictions in the realm of cybersecurity are significantly lower than ChatGPT's. As a result, it is considerably easier to generate malicious content using Bard, further raising concerns about the potential misuse of generative AI technology.
Earlier this year, an Israeli cybersecurity firm shed light on how cybercriminals were bypassing ChatGPT's restrictions by leveraging its API, as well as trading stolen premium accounts and selling brute-force software that breaks into ChatGPT accounts using vast lists of email addresses and passwords.
The existence of WormGPT, operating without ethical boundaries, underscores the significant threat posed by generative AI. This tool allows even novice cybercriminals to launch swift and large-scale attacks, without requiring extensive technical expertise.
To compound the issue, threat actors are promoting "jailbreaks" for ChatGPT: specially engineered prompts and inputs that manipulate the tool into generating output that could disclose sensitive information, produce inappropriate content, or generate harmful code.
Kelley emphasizes that generative AI can create emails with impeccable grammar, making them appear legitimate and reducing the chances of being flagged as suspicious. This democratization of sophisticated BEC attacks means that even attackers with limited skills can use the technology, broadening the pool of potential cybercriminals.
The disclosure of these developments coincides with researchers from Mithril Security modifying an existing open-source AI model, GPT-J-6B, to spread disinformation. The modified model, known as PoisonGPT, was uploaded to Hugging Face, a public model repository, under a typosquatted name mimicking EleutherAI, the company behind GPT-J. From there it could be pulled into downstream applications, a form of LLM supply chain poisoning, and the technique proved successful in spreading disinformation.
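One practical safeguard against this kind of supply chain poisoning is to avoid loading models by name alone and instead pin an exact repository and revision. The following is a minimal sketch using the Hugging Face transformers library; the repository name is the legitimate EleutherAI one, but the commit hash shown is a placeholder you would replace with a revision you have verified.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Loading a model by name alone fetches whatever currently sits at that
# repo, so a typosquatted lookalike (or a silently replaced upload) is
# served exactly like the genuine model. Pinning the exact owner/repo
# and a specific commit hash narrows the window for substitution.
REPO = "EleutherAI/gpt-j-6b"  # exact owner and repo name, not a lookalike
REVISION = "0000000000000000000000000000000000000000"  # placeholder: use a verified commit hash

tokenizer = AutoTokenizer.from_pretrained(REPO, revision=REVISION)
model = AutoModelForCausalLM.from_pretrained(REPO, revision=REVISION)
```

Revision pinning does not authenticate the weights themselves, but it does prevent a lookalike repository name or a later malicious commit from being loaded unnoticed.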
As generative AI continues to evolve, the exploitation of AI models for malicious purposes underscores the urgent need for robust safeguards and proactive measures to protect against the growing threat of AI-driven cybercrime.
Source: thehackernews.com