- Analysts say defenders should expect more polished phishing at scale and quicker prototyping of commodity malware.
- Organisations urged to harden identity controls, email authentication, and script execution policies as these underground tools evolve.
Hackers are increasingly adopting large language models tailored for cyberattacks, with tools such as WormGPT and KawaiiGPT re-emerging on dark-web forums and developer platforms, according to new research.
Marketed as “AI without boundaries,” these models promise faster generation of phishing emails, malicious code, and basic ransomware scripts, and are being sold via subscription tiers and supported by active online communities.
Researchers found that many of these underground models appear to be repackaged versions of larger commercial systems but are trained on malware-heavy datasets and maintained by dedicated groups.
Some are explicitly promoted as hacking aids, while others are framed as dual-use tools for penetration testing, reflecting the broader “dual-use dilemma” in advanced technologies. “Any tool powerful enough to build a complex system can also be repurposed to break one,” the analysis by Palo Alto Networks’ Unit 42 noted.
Sophisticated language capabilities
WormGPT, created by black-hat developers, has returned in a more commercialised form, offering subscriptions reportedly priced at about $50 per month or $220 for lifetime access. Advertised across Telegram, forums, and its own interface, the latest version claims sophisticated language capabilities that can mimic executives or vendors, removing the tell-tale grammatical errors common in traditional phishing.
In tests, researchers said WormGPT could generate functional PowerShell ransomware and convincing ransom notes featuring threats such as 72-hour payment deadlines and price doubling upon expiry.
KawaiiGPT 2.5, another model cited by researchers, has been open-sourced on GitHub since July and promotes a lightweight Linux setup that takes “less than five minutes.”
Tongue-in-cheek branding
It is backed by a community of roughly 500 contributors and presented with tongue-in-cheek branding as “Your Sadistic Cyber Pentesting Waifu,” underscoring how some projects straddle the line between offensive tooling and purported security research.
While these models can accelerate low-skill cyber operations, researchers cautioned that much of the code they produce remains relatively basic and is often detectable by modern security tools.
However, they warned that continued development and commercialisation could lower the barrier to entry for inexperienced attackers and increase the quality and volume of social engineering campaigns.
The trend parallels broader shifts in real-world attacks, where automation and AI have begun to handle substantial portions of intrusion workflows.
Analysts said defenders should expect more polished phishing at scale and quicker prototyping of commodity malware, and urged organisations to harden identity controls, email authentication, and script execution policies as these underground tools evolve.
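One concrete piece of the email-authentication hardening analysts recommend is enforcing DMARC. As a rough illustration (not drawn from the research itself), the sketch below parses a DMARC TXT record and flags whether the policy actually enforces (`p=quarantine` or `p=reject`) rather than merely monitoring (`p=none`); record strings are passed in directly so no DNS lookup is needed:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip().lower()] = value.strip()
    return tags

def is_enforcing(record: str) -> bool:
    """True only if the record is valid DMARC and its policy blocks or quarantines failures."""
    tags = parse_dmarc(record)
    return (tags.get("v", "").upper() == "DMARC1"
            and tags.get("p", "").lower() in ("quarantine", "reject"))

# Enforcing policy: spoofed mail is rejected outright.
print(is_enforcing("v=DMARC1; p=reject; rua=mailto:dmarc@example.com"))  # True
# Monitoring-only policy: spoofed mail is still delivered.
print(is_enforcing("v=DMARC1; p=none"))  # False
```

In practice a defender would fetch the record from DNS at `_dmarc.<domain>` and pair this check with SPF and DKIM alignment; the parsing here covers only the policy tag.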