FraudGPT: New AI Model Sold on Dark Web Poses Major Cyberattack Threats
The Dark Side of AI Is Getting Real
A new AI-powered chatbot model called FraudGPT is circulating in dark web forums and Telegram channels, according to security researchers. The tool is being sold for as little as $200 per month, providing cybercriminals with an efficient way to craft large-scale phishing scams and cyberattacks.
Researchers at Netenrich uncovered evidence of FraudGPT, whose creator, operating under the alias 'CanadianKingpin', claims to have made over 3,000 sales. The tool is built on a Large Language Model (LLM) that remains unidentified.
FraudGPT allows people with limited technical skills to automate attacks that previously required expertise. By putting sophisticated capabilities in the hands of inexperienced attackers, it multiplies the risk significantly.
Using FraudGPT, cybercriminals can:
Generate malicious code to exploit system vulnerabilities
Create undetectable malware that bypasses antivirus software
Craft phishing pages that mimic legitimate websites, increasing success rates
Identify unverified credit card BINs (bank identification numbers) to conduct unauthorized transactions
Develop other hacking tools tailored to specific exploits
Find leaked data, websites, and marketplaces for stolen information
Generate scam pages and letters to deceive victims
Research coding and hacking techniques to improve skills
Identify websites suitable for using stolen credit card data fraudulently
FraudGPT follows WormGPT, another GPT model released in July for drafting business email compromise (BEC) scams — one of hackers' most widely used attack vectors.
For companies, AI adoption has been slow, due in part to security concerns. But "phishing as a service" tools represent a significant threat, given that many companies' cybersecurity readiness is still immature.
While detection tools exist, identifying AI-generated text reliably remains difficult. The rapid pace of AI model improvements makes it hard for experts to identify and combat automated outputs. Incidents such as Samsung engineers inadvertently leaking sensitive information through ChatGPT underscore the broader risks that come with tools like FraudGPT.
Still, certain safety measures can help defend against phishing emails and cyberattacks, even though OpenAI recently discontinued its AI text classifier and research questions whether AI-generated text can be reliably detected.
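As one illustration of such a safety measure, the sketch below uses Python's standard email library to flag a few common phishing signals in a raw message: missing or failed sender authentication, a Reply-To domain that differs from the From domain, and links that point away from the sender's domain. The header choices and heuristics are assumptions for demonstration only, not a production filter and not anything specific to FraudGPT.

```python
# Minimal, illustrative phishing-signal check.
# Assumptions: single-part plain-text message, simplistic heuristics.
import re
from email import message_from_string

SUSPICIOUS_AUTH = re.compile(r"\b(spf|dkim|dmarc)=(fail|softfail|none)\b", re.I)
URL_PATTERN = re.compile(r"https?://([^/\s\"'>]+)", re.I)

def phishing_signals(raw_email: str) -> list[str]:
    """Return human-readable warnings for a raw RFC 5322 message."""
    msg = message_from_string(raw_email)
    warnings = []

    # 1. Missing or failed sender authentication (SPF/DKIM/DMARC).
    auth_results = msg.get("Authentication-Results", "")
    if not auth_results or SUSPICIOUS_AUTH.search(auth_results):
        warnings.append("Sender authentication missing or failed")

    # 2. Reply-To domain that differs from the From domain.
    from_domain = msg.get("From", "").split("@")[-1].strip("> ").lower()
    reply_to = msg.get("Reply-To", "")
    reply_domain = reply_to.split("@")[-1].strip("> ").lower()
    if reply_to and reply_domain != from_domain:
        warnings.append(f"Reply-To domain ({reply_domain}) differs from From domain ({from_domain})")

    # 3. Links whose host does not belong to the sender's domain.
    body = msg.get_payload()
    if isinstance(body, str):
        for host in URL_PATTERN.findall(body):
            if from_domain and from_domain not in host.lower():
                warnings.append(f"Link points to {host}, not the sender's domain")

    return warnings

if __name__ == "__main__":
    sample = (
        "From: billing@example-bank.com\n"
        "Reply-To: support@evil.example\n"
        "Authentication-Results: mx.example.org; spf=fail\n"
        "Subject: Urgent: verify your account\n\n"
        "Click https://evil.example/login to verify your account."
    )
    for warning in phishing_signals(sample):
        print("WARNING:", warning)
```

Heuristics like these catch only crude attempts; well-crafted AI-generated phishing is precisely what makes layered defenses such as authentication enforcement, link rewriting, and user training necessary.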
The sophistication and automation of models like FraudGPT pose a major cyberattack threat.