A new artificial intelligence (AI) tool called GhostGPT is being used by cybercriminals to create malware, hack systems, and write convincing phishing emails. Security researchers at Abnormal Security found the AI model for sale on the Telegram messaging platform, with prices starting at $50 per week. GhostGPT appeals to hackers because it is fast, easy to use, and does not store user conversations, making their activity harder for authorities to trace.
GhostGPT is not the only AI being used for illegal activities. Similar tools such as WormGPT are also on the rise, offering criminals ways to bypass the safety controls built into mainstream AI models like ChatGPT, Google Gemini, Claude, and Microsoft Copilot. These unethical AI models are designed to assist in writing malicious code and carrying out cyberattacks, posing a major risk to businesses and individuals.

The rise of cracked AI models, modified versions of legitimate AI tools, has made it easier for hackers to gain unrestricted access to powerful AI systems. Security experts have been tracking these tools since late 2024 and report an increase in their use for cybercrime. This development is alarming for the tech industry and security professionals because AI was meant to help people and businesses, not serve as a weapon. If these malicious AI models continue to proliferate, companies and individuals could face increasingly sophisticated cyberattacks, making cybersecurity far more challenging. The need for stronger regulations and better security measures to prevent AI abuse is now more critical than ever.
