In an alarming showcase of how artificial intelligence can be misused, researchers from SentinelLabs revealed that AkiraBot, a spam bot utilizing OpenAI’s GPT-4o-mini model, successfully generated and sent tailored messages to over 80,000 websites, sidestepping conventional spam filters.
According to the researchers, Alex Delamotte and Jim Walter, AkiraBot operates by employing prompts that instruct the AI to replace variables with specific website names, thereby crafting messages that appear personalized and relevant. This strategy complicates efforts to filter out spam, as each message produced by the AI is unique, contrasting with previous spam techniques that relied on common templates.
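To illustrate the general technique the researchers describe, the sketch below shows how a fixed prompt combined with a per-site variable produces a different message for every target. This is a minimal, hypothetical example of prompt templating with the OpenAI API, not AkiraBot's actual code; the prompt wording, function name, and model choice are assumptions for illustration.

```python
# Minimal sketch of the prompt-templating approach described above: a fixed
# prompt plus a per-site variable yields unique wording for every target.
# Illustrative only; prompt text and helper names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a marketing assistant. Write a short outreach message "
    "tailored to the website you are given."
)

def generate_message(site_name: str) -> str:
    """Substitute the target site into the prompt and request a tailored message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"The target website is {site_name}."},
        ],
    )
    return response.choices[0].message.content
```

Because each call returns differently worded text rather than a shared template, keyword- or hash-based spam filters have no fixed pattern to match, which is why the researchers single out this tactic as hard to block.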
The analysis conducted by SentinelLabs highlighted the effectiveness of AkiraBot, noting that messages were successfully dispatched to more than 80,000 targeted sites between September 2024 and January 2025, while attempts against approximately 11,000 domains failed. This success rate underscores the capacity of LLM-generated content to read as relevant to recipients while evading standard filtering practices.
The researchers also noted that the AI's effectiveness lies in its ability to generate messages that reference the services of each targeted website, making the spam appear less generic. This poses significant challenges for website administrators and security personnel tasked with defending against spam attacks.
In response to these findings, OpenAI acknowledged the misuse of its technology, emphasizing that such applications violate the company's terms of service. OpenAI reiterated its commitment to combating improper use of its models in various contexts, including cybersecurity.