Tag: OpenAI

  • OpenAI’s GPT Powers Spam Attack on 80,000 Websites

    In an alarming showcase of how artificial intelligence can be misused, researchers from SentinelLabs revealed that AkiraBot, a spam bot utilizing OpenAI’s GPT-4o-mini model, successfully generated and sent tailored messages to over 80,000 websites, sidestepping conventional spam filters.

    According to the researchers, Alex Delamotte and Jim Walter, AkiraBot operates by employing prompts that instruct the AI to replace variables with specific website names, thereby crafting messages that appear personalized and relevant. This strategy complicates efforts to filter out spam, as each message produced by the AI is unique, contrasting with previous spam techniques that relied on common templates.
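The variable-substitution technique described above can be sketched in a few lines. The template text, variable name, and function below are illustrative assumptions for clarity, not AkiraBot's actual prompts or code:

```python
# Hypothetical sketch of the prompt-templating approach SentinelLabs
# describes: a generic template whose placeholder is filled in with each
# target site's name before the prompt is sent to the LLM, so every
# generated message is unique. The template wording and the
# "{{site_name}}" placeholder are assumptions, not the bot's real prompt.

def build_prompt(template: str, site_name: str) -> str:
    """Substitute the target site's name into a generic prompt template."""
    return template.replace("{{site_name}}", site_name)

TEMPLATE = (
    "Write a short outreach message for the website {{site_name}}. "
    "Mention the site by name and describe how our service could help it."
)

prompt = build_prompt(TEMPLATE, "example-store.com")
# Because the LLM receives a distinct prompt per target, each output
# differs, defeating filters that match repeated message templates.
```

The key point is that the uniqueness comes from the model's generation, not the template itself: even identical prompts to an LLM yield varied wording, and seeding each prompt with the target's name compounds that variation.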

    The analysis conducted by SentinelLabs highlighted the effectiveness of AkiraBot: messages were dispatched to over 80,000 targeted sites between September 2024 and January 2025, while delivery to approximately 11,000 domains failed. This reach underscores the ability of LLM-generated content to engage recipients while evading standard filtering practices.

    The researchers also noted that the AI’s power lies in its ability to tailor each message to the services offered by the targeted website, making the spam appear less generic. This poses significant challenges for website administrators and security teams tasked with defending against spam attacks.

    In a response to these findings, OpenAI acknowledged the misuse of its technology, emphasizing that such applications contravene the company’s terms of service. OpenAI expressed its commitment to combating the improper use of its chatbots in various contexts, including cybersecurity.

  • OpenAI Expands Bug Bounty Program and Cybersecurity Initiatives

    On March 26, OpenAI announced significant updates to its Cybersecurity Grant Program, bug bounty program, and overall AI security initiatives, all intended to strengthen its commitment to user security. The updated Cybersecurity Grant Program, which has been in place for two years, has now broadened its scope by accepting proposals for a wider range of cybersecurity projects. This includes prioritizing research in software patching, model privacy, detection and response, security integration, and agentic security.

    Notably, OpenAI is also introducing microgrants in the form of API credits for researchers with high-quality proposals. These microgrants are designed to support rapid prototyping of innovative cybersecurity ideas and experiments, further encouraging a culture of research and innovation in the field.

    The most notable update to the bug bounty program is a substantial increase in the maximum potential payout. OpenAI has raised the limit for ‘exceptional and differentiated critical findings’ from $20,000 to $100,000. The program, which debuted nearly two years ago in collaboration with Bugcrowd, has already rewarded 209 submissions, underscoring OpenAI’s commitment to maintaining high security standards. As Michael Skelton, vice president of operations at Bugcrowd, emphasized, the proactive nature of OpenAI’s security measures has garnered significant public interest.

    Furthermore, to address growing threats to its artificial general intelligence (AGI) technology, OpenAI is enhancing its security infrastructure through various initiatives. This includes deploying AI-driven defenses, collaborating with SpecterOps for ongoing security evaluations, and developing better strategies to prevent prompt injection attacks. The company aims to solidify its security stance while responding to an increasingly sophisticated cyber threat landscape.

    With these advancements, OpenAI not only aims to attract top security talent but also to preemptively address vulnerabilities before they can escalate into major incidents, as noted by Stephen Kowski, field CTO at SlashNext Email+ Security. As competition intensifies in the AI sector, the implications of these updates will likely resonate across the industry.