OpenAI Bans Accounts Used by Cybercriminals for Malicious AI-Assisted Activities

OpenAI has announced that it banned several ChatGPT accounts reportedly used by threat actors linked to Russian-speaking criminal groups and two Chinese nation-state hacking organizations. The accounts were used to develop malware and automate social media activity. According to the company's threat intelligence report, these actors used ChatGPT to refine Windows malware and improve their operational security.

Dubbed ‘ScopeCreep,’ the malware campaign centered on a trojanized version of Crosshair X, a legitimate video game tool. Users who downloaded the malicious software unwittingly installed a malware loader that fetched and executed additional payloads from an external server. OpenAI said the malware was engineered to escalate privileges, maintain stealthy persistence, and exfiltrate sensitive information.

OpenAI also detailed the tactics the actors used to cover their tracks, including registering with temporary email addresses, spreading incremental code improvements across multiple ChatGPT conversations, and employing techniques such as Base64 encoding and DLL side-loading to evade security tooling.
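To illustrate why Base64 encoding can help conceal activity: it is a reversible encoding, not encryption, but an encoded string no longer contains the original characters, so naive keyword-based scanners miss it. A minimal sketch using Python's standard library (the command string here is illustrative, not taken from OpenAI's report):

```python
import base64

# Illustrative string a keyword scanner might flag in plaintext.
plaintext = "cmd.exe /c whoami"

# Encoding hides the literal substring from simple pattern matching...
encoded = base64.b64encode(plaintext.encode()).decode()

# ...but anyone (including defenders) can trivially reverse it.
decoded = base64.b64decode(encoded).decode()
```

After encoding, the substring "whoami" does not appear in `encoded`, yet decoding recovers the original exactly, which is why defenders commonly scan for and decode Base64 blobs rather than relying on plaintext signatures alone.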

The China-linked threat actors used ChatGPT for a range of purposes, including open-source research and script modification. They also sought help with infrastructure setup, such as administering Linux systems and managing software development projects. OpenAI's findings underscore the risk posed by the misuse of AI capabilities and highlight the importance of robust cybersecurity defenses.