Cybercriminals Exploit Popular AI Tools to Distribute Ransomware and Malware

According to a recent report from Cisco Talos, cybercriminals are exploiting the popularity of artificial intelligence (AI) tools such as OpenAI’s ChatGPT and InVideo AI to distribute several ransomware families and destructive malware.

The researchers revealed that malicious actors are promoting fake installers that masquerade as genuine applications, spreading threats including the CyberLock and Lucky_Gh0$t ransomware families as well as a newly discovered malware strain named Numero. CyberLock encrypts specific file types on victims’ systems, while Lucky_Gh0$t targets smaller files for encryption. Both ransomware families employ sophisticated techniques, including privilege escalation, to carry out their attacks.

A notable case involves a bogus website suspected of impersonating a legitimate lead-monetization platform in order to promote a fake AI solution called NovaLeadsAI. Users misled into downloading the installer receive a malware-laden .NET executable instead of the promised AI tool. The ransomware not only encrypts files but also demands a $50,000 ransom, with the perpetrators claiming the funds will support humanitarian efforts in regions affected by injustice.

Moreover, the Lucky_Gh0$t ransomware, disguised as an installer for a premium version of ChatGPT, uses a deceptive executable name to trick users into launching the malware themselves. Adding to the threat landscape, a counterfeit InVideo AI installer has emerged that deploys the destructive Numero malware, which severely damages infected systems by corrupting their graphical user interface. As adoption of AI tools surges, this trend underscores the need for heightened vigilance against such malicious schemes.

For detailed insights, refer to the original article by Cisco Talos.

In light of these findings, cybersecurity experts urge users to exercise caution when downloading applications, especially those that promise free trials or subscriptions. The rise of fake AI tools illustrates the lengths to which cybercriminals will go to exploit trust in technology, prompting a reevaluation of security practices within the industry.
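One practical precaution along these lines is to verify an installer's cryptographic checksum against the value published by the legitimate vendor before running it. The following is a minimal sketch in Python; the function names and the idea of a vendor-published SHA-256 value are illustrative assumptions, not part of the Talos report.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large installers don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_download(path: str, expected_sha256: str) -> bool:
    """Return True only if the downloaded file's digest matches the
    checksum published by the vendor (hypothetical workflow)."""
    return sha256_of(path) == expected_sha256.lower()
```

A mismatch does not identify the specific threat, but it is a strong signal that the file is not the one the vendor shipped and should not be executed.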