Threat actors are increasingly using counterfeit artificial intelligence (AI) tools to lure unsuspecting users into downloading a sophisticated information stealer known as Noodlophile. According to Morphisec researcher Shmuel Uzan, these actors have shifted away from traditional phishing lures, opting instead for convincing AI-themed platforms promoted through seemingly legitimate social media channels.
The campaign has proven remarkably successful, with social media posts garnering over 62,000 views and attracting users in search of AI tools for video and image editing. Notable fake platforms identified include Luma Dreammachine AI and Luma Dreammachine.
When users are directed to these deceptive websites, they are encouraged to upload their video or image content to use the purported AI-powered creation services. However, instead of receiving the promised AI-generated material, they unwittingly download a malicious ZIP file titled “VideoDreamAI.zip.” The archive contains an executable that kicks off an infection chain culminating in the deployment of Noodlophile Stealer, which harvests sensitive data including browser credentials and cryptocurrency wallet information.
The developer behind the Noodlophile malware is believed to be based in Vietnam and claims to be a “passionate Malware Developer” on their GitHub profile. The country has been noted for its flourishing cybercrime ecosystem, which has been linked to an array of malware campaigns targeting various online platforms, particularly Facebook. Previous reports point to a broader trend of bad actors exploiting public interest in AI technologies for malicious ends, including instances in which Meta removed over 1,000 harmful URLs from its services in 2023.