A significant security vulnerability in McDonald’s AI-powered recruitment platform, McHire, has resulted in the exposure of personal information for over 64 million job applicants. The breach, which has raised serious concerns about data privacy, was identified by security researchers Ian Carroll and Sam Curry. They discovered that the vulnerability allowed unauthorized access to sensitive data, including names, email addresses, phone numbers, and home addresses, due to easily guessable default credentials and a critical programming flaw.
The investigation was sparked after users reported unusual responses from the McHire chatbot, Olivia. The researchers quickly pinpointed two critical weaknesses. First, the administrative login used by restaurant owners accepted "123456" as both the username and password, granting easy access to a test restaurant account. Second, they discovered an Insecure Direct Object Reference (IDOR) vulnerability in the platform's internal API, which let a user access other applicants' confidential information simply by altering a numerical identifier in a web address.
According to the researchers' blog post, this oversight allowed them to view millions of applications, including unmasked contact details and authentication tokens, and the IDOR vulnerability also exposed applicants' chat transcripts with the bot. Recognizing the severity of the exposure, Carroll and Curry contacted McDonald's and Paradox.ai on June 30, 2025, at 5:46 PM ET.
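To illustrate the class of flaw involved, the sketch below shows how an IDOR works in a generic API handler and how an ownership check closes it. The endpoint, record layout, and function names here are hypothetical, purely for illustration; they are not drawn from the actual McHire codebase.

```python
# Hypothetical in-memory "database" of applicant records, keyed by a
# sequential numeric ID -- the kind of identifier an attacker can enumerate.
APPLICATIONS = {
    101: {"owner": "alice", "chat": "Alice's chat transcript"},
    102: {"owner": "bob", "chat": "Bob's chat transcript"},
}

def get_application_vulnerable(application_id: int, requesting_user: str):
    # IDOR: the record is returned for any valid ID with no check that the
    # requester owns it, so a user can walk the ID space (101, 102, ...)
    # and read other applicants' data.
    return APPLICATIONS.get(application_id)

def get_application_fixed(application_id: int, requesting_user: str):
    # Fix: authorize the requester against the record before returning it.
    record = APPLICATIONS.get(application_id)
    if record is None or record["owner"] != requesting_user:
        return None  # a real API would respond with HTTP 403 or 404
    return record

# "bob" requesting a record that belongs to "alice":
leaked = get_application_vulnerable(101, "bob")  # returns Alice's record
denied = get_application_fixed(101, "bob")       # returns None
```

The fix is a per-request authorization check rather than reliance on the ID being hard to guess; sequential identifiers make enumeration trivial, which is why the researchers could iterate across millions of records.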
In a swift response, McDonald’s confirmed receipt of the report shortly after it was filed and disabled the default administrative credentials by 7:31 PM ET the same day. Paradox.ai announced that they had resolved the issues completely by July 1, 2025, at 10:18 PM ET. Both companies emphasized their commitment to enhancing data security protocols following this critical breach.
Kobi Nissan, Co-Founder & CEO at MineOS, commented on the incident, cautioning that companies must ensure robust security measures are in place before deploying AI solutions to interact with customers. “This incident is a reminder that when companies rush to deploy AI in customer-facing workflows without proper oversight, they expose themselves and millions of users to unnecessary risk,” he stated, emphasizing the need for comprehensive security and governance frameworks around AI systems.
Nissan highlighted that any AI system that collects or processes personal data should adhere to stringent privacy and security measures similar to those applied to core business systems. He advised that as AI adoption accelerates, businesses need to recognize these technologies as regulated assets and implement governance frameworks to ensure accountability from the start.