Cloudflare Confirms 1.1.1.1 Outage Stemmed from Internal Misconfiguration, Not Attack

Cloudflare has clarified that the recent outage affecting its 1.1.1.1 Resolver service was caused by an internal misconfiguration rather than a cyberattack or BGP hijack. The company's post-mortem aims to quash speculation about the July 14 incident, which left DNS resolution unavailable for users around the world.

The incident was triggered by a configuration error linked to a future Data Localization Suite (DLS) project, in which the 1.1.1.1 Resolver's IP prefixes were mistakenly attached to the pre-production DLS service. The error was introduced in a configuration change on June 6 and lay dormant until a later update activated it, at which point the Resolver's IP addresses were unintentionally withdrawn from the production data centers serving them.
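To illustrate the failure mode, the sketch below models, in simplified and hypothetical terms that are not Cloudflare's actual tooling, how binding an anycast prefix to the wrong service topology can leave the error latent until an unrelated refresh recomputes where each prefix is announced. All names and data structures here are illustrative assumptions.

```python
# Hypothetical, simplified model of a service-topology refresh. It is NOT
# Cloudflare's real configuration system; it only illustrates how a prefix
# bound to the wrong service can be withdrawn globally once any change
# forces the topology to be recomputed.

PRODUCTION_LOCATIONS = {"AMS", "FRA", "SIN", "SJC"}

# Each service lists the prefixes it owns and the locations where those
# prefixes should be announced.
services = {
    "public-resolver": {
        "prefixes": {"1.1.1.0/24", "1.0.0.0/24"},
        "locations": set(PRODUCTION_LOCATIONS),
    },
    # Illustrative June 6 error: the resolver prefixes are also tied to a
    # pre-production DLS service that is not live anywhere yet.
    "dls-preprod": {
        "prefixes": {"1.1.1.0/24", "1.0.0.0/24"},
        "locations": set(),          # inactive: no locations
    },
}

def compute_announcements(services):
    """Return {prefix: locations}; the last service to claim a prefix wins,
    the kind of implicit override that keeps the error latent."""
    announcements = {}
    for svc in services.values():
        for prefix in svc["prefixes"]:
            announcements[prefix] = svc["locations"]
    return announcements

# Until a refresh is triggered, the old (correct) announcements remain in
# effect. Adding a test location to the inactive service forces a global
# recomputation of the topology...
services["dls-preprod"]["locations"].add("TEST-LOC")

after = compute_announcements(services)
print(after["1.1.1.0/24"])   # {'TEST-LOC'}: withdrawn from every production site
```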

At 21:48 UTC on July 14, a change to the DLS added a test location to the still-inactive service, triggering a global refresh of network configuration. As a result, the Resolver's prefixes were withdrawn and the main public DNS resolver became unreachable. Customers began reporting issues almost immediately, and DNS traffic to the resolver dropped sharply within about four minutes of the configuration update.
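An outage like this shows up instantly in external monitoring, because the resolver simply stops answering. The probe below is a minimal, standard-library sketch of such a check: it sends a single UDP DNS query for an A record directly to 1.1.1.1 and reports whether any response arrives within a timeout. The query follows the DNS wire format; the domain name and timeout are arbitrary choices.

```python
import socket
import struct
import secrets

def resolver_answers(server="1.1.1.1", name="example.com", timeout=2.0):
    """Send one UDP DNS query for an A record and return True if the
    resolver replies at all within the timeout (a basic liveness check)."""
    query_id = secrets.randbits(16)
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(header + question, (server, 53))
        try:
            data, _ = sock.recvfrom(512)
        except socket.timeout:
            return False
    # Any well-formed reply carrying our query ID counts as "the resolver is up".
    return len(data) >= 12 and struct.unpack("!H", data[:2])[0] == query_id

if __name__ == "__main__":
    print("1.1.1.1 responding:", resolver_answers())
```

During the outage window a probe like this would time out, matching the traffic decline Cloudflare observed from 21:52 UTC onward.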

Cloudflare detected and declared the incident by 22:01 UTC, and by 22:20 UTC it had reverted the misconfiguration and begun re-advertising the affected BGP prefixes. Full service was restored at 22:54 UTC. In its post-mortem, Cloudflare acknowledged the shortcomings of the legacy systems involved and says it is accelerating migration to improved configuration systems to prevent similar incidents. For further details, refer to Cloudflare's official announcement.
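Because the prefixes were withdrawn and then re-advertised in BGP, outside observers could follow both phases in public routing data. As a rough sketch, assuming RIPEstat's public routing-status endpoint and its response layout (neither of which is described in the article), the snippet below asks how widely one resolver prefix is currently seen by RIPE RIS peers; the exact field names parsed here are an assumption.

```python
import json
import urllib.request

# Assumption: RIPEstat's public "routing-status" data endpoint; the response
# fields accessed below may differ from the current API documentation.
URL = "https://stat.ripe.net/data/routing-status/data.json?resource=1.1.1.0/24"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)["data"]

# Visibility counts how many RIS peers currently see the prefix announced;
# near zero during the withdrawal, back to normal after re-advertisement.
visibility = data.get("visibility", {}).get("v4", {})
print("RIS peers seeing 1.1.1.0/24:",
      visibility.get("ris_peers_seeing"), "of", visibility.get("total_ris_peers"))
```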