The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems' permissions, which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.
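The doubling mechanism is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Cloudflare's actual code: it assumes a metadata query that does not filter by schema, so once a permissions change makes a second, mirrored schema visible, the query returns every feature row twice. The schema names and row shape are invented for the example.

```rust
// Hypothetical sketch: an unfiltered metadata query simulated in Rust.
// Each (schema, columns) pair stands in for a database schema that the
// querying role can see; the function collects rows from all of them.
fn visible_feature_rows(schemas: &[(&str, &[&str])]) -> Vec<String> {
    schemas
        .iter()
        .flat_map(|(_schema, cols)| cols.iter().map(|c| c.to_string()))
        .collect()
}

fn main() {
    let cols: &[&str] = &["feature_a", "feature_b"];

    // Before the permissions change: only one schema is visible.
    let before = visible_feature_rows(&[("default", cols)]);

    // After the change: an underlying mirrored schema becomes visible
    // too, and the query, lacking a schema filter, returns each row twice.
    let after = visible_feature_rows(&[("default", cols), ("mirror", cols)]);

    assert_eq!(after.len(), 2 * before.len());
    println!("rows before: {}, after: {}", before.len(), after.len());
}
```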
The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever-changing threats. The software had a limit on the size of the feature file that was below its doubled size; the oversized file therefore caused the software to fail.
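The failure mode amounts to a hard-coded capacity check being exceeded. The following Rust sketch is illustrative only: the constant `FEATURE_LIMIT`, the `load_features` function, and the error handling are assumptions standing in for whatever the routing software actually does when the file grows past its preallocated limit.

```rust
// Hypothetical capacity for the preallocated feature table.
const FEATURE_LIMIT: usize = 200;

#[allow(dead_code)]
struct Feature {
    name: String,
}

// Parse one feature per non-empty line, refusing files over the limit.
fn load_features(raw: &str) -> Result<Vec<Feature>, String> {
    let mut features = Vec::with_capacity(FEATURE_LIMIT);
    for line in raw.lines().filter(|l| !l.is_empty()) {
        // Duplicated database entries double the line count; once the
        // file grows past the hard limit, loading fails outright.
        if features.len() >= FEATURE_LIMIT {
            return Err(format!(
                "feature file exceeds limit of {FEATURE_LIMIT} entries"
            ));
        }
        features.push(Feature { name: line.to_string() });
    }
    Ok(features)
}

fn main() {
    // A feature file that doubled in size past the limit now fails to load.
    let doubled: String = (0..2 * FEATURE_LIMIT)
        .map(|i| format!("feature_{i}\n"))
        .collect();
    match load_features(&doubled) {
        Ok(f) => println!("loaded {} features", f.len()),
        Err(e) => eprintln!("bot management update failed: {e}"),
    }
}
```

A more forgiving design would keep serving with the last known-good file when a new one fails validation, which is effectively what the rollback described next achieved by hand.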
We initially, and wrongly, suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack. Once we correctly identified the core issue, we were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.
https://blog.cloudflare.com/18-november-2025-outage