Your security is only as good as you can prove
At one point I was involved in an ongoing incident where customer data was being leaked. My team and I didn’t know where the leak was coming from, so we frantically built new monitoring apparatus to watch flows of data, hoping to catch the adversary in the act. All the while, we had C-levels breathing down our necks, deathly afraid of what would happen to our relationships with business partners when the leak was eventually disclosed.
Energy drinks were consumed. Software engineering that would normally take months instead happened in hours. The attackers continued attacking us, and the brand new monitoring showed…not a peep!
To cut a long story short, the attack was occurring entirely in meatspace: the adversary was employing very traditional social engineering techniques and didn’t exploit any software-based vulnerabilities at all. The software-side infosec concerns turned out to be red herrings, and the organization frantically expended vital resources chasing those false leads instead of looking for the real cause.
The lesson I take from this is that an organization’s infosec threat model needs to do more than simply foil attackers: it must be capable of excluding entire categories of infrastructure from suspicion when the organization is actively under attack. It must do so with rather high confidence: confidence enough that an engineering manager can quickly report to a C-level “my team has reviewed our systems, identified all relevant attack vectors, and we know that our systems weren’t involved in whatever the adversary is exploiting.”
This is a higher bar than most software engineers will hold themselves to in isolation. In isolation, I have heard plenty of software engineers say things like “I see that exploit in theory, but it seems unlikely that an attacker would find that issue and know to exploit it in combination with the other requisite exploits needed to produce a viable attack.” And 99% of the time they’re right! (More formally: in 99% of years, that exploit will remain unexploited).
But two nines isn’t enough in this case. As our C-levels knew, the organization has only a single brand: a single notional “tank” of trust from customers and business partners. If the organization is going to run 10 “99% secure” systems for 10 years, the probability that all of them will remain un-hacked is only 37%¹! I think this is one reason (among many) that security teams exist: they help the organization multiply risk across the number of systems and the passage of time, ultimately producing a risk model that is stricter than any one software team would produce on its own.
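To spell out that arithmetic (under the same independence assumption flagged in the footnote): ten systems over ten years is 100 system-years, each with a 99% chance of staying unbreached, so

P(no breach anywhere) = 0.99^(10 × 10) = 0.99^100 ≈ 0.366

In other words, there is roughly a 63% chance that at least one of those “99% secure” systems gets hacked within the decade.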
¹ This math assumes that attacks arrive independently, which is unlikely to be true in practice. If we don’t assume independent attacks, the actual numbers will be much worse.