Martin and Rich recorded a bunch of micro-podcasts at the RSA conference last week. The latest episode features David Mortman of Echelon One. The point they make is that organizations need to accept that security measures will fail.

I am from the Netherlands, roughly 65% of which would be below sea level if it were not for some fancy engineering. After the catastrophic flood of 1953, we embarked on a large-scale water-management project called the Delta Works. As a nation, we are fairly good at water management; most large-scale water projects worldwide are done by Dutch engineers.

Having always been dependent on dikes, dams, and other defenses, the Dutch as a people are used to living with some form of threat. One thing we know is that our water controls will fail some day. While we try hard to prevent it from happening, failure is inevitable.

As a result, we have large areas of the country designated as flood plains. Dikes and dams are designed with deliberate weaknesses in them, so that when they break, we know how they will break and where the breaches will occur. By designing for failure, we can plan our flood watches better.

Information security is very much like this. We can only do so much before costs become prohibitive.

Why not design our information flows in such a way that we can predict where (and how) their security controls will fail, and plan for that failure? We can then put containment and corrective controls in place for when the breach happens, concentrate our monitoring on the expected failure points, and do other good things.
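To make that concrete, here is a minimal sketch of the idea (all names here are hypothetical, not from the podcast): force untrusted records through one deliberately narrow gateway, so that when a control fails, it fails at a known point where containment and monitoring are already concentrated.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("breach-point")


class DesignatedFailurePoint:
    """A deliberate 'weak spot' in the data flow: every untrusted
    record must pass through here, so a control failure always
    surfaces at this one well-monitored place (the security
    analogue of a dike designed to breach in a flood plain)."""

    def __init__(self, validator, quarantine):
        self.validator = validator    # predicate: is this record safe?
        self.quarantine = quarantine  # containment: where bad records go

    def admit(self, record):
        if self.validator(record):
            return record  # safe: pass it into the trusted zone
        # The control failed exactly where we planned for it to fail:
        # contain the record and alert the concentrated monitoring.
        self.quarantine.append(record)
        log.warning("control failed at designated point: %r", record)
        return None


quarantine = []
gate = DesignatedFailurePoint(
    validator=lambda r: isinstance(r, str) and len(r) < 64,
    quarantine=quarantine,
)

for record in ["normal input", "x" * 1000, 42]:
    if gate.admit(record) is None:
        pass  # corrective controls would run here

print("quarantined:", quarantine)
```

The point is not the validator itself but the topology: a single choke point makes the failure mode predictable, which is exactly what the flood-plain design buys the Dutch.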

Excellent point, gentlemen.

Oh, and just for Rich's reading pleasure: I disagree!