Information security vs. flying an airplane
I live on Long Island and, depending on the wind, right under the final approach paths for John F. Kennedy International Airport. The planes pass overhead just as their landing gear is extending, which means they are low and noisy.
While lying in bed, I was listening to them and could not help but
think that a pilot's job must be very similar to that of a security
professional. Professional pilots on modern airplanes do not spend the
majority of their time flying the plane. Instead, they are constantly
running through scenarios. What can go wrong in the next 20 minutes? If
it happens, what do I do? What is the closest alternative airport to
which I can go in case of trouble? What do I do if I hit wind shear on
my final approach to the runway? Are my instruments giving me correct
readings? Am I following the directions of the air traffic controller?
Pilots are constantly evaluating their equipment and the environment
around them. That is very similar to what security professionals do.
We spend a lot of time writing policy, implementing controls, educating
users, etc. This phase is commonly known as preparation. We
also spend time reviewing system log files, IDS alerts, application log
files, etc. When we are doing that, we are looking for indications that
our security controls have failed. In other words, we are identifying
weak spots and potential incidents. A professional pilot has his
instrument panel for exactly that reason. His windows are just a
convenient way to get some daylight in; they do not offer much of a
view. Various visual and audible clues generated by his management console will alert him about unusual circumstances.
When we do indeed spot a problem, we jump into action. We execute a
previously planned response, or we improvise on the spot, depending on
the circumstances. Our first priority is to contain the
problem. When a pilot gets a warning that one of his engines is on fire,
he will probably shut it down and activate the fire suppression system.
When we identify a machine that has been compromised, we generally
isolate it from the network (or cut power altogether to prevent
contamination of the system) and stop malware from spreading further.
The parallel becomes more far-fetched when we consider eradication and recovery.
Recovering from a mechanical failure will almost always require a pilot
to land his plane and have a specially trained crew of ground engineers
fix whatever is wrong. In the case of a compromised computer system,
eradication and recovery are often handled by system administrators who
rebuild the box and by security administrators who keep a close eye
out for returning attackers.
I find thought exercises like this fascinating. I'm not sure where this
thought came from, or why it stuck long enough for me to actually blog
about it. Next time, I might contemplate the role of air traffic
controllers, and see if there is an analogy to be drawn there.
Auditors, perhaps? Consultants, maybe?