Monday, March 10, 2014

Fundamental lessons learned from recent data breaches

Higher Education has seen its fair share of data breaches recently. This past week, the University of Maryland, Indiana University, the University of North Dakota, and Johns Hopkins University announced breaches.

The good news is that, slowly, breach notifications are starting to become a little more informative, which provides us with good opportunities to reflect on our own infrastructure.

Such reflection doesn't have to take forever, or lead to bulky reports. For example:

University of Maryland: Full compromise of a system used to manage ID cards and exfiltration of PII. Root cause of the breach's impact: excessive proliferation of private information.

Indiana University: A web server was reconfigured in a way that lowered its security posture while PII was stored on it. Root cause: insufficient change management for production configurations and lack of awareness of where PII is located.

University of North Dakota: Unauthorized access to an account with privileged access. Unknown how access was obtained. Root cause: weak authentication and possibly insufficient access control.

Johns Hopkins University: A coding error in a public-facing website allowed access to a back-end database. Root cause: insufficient coding standards (or enforcement of such standards) and excessive access to the back-end database, combined with a lack of active vulnerability scanning.
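The statement does not say what the coding error was, but flaws in this class are very often injection bugs. Purely as a hypothetical sketch (the table, the field names, and the choice of Python with sqlite3 are mine for illustration, and imply nothing about the actual site), parameterized queries are exactly the kind of coding standard that prevents them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id TEXT, name TEXT)")
conn.execute("INSERT INTO students VALUES ('1001', 'Alice')")

def lookup_student(user_supplied_id):
    # Vulnerable pattern: string concatenation lets crafted input
    # rewrite the query ("' OR '1'='1" would return every row).
    #   conn.execute("SELECT name FROM students WHERE id = '" + user_supplied_id + "'")
    # Parameterized pattern: the driver binds the value as data, never as SQL.
    return conn.execute(
        "SELECT name FROM students WHERE id = ?",
        (user_supplied_id,),
    ).fetchall()

print(lookup_student("1001"))         # [('Alice',)]
print(lookup_student("' OR '1'='1"))  # [] -- the injection attempt fails
```

Pairing that with a database account that can only read the tables the site actually needs addresses the "excessive access" root cause in the same spirit.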

From these four breaches, we can learn a few higher level lessons:
  • Identify and limit data collection and proliferation to necessity, rather than convenience.
  • Actively manage vulnerabilities, both in terms of detection as well as in terms of remediation.
  • Implement strong authentication, including password recovery protocols.
  • Implement strong access control.
  • Implement strong audit trails.
  • Develop and implement hardened configurations and manage changes.
With all of this, it is imperative that higher management is informed transparently and fully. Make sure that you are not the highest person in the hierarchy who realizes what risks exist, what the likelihood of exploitation is, and what the impact would be.

Looking back at our own environment, we can identify where we are lacking, and then plan a path to improve how we identify and manage risks. Subsequently, we can work to obtain funding and buy-in, and get to work.

Thursday, February 20, 2014

University of Maryland data breach

Unfortunately, the University of Maryland has the dubious privilege of counting itself among data breach victims. A message posted to the University's web site notifies the public that some 300,000 records containing private information have been breached.

I am not going to speculate about the root cause of the breach, but I am hopeful that more details will become available as time progresses. The fact that there is an active law enforcement investigation does not help in obtaining transparency though.

The message itself did contain a lot of good information. Enough that I want to highlight some of it here:

"Last evening, I was notified [...] that the University of Maryland was the victim of a sophisticated computer security attack that exposed records containing personal information. "
The fact that the notification went out less than 24 hours after their president was informed is telling. It could mean that relevant information did not trickle up fast enough, or that the institution has a very well developed incident response plan, very strong senior leadership, or both. Assuming it is the latter, such fast notification deserves compliments: publicly acknowledging a data breach less than 24 hours after the top-level official becomes aware of it is commendable.
"A specific database of records [...] was breached yesterday."
Detecting a data breach in less than 24 hours is a fantastic job. Although it must have been a really bad day for their security team, their analysts can be proud of a job well done.
"That database contained 309,079 records of faculty, staff, students and affiliated personnel from the College Park and Shady Grove campuses who have been issued a University ID since 1998. The records included name, Social Security number, date of birth, and University identification number. No other information was compromised -- no financial, academic, health, or contact (phone, address) information."
This is extremely detailed and definitive information. It appears that the institution has a good grasp of the data in its custody, and that it was able to pull these numbers together very quickly. While that may appear trivial, it is often very complex to do in an enterprise environment.
"The University is offering one year of free credit monitoring to all affected persons. Additional information will be communicated within the next 24 hours on how to activate this service."
The credit monitoring deal is more or less expected these days. What is notable is that, again, there is a strong commitment by the leadership to be unambiguous and (very) timely. The statement clearly says what will happen and by when.

They continue with a warning that is very appropriate:
"University email communications regarding this incident will not ask you to provide personal information. Please be cautious when sharing personal information."
Having this amount of personal information breached will, most likely, lead to targeted phishing attacks, if nothing else. Including this warning might not help all that much, but at least they tried! The fact that they limit the warning to email communication is a little troubling, though.
"We recently doubled the number of our IT security engineers and analysts. We also doubled our investment in top-end security tools. Obviously, we need to do more and better, and we will. "
For a public statement by a person in a senior leadership position to distinguish between engineers and analysts is something we do not see all that often. Engineers build, and should be involved in any software development, adoption, or usage decision making. Analysts monitor for signs of trouble and investigate alarms. Both roles are important and should not be confused.

"Recently" is not further quantified, so we really cannot tell much from that aspect of the statement.

Finally,
"Again, I regret this breach of our computer and data systems. We are doing everything possible to protect any personal information that may be compromised."
The statement owns the fact that something bad happened. It does not try to cover up, minimize, or even deny that something unfortunate has happened. It also speaks of a commitment to minimize the impact of the damage that has been done.

All in all, I feel sorry for the University of Maryland that they have to go through this, but their initial response to the breach seems to be commendable, and is a sign of strong leadership in a time of crisis.

Friday, January 24, 2014

Cloud Services and Business Continuity

The fact that cloud services are popular deserves no further attention. Plenty is written about that, including elaborate pieces about security and business continuity. However, as much as people are aware that using cloud services may introduce risks to availability, very few realize what the impacts of an outage can be. 

Today's Gmail outage, as brief as it was, illustrates that.

Corporate IT typically has visibility and control of at most 5% of the entire infrastructure needed to deliver cloud services. A good service provider with extensive vertical integration may control up to about 15%. That still means that the remaining 80% is beyond either party's direct visibility and control. 

While these numbers are nothing more than a (well-informed) estimate, they reflect something that is a little scary when you really consider it.

I make it a point to really outline this risk during a product selection process, and again at our annual business impact analysis meetings. It is important to make sure that all stakeholders explicitly acknowledge (and accept) the fact that cloud services will, at some point, be unavailable, and that alternative processes must be in place to accommodate for that.

For some reason, many business units are more inclined to believe an external service provider's claim that an outage must be the fault of "your IT department", even when that claim can be countered with proof. Having acknowledgement (and acceptance) of outage risk in writing, whether in formally accepted meeting minutes or in email messages, is very useful and may be a career saver.

Thursday, November 7, 2013

Readings on Cryptography

Cryptography is sometimes referred to as the first line of defense in cyber security, and sometimes as the last. Both are true, depending on perspective. Either way, it is hard to argue with the opinion that modern cryptography has tremendous benefits, if it is implemented well. On the flip side, if it is done wrong, cryptography adds nothing more than complexity, and it creates a (false) sense of security that may actually harm you in the long run.

Cryptography is as much about choosing the right cipher, as it is about getting the operational processes in place, and sticking to them.

As the people around me can attest: statistics and mathematics are not my strong suit. Yet, in light of the whole "our spying agency spies!" discussion, I have been doing a lot of reading about cryptography lately.

I do not claim that I am a cryptographer. Even more so, I claim that I am definitely not a cryptanalyst.

However, any information security / cyber security practitioner should at least be aware of the history of cryptology (cryptography + cryptanalysis), as well as have some level of understanding of what crypto can (and cannot) do.

Having an understanding of the mathematics behind cryptography is generally not needed. Having a good understanding of crypto operations, however, is a must.
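To make "operations over mathematics" concrete: the sketch below, which assumes the third-party Python cryptography package (pip install cryptography), leans on a vetted authenticated-encryption recipe rather than assembling ciphers, modes, and MACs by hand. The only burden left is operational: storing and rotating the key.

```python
# A minimal sketch assuming the third-party "cryptography" package.
# Fernet is an authenticated-encryption recipe, so there are no
# ciphers, modes, IVs, or MACs to pick (and get wrong) yourself.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the operational part: store and rotate this safely
f = Fernet(key)

token = f.encrypt(b"some private record")
print(f.decrypt(token))      # b'some private record'

# A tampered token raises InvalidToken rather than silently returning
# garbage -- that is the "authenticated" part of the encryption.
```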

My reading list:

The Code Book, by Simon Singh. A great place to start. The book strikes the right balance between history and anecdote, and it illustrates some of the more common cryptographic elements found in many textbooks, but in an easy-to-read, easy-to-follow format. Highly recommended.

The Codebreakers, by David Kahn. Arguably the most comprehensive writeup on the history of cryptography. The book is loaded with historic facts, anecdotes, and explanations of ciphers and codes. It really does a great job of illustrating that the whole NSA spying story is nothing new; espionage, intercepts, and code breaking have been happening for thousands of years. We have just gotten a lot better at it lately, and since we communicate more than ever, the reach (and impact) of automated spying is much larger.

Understanding Cryptography, by Christof Paar. Here we shift from gentle storytelling to hard-core math. Not for the faint of heart, but since the book is paired with video lectures on the author's website, it is actually very informative.

Code Breaking, by Rudolph Kippenhahn. The jury is still out on this one. I have only just started reading it, but so far, it seems to fit somewhere between The Code Book and The Codebreakers.

The interesting part is that each of these books is affordable: from about $11 for the cheapest (The Code Book) to about $50 for The Codebreakers. Given the amount of value you get from each of them, they are absolutely a good deal.

Any other reading recommendations are highly appreciated.


Tuesday, May 28, 2013

Two Factor Authentication Adoption

Two factor authentication appears to be one of the current hot topics in information security. Many of us have been complaining for years, if not decades, that password-based authentication is weak and that it should be abandoned sooner rather than later.

As a quick summary: authentication is the process of verifying a claim of identity. That verification can be achieved through a properly designed process that combines multiple "factors". These factors can include physical tokens (something you have), knowledge of some shared secret (something you know), and something that is uniquely measurable about you as a person (something you are). There are more factors, but, as of yet, they are not as common. One that seems to be gaining some traction is somewhere you are (geo-fencing), but technically that is more of an access control mechanism than an authentication mechanism.

An easy example that we are all familiar with is an ATM card. A person needs a physical token (the ATM card) and a shared secret (the PIN code) to withdraw cash from a machine. Just having the card is not enough, nor is just knowing the PIN code.

There are plenty of ways to get around the card+PIN limitation (a gun to your head will work nicely...), but in general, combining two factors could be sufficient to achieve an acceptable level of security.

Note how U.S. credit cards, which merely require possession of an item (the card) or knowledge of the information printed on it, contrast with many E.U. credit cards, which require both the card and a PIN code.

Unfortunately, when we transition to an online world, we run into some trouble.

Two Factor Authentication typically requires some form of specialized hardware. Specialized hardware tends to be expensive and prone to error, and people generally don't like having to worry about yet another single-purpose gadget.

Lately though, the limitations of using two-factor seem to be less of a problem than they have been. People carry cell phones, and most phones are now fully matured computing platforms. Recently, Twitter jumped on the bandwagon by providing two-factor authentication based on text messaging a randomly generated number to the cell phone on record. Nothing beats true randomness. And while cell phones can be cloned and malware can be written to read cached SMS messages, there is little doubt that 2FA is still better than just relying on a single password.

Google has been supporting two-factor authentication based on open standards for quite a while now using a dedicated smart phone App, or by using pre-generated transaction numbers.

Because Google decided to build its two-factor authentication fully on open standards, it is actually ridiculously easy to integrate with. I was able to write a Google 2 Factor integration backend in Python in less than two hours on a rainy Saturday evening. And because it is based on open standards, it is seeing wider adoption in the consumer market than any 2FA mechanism I am aware of ever has.
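My actual integration code is not reproduced here, but the core of such a backend is genuinely small. Below is a minimal sketch of an RFC 6238 (TOTP) verifier using only the Python standard library; the base32 secret is a made-up demo value, and a production backend would add rate limiting, replay protection, and tolerance for clock drift:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Compute the current RFC 6238 TOTP code for a base32-encoded secret."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second intervals since the
    # Unix epoch, packed as a big-endian 64-bit integer (RFC 6238,
    # building on the HOTP construction from RFC 4226).
    moving_factor = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, moving_factor, hashlib.sha1).digest()
    # RFC 4226 "dynamic truncation": the low nibble of the last byte
    # selects a 4-byte window in the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted):
    # Compare in constant time to avoid leaking timing information.
    return hmac.compare_digest(totp(secret_b32), submitted)

# "JBSWY3DPEHPK3PXP" is a demo secret, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```

Any authenticator app that implements the same two RFCs will produce exactly the same codes, which is the whole point of the next paragraph.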

Unlike dealing with RSA (or one of its competitors), using the Google Authenticator as a platform for integration is virtually risk-free from a vendor lock-in perspective; since everything is based on open standards (RFC 4226 and RFC 6238), you don't depend on Google's offerings AT ALL.

Now that it is becoming normal to offer 2FA as an opt-in mechanism, there is really no reason why, in a corporate setting, 2FA should not be more widely used. The complexity of managing multiple factors is still there, and there are still many ways to get it wrong, but since user acceptance seems to be growing at the moment, it is really a good time to start playing with this technology and to take it past the technology playground level.

The need for dedicated devices is also diminishing. Cell phones are becoming smarter, and people are becoming smarter in the way that they use them.

Two Factor Authentication is not a silver bullet. There are still many ways in which to get it wrong. But, by not doing anything, we're not going to learn, and we're not going to advance. It is time to get started. Begin with your password reset/recovery mechanisms, and take it from there.

Sunday, April 28, 2013

Access Control and Service Oriented Architectures

What feels like an eternity ago, Access Control and Service Oriented Architectures was the title of my PhD thesis. While cleaning out some old SVN repositories on my home server before wiping and reinstalling it, I found a PDF copy of my thesis.

The PDF was never published in full, for reasons that I do not recall. Either way, if only to make sure the work is not lost, I am posting it here now. If you are interested in the topic: go ahead and read it. However, as one of my former co-workers once said: "PhD theses are meant to be written, not be read."

Don't say I didn't warn you ;)

You can download the thesis here.


Friday, April 26, 2013

When to Declare an Information Security Incident and How to Respond When You Do

In addition to the presentation with Don Becker and Vlad Grigorescu, I presented at this year's EDUCAUSE/Internet2 Security Professionals (ESP) Conference with Bob Henry.

This talk was of a more introductory nature, and stressed the need to have an incident response plan in place before things go bad. Any time that you can be in a position where you are responding in a premeditated way, rather than reacting and having to improvise on the spot, you are better off.

My role in the presentation was to talk a little about the high-level cycles that pretty much all attacks go through, and about what we, as defenders, can do to try to prevent those attacks from being successful or, failing that, to limit the damage that they do.

Bob then took the foundation that I built and went through a case study of an actual breach that he worked.

The slides are available at the EDUCAUSE web site.