Lessons from the BP oil leak

The BP oil leak has been going on for weeks now, with no end in sight. As the saying goes, wise people learn from others’ mistakes, and this disaster holds a couple of important lessons for organizations.

Before we proceed, please note that I am going only by what I have read in the media about the BP oil leak, and some of those reports might turn out to be wrong. I am using BP only as an example, so it is important to focus on the point I am trying to make rather than on BP or its problems.

The first thing that comes to mind is the incident response plan. Even after weeks, no one seems to know how to stop the oil from leaking. Did they not consider the risk of this happening? Let us assume for a moment that they did and had come up with a possible solution. Why is that solution not working? Did they not test it before approving the plan?

The take-away for organizations is to have a documented incident response plan. Just having the plan is not enough: it needs to be tested regularly to ensure both that everyone knows what they are supposed to do and that the plan will actually work. For IT, this has been industry best practice for a long time, and standards such as PCI-DSS require organizations to have an incident response plan and to test it at least annually.

My experience here has been mixed. While many organizations do have such a plan in place, a significant number do not, and they are left scrambling when a breach or some other incident occurs.
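To make “tested regularly” concrete, here is a minimal sketch in Python of the kind of automated freshness check a team could run against its plan. The JSON plan format, the last_tested field, and the one-year limit are assumptions for illustration, not a prescribed standard.

```python
import datetime
import json
import sys

# Assumption: the plan lives in a JSON file with a "last_tested" ISO date
# and a list of "steps", each naming an "owner". This format is hypothetical.
MAX_DAYS_BETWEEN_TESTS = 365  # e.g. an annual-drill policy


def check_plan(path: str) -> list[str]:
    """Return a list of problems found in the incident response plan file."""
    with open(path) as f:
        plan = json.load(f)

    problems = []

    # Flag the plan if it has not been exercised recently enough.
    last_tested = datetime.date.fromisoformat(plan["last_tested"])
    age = (datetime.date.today() - last_tested).days
    if age > MAX_DAYS_BETWEEN_TESTS:
        problems.append(f"plan last tested {age} days ago; schedule a drill")

    # Every step needs a named owner, or people will scramble during a breach.
    for step in plan["steps"]:
        if not step.get("owner"):
            problems.append(f"step '{step['name']}' has no named owner")

    return problems


if __name__ == "__main__":
    issues = check_plan(sys.argv[1])
    for issue in issues:
        print("WARNING:", issue)
    sys.exit(1 if issues else 0)
```

Run from a scheduled job, a check like this turns “we should test the plan sometime” into an alert that someone actually sees.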

Another thing that has appeared in the news about BP’s testing of safeguards is that corners were cut to save time and money. For instance, a former safety inspector said that a component’s pressure-handling capability was supposed to be tested for around five minutes, with the results recorded on a device similar to a seismograph. In practice, a tester would run the test for, say, 30 seconds and speed up the recording device so the chart appeared to cover the full five minutes.

The take-away here is that you cannot cut corners, especially in testing. All that does is postpone the inevitable: something will go wrong, and it may go wrong at the worst possible time. It is far better to catch a problem in testing and fix it than to wait for a real-life blowout.
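In software terms, one way to stop this kind of shortcut is to let the test harness, not the tester, own the clock and the record. Below is a minimal sketch; the read_pressure callable, the five-minute duration, and the pressure threshold are illustrative assumptions.

```python
import time


def pressure_soak_test(read_pressure, duration_s=300, interval_s=1.0,
                       min_reading=2000.0):
    """Run a soak test for the full required duration.

    `read_pressure` is a hypothetical callable returning the current sensor
    reading. The harness samples at a fixed interval and keeps every
    timestamped reading, so a run that was cut short is visible in the data
    rather than hidden by a sped-up chart.
    """
    readings = []
    start = time.monotonic()
    deadline = start + duration_s
    while time.monotonic() < deadline:
        value = read_pressure()
        readings.append((time.monotonic() - start, value))
        if value < min_reading:
            raise AssertionError(
                f"reading dropped to {value} at t={readings[-1][0]:.0f}s")
        time.sleep(interval_s)
    return readings
```

Because every reading carries its own timestamp, a 30-second run can never pass itself off as a five-minute one.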

One of the causes of the BP leak was that a device that was supposed to shut off the pipe in the event of a problem did not work. It had been found defective during checks but was never repaired. So, instead of spending a few hundred thousand dollars on a fix, BP will now have to spend billions, and it will take a significant hit to its reputation as well.
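The software analogue of that unrepaired shut-off device is a safeguard that fails its self-test but is allowed to keep “protecting” the system anyway. Here is a minimal fail-closed sketch; the registry of self-tests is a hypothetical stand-in.

```python
import logging

logger = logging.getLogger("preflight")


class SafeguardDefective(RuntimeError):
    """Raised when a critical safeguard fails its self-test."""


def preflight(safeguard_tests):
    """Run every safeguard self-test before starting risky work.

    `safeguard_tests` is a hypothetical mapping of safeguard name to a
    zero-argument callable that returns True when the safeguard is healthy.
    The check fails closed: a defective safeguard halts the operation until
    it is repaired, instead of being logged and deferred.
    """
    for name, is_healthy in safeguard_tests.items():
        if not is_healthy():
            raise SafeguardDefective(
                f"{name} failed its self-test; repair it before operating")
        logger.info("%s passed self-test", name)
```

Failing closed makes the cost of a known defect visible immediately, which is exactly the pressure needed to get it fixed.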

It is always better to be proactive and prevent problems than to be reactive and try to fix them after they have manifested themselves.