February 14, 2013

Microsoft Excel: the most dangerous software on Earth?

Filed under: Business, Economics, Technology — Nicholas @ 00:02

I’ve made this case in conversation several times, usually after having to forensically determine just why someone’s spreadsheet produced an unlikely answer: the greatest strength of spreadsheets is also their greatest weakness. Anyone who’s built a spreadsheet knows how easy it is to make a mistake, and how hard that mistake can be to detect after the fact. Spreadsheets are free-form: you can set up relationships on the fly, pulling data from one place to plug into a different formula somewhere else. It’s genuinely empowering to gain that much control over your data without having to learn a full programming language.

But that flexibility and power come at a cost: there’s no built-in error checking of your assumptions. Oh, it’ll alert you to practical problems like mismatched data types or mechanical errors in your formula, but it can’t tell you whether the operation you’re attempting makes sense. The program can’t read your mind and can’t sanity-check your work.

Do a spreadsheet for your family budget and you’ll almost certainly make a minor error or two.

Make a set of inter-linked spreadsheets and you probably double the chances of error for each new spreadsheet in the set.

Make a set of inter-linked spreadsheets that require manual copy-and-paste updates and you exponentially increase the chances of error.
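The compounding described above can be put on a rough quantitative footing. A minimal back-of-envelope sketch, assuming (purely for illustration — the figure is made up, not from this post) that each spreadsheet independently carries a 5% chance of containing at least one error:

```python
# Back-of-envelope model with an assumed, illustrative 5% per-sheet error
# rate: if each sheet in a linked set independently has probability p of
# containing an error, the chance the whole set is wrong somewhere grows
# quickly with the number of sheets.
def p_any_error(p_per_sheet: float, n_sheets: int) -> float:
    """Probability that at least one of n independent sheets has an error."""
    return 1 - (1 - p_per_sheet) ** n_sheets

for n in (1, 2, 5, 10):
    print(f"{n} sheet(s): {p_any_error(0.05, n):.1%} chance of an error somewhere")
```

With those assumed numbers, ten linked sheets have roughly a 40% chance of harbouring an error somewhere — and that’s before manual copy-and-paste adds its own failure modes.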

Then, make that manually updated set of spreadsheets have a real-world impact on vast amounts of money:

To give you an idea of how important this is, here’s a great tale from James Kwak:

    The issue is described in the appendix to JPMorgan’s internal investigative task force’s report. To summarize: JPMorgan’s Chief Investment Office needed a new value-at-risk (VaR) model for the synthetic credit portfolio (the one that blew up) and assigned a quantitative whiz (“a London-based quantitative expert, mathematician and model developer” who previously worked at a company that built analytical models) to create it. The new model “operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.” The internal Model Review Group identified this problem as well as a few others, but approved the model, while saying that it should be automated and another significant flaw should be fixed. After the London Whale trade blew up, the Model Review Group discovered that the model had not been automated and found several other errors. Most spectacularly,

    “After subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR . . .”

To translate that into the vernacular: the bank, JP Morgan, was running huge bets (tens of billions of dollars, what we might think of as a golly gee gosh that’s a lot of money) in London. The way they were checking what they were doing was playing around in Excel. And not even in the Masters of the Universe style that we might hope for, all integrated, automated and self-checking, but by cutting and pasting from one spreadsheet to another. And yes, they got one of the equations wrong, as a result of which the bank lost several billion dollars (perhaps we might drop the gee here, but it’s still golly gosh that’s a lot of money).
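The arithmetic behind that “factor of two” is easy to verify. A minimal sketch with made-up rates (the actual figures are not public): the sum of two numbers is always exactly twice their average, so dividing by the sum instead of the average halves every result.

```python
# Illustrative only -- the rates are invented, not JPMorgan's data.
# The modeler intended to divide the rate change by the AVERAGE of the
# old and new rates; the spreadsheet divided by their SUM. Since
# (old + new) == 2 * average(old, new), the buggy result is always
# exactly half the intended one, muting volatility by a factor of two.
old_rate, new_rate = 0.020, 0.025

intended = (new_rate - old_rate) / ((old_rate + new_rate) / 2)  # as designed
buggy = (new_rate - old_rate) / (old_rate + new_rate)           # as shipped

print(intended, buggy)  # buggy is always intended / 2, whatever the rates
```

No amount of eyeballing the spreadsheet’s output would make this obvious: every number is plausible, just systematically half what it should be.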

And it’s not just JP Morgan: every financial firm, every bank, every brokerage uses Excel (or another spreadsheet program). Multiply JP Morgan’s experience by the number of companies to get a rough idea of how much is at risk from un-audited (possibly even un-auditable) financial models running on spreadsheets.

1 Comment

  1. Mid-1990s Lotus 1-2-3 v5 was way better than Excel (at least compared to the 2002 edition). Actual D-Functions worked directly (without a clumsy third Criteria Range). The offset column- or row-counting that Lookups require was something to be laughed at! I can’t count the number of times a standard monthly table would change for new or removed item lines, or for an alteration at management’s request (e.g., this time display two past months instead of one, or both prior forecast and budget columns).

    Cell-editing plain numbers in Lotus, without incurring the ‘missing equal sign’ error whenever a second number was applied, was intuitive. So were the top-row colour tabs (tab colour was later brought to 2002 Excel) — helpful for visual coding by region, etc. Nor was its pre-95 macro language both overpowered and yet unclear.
    Manual Cut and Paste?
    Automatic cell updating or refreshing is another problematic Excel default. Often you just want to re-confirm a completed spreadsheet’s figures, not silently change one to match the other. That should always be done manually, either as a direct action or via a manually started macro.
    So too is Excel’s practice of abbreviating displayed linked-spreadsheet references problematic. Once you have three models open at the same time, where exactly is a specific cell’s linked number coming from? Excel still knows, but the person using the spreadsheets is operating from memory and assumptions. In fact, I had trouble enough just trying to read Excel’s version of a cell formula while following the actual cell positions.

    I would love it if some of those older yet better software programs were given the simplest of (un-bloated) updates to keep them current for today’s computing environment. I recently came across the Windows version 1.0 clock program (from 1985!) updated in just that way.

    Comment by CQ — February 14, 2013 @ 19:28
