Why exactly should you be afraid when a Turing-complete language is used to do computation by people who do not understand more complicated languages?
I do not see the point of the article.
In fact, we should instead celebrate that Excel is used for so many things that could be better automated - just think about the business opportunities!
EDIT: and yes, I read the article, and it specifically points to a coding mistake. When I work with Excel (sometimes I still do!) I create references to the cells holding the formulas, feed them some known input, and compare the results to what I expect - if the computations do not match, the formula may have been overwritten, so I print an "ERROR DETECTED" message in a special "TEST" field next to the cells holding the values.
Congratulations - you have a test suite!
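For example (the cell addresses are just placeholders): with a known input wired into the formula under test, its result pulled into C2, and the hand-checked expected value in D2, the TEST cell next to them can be as simple as

    =IF(C2=D2, "OK", "ERROR DETECTED")

If someone overwrites the formula with a constant or breaks a reference, the test cell flags it the next time the sheet recalculates.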
And that's just one way. Sometimes when I have to write a different implementation I just keep the previous implementations of the same algorithm somewhere on the sheet, use the same input, and subtract the results - any non-zero value should get you worried enough to display an error message. This catches new edge cases that your test suite could miss, and is especially useful if the initial algorithm was vetted in some way.
Congratulations - you have added regression testing!
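As a sketch (again, the cell references are only illustrative), assuming the old implementation's result sits in E2 and the new one's in F2, the comparison cell could be

    =IF(ABS(E2-F2)>0, "ERROR DETECTED", "")

or compare against a small tolerance instead of zero if rounding differences between the two implementations are expected.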
All this is easy to do, just a copy/paste special away. Even better: you can teach that process to non-coders and they will see how useful it is, because anyone who has used Excel for non-trivial things has been exposed to such problems and is wary of Excel's limits.
The tool is not the problem. Software development is a process, and if you do a poor implementation, without tests, without regression testing, guess what: it will bite you - but it would be the same in C, Perl, or Python if you try to do without at least tests.
TL;DR: There are many ways to make Excel more robust. Maybe the "experts" are not really experts if they have never been exposed to copy-paste-induced errors and never had to take care of such errors.
Did you read it? The problem is that for all its power, quirks about Excel as a product make it very easy to make entirely hidden errors. Too much magic in the hands of unsophisticated users can lead to trouble, but in Excel's case, even experts can make mistakes and not notice.
And when components of the global financial system are exposed to such risk, that gets problematic indeed, as the article details persuasively.
I've seen the process, and I know what the process is in one of the Big Firms. There is almost always someone above you to check your work, because the drudgery and pain of a model is farmed off to an analyst. The associates/seniors take a look at the work, and finally there is almost always a sanity check which is not trivial.
All of the major finance firms have lots of people who've burnt their fingers with Excel mistakes.
They know that their analysts and their associates can and will break something. (Heck, the analysts do so regularly - I'm sure there are several analysts freaking out over #REF! errors right now.)
The London Whale wasn't just because of mistakes in Excel. There were several things that broke here.
The assumption that the users are unsophisticated in the use of Excel is a bit naive. If anything, the most arcane applications and commands of Excel are likely best known in the major banking firms.
There were several things that broke with the London Whale. And spreadsheets are not to blame for things like LIBOR fixing, or late trading on mutual funds or any of the other fraudulent practices of big banking.
But still, spreadsheets are full of normal human errors, are based on flawed assumptions, have little oversight, and control huge amounts of money in novel instruments.
From EuSpRIG:
> "This market grew very quickly due to the ease with which it was possible to design and promulgate opaque financial instruments based on large spreadsheets with critical flaws in their key assumptions." Grenville J. Croll, 2009 [http://arxiv.org/abs/0908.4420]*
> "Complex financial instruments such as CDO’s are implemented within, and valued by, large and complex spreadsheets. CDO’s and other credit derivatives played a leading role in collapse of the global financial system”. Grenville Croll [http://arxiv.org/abs/0908.4420]*
> “Spreadsheets have been shown to be fallible, yet they underpin the operation of the financial system. If the uncontrolled use of spreadsheets continues to occur in highly leveraged markets and companies, it is only a matter of time before another ‘Black Swan’ event occurs [Taleb, 2001], causing catastrophic loss. It is completely within the realms of possibility that a single, large, complex but erroneous spreadsheet could directly cause the accidental loss of a corporation or institution, significantly damaging the City of London’s reputation”. Grenville J. Croll, 2009 [http://arxiv.org/abs/0709.4063]*
Because people make errors. This is well documented. For simple tasks, people have an error rate of about 0.5%, but for more complex tasks the error rate is about 5%.
So, when people are creating their own spreadsheets to get stuff done, and using that for multi-million dollar transactions, and not getting the software audited or approved, they're introducing risk to the company.
> When most people look at Tables 1, 2, and 3, their first reaction is that such high error rates are impossible. In fact, they are not only possible. They are entirely consistent with data on human error rates from other work domains. The Human Error Website (Panko, 2005a) presents data from a number of empirical studies. Broadly speaking, when humans do simple mechanical tasks, such as typing, they make undetected errors in about 0.5% of all actions. When they do more complex logical activities, such as writing programs, the error rate rises to about 5%. These are not hard and fast numbers, because how finely one defines reported "action" will affect the error rate. However, the logical tasks used in these studies generally had about the same scope as the creation of a formula in a spreadsheet.
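A rough back-of-the-envelope illustration of what a 5% per-formula error rate implies (the 150-formula sheet size is just an assumption for the example): the probability that a sheet with 150 independently written formulas contains at least one error is

    =1-POWER(0.95,150)

which comes out to roughly 99.95%. On these numbers, a non-trivial spreadsheet almost certainly contains at least one error somewhere.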