The various digressions I got into in my previous post on this thread have led me to write some more code to investigate another problem (mentioned briefly in an earlier post) in more detail.

But first I want to waste a little of your time explaining my notions about "formal math" vs "numerical simulation" (I put these in quotes because I'm probably not using exactly the right terms). My notion here is that mathematicians (or math-oriented finance researchers) want to crank abstract math (isn't all math 'abstract'?), i.e. generate "proofs" via their long-established methodologies. In contrast, one can crank a bunch of numbers through an algorithm in a computer to investigate the same issue, BUT the math types will denounce the computer simulation as insufficiently rigorous; as far as they're concerned it's just a guess.

So here's a simple example of that idea from my copy of *In Pursuit of the Unknown: 17 Equations That Changed the World* by Ian Stewart (the following formula is in the book but appears here courtesy of Wikipedia):

e^(iπ) + 1 = 0

This equation, known as Euler's Identity, can illustrate my point. Presumably the incredible mathematician Euler proved this identity, which to mathematicians means it's totally true (even in alternate universes). Now e and i and π are some very interesting quantities, so the fact that they would be related this way is far from obvious. e and π are known as transcendental numbers. While 1/3 is an infinitely repeating fraction 0.3333333… (forever more 3's), transcendental numbers also have infinitely many digits, but their digits never settle into any repeating pattern (they can be computed, however).

So a computer guy might "prove" Euler's Identity (no one actually would; I'm just using this scenario to demonstrate my point) by computing e to the iπ via numerical methods. Since neither e nor π has a nice pretty value like 1.0 (exactly), the computations in a program must use some finite number of digits. Now there are algorithms to compute e and π to any arbitrary number of decimal places, but no matter how many we compute (say 1,000,000) there is still some small error when we plug these values into Euler's Identity, and thus the left-hand side, known through formal math to equal exactly 0, will actually come out to some finite (albeit small) value other than zero.
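In fact you can watch that happen with ordinary hardware doubles. A minimal sketch in Python (any language with complex arithmetic would do the same):

```python
import cmath

# Evaluate e^(i*pi) + 1 in ordinary double precision.
# Formal math says this is exactly 0; numerically, pi is only
# a 53-bit approximation and exp() itself rounds, so a tiny
# residual survives.
residual = cmath.exp(1j * cmath.pi) + 1

print(residual)       # imaginary part on the order of 1e-16
print(residual == 0)  # False: close to zero, but not zero
```

Nothing exotic here: `cmath.exp` and `cmath.pi` are standard library, and the only assumption is IEEE 754 double precision.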

That bugs mathematicians. But as we engineers say, "good enough for government work."

But sometimes relatively naive programmers don't actually understand computation in real computers using the built-in hardware number types. Now a third grader knows that 1/3 + 2/3 = 1, and if you compute that in C# with double-precision numbers you will, as it happens, get exactly 1.0, but only by luck: both fractions repeat infinitely (and in base 2 they're really icky fractions), so each gets rounded, and the final rounding of the sum just happens to land back on 1.0. Try the equally innocent 0.1 + 0.2 and the luck runs out: those decimal fractions are also infinitely repeating in base 2, and the sum is not 0.3. You might be deceived if you merely print the result (it may be displayed rounded), but if you look at the hex value of the actual double sum (or compute 0.3 minus the sum; it won't be 0) you'll see you got the wrong answer.
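A quick check in Python, whose floats are the same IEEE 754 doubles C# uses, shows both cases: 1/3 + 2/3 happens to round back to exactly 1.0, while 0.1 + 0.2 exposes the rot, and the hex bit patterns make the mismatch unmistakable:

```python
# 1/3 and 2/3 are each rounded, but the final rounding of their
# sum happens to land back on exactly 1.0 -- a lucky accident.
print(1 / 3 + 2 / 3 == 1.0)  # True

# 0.1 and 0.2 are infinitely repeating fractions in base 2, so each
# is rounded to the nearest double before we even add them.
total = 0.1 + 0.2
print(total == 0.3)   # False
print(0.3 - total)    # tiny, but not 0

# The raw bit patterns show the two values really differ:
print(total.hex())    # 0x1.3333333333334p-2
print((0.3).hex())    # 0x1.3333333333333p-2
```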

Now in my case I've bypassed that issue by not using hardware floating point numbers at all (they come nowhere near 1,000,000 significant digits), but that doesn't eliminate errors: my BigFloat class will still have them, since both e and π are going to come from adding up many terms, each of which will be slightly wrong.
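My BigFloat class isn't shown here, but Python's standard decimal module can stand in for it as a sketch of the point: build e by summing its Taylor series at 50 significant digits, do it again at 80, and the 50-digit run carries a residual error from all those slightly-wrong terms. (The function name and the cutoff choice are mine, purely for illustration.)

```python
from decimal import Decimal, getcontext

def e_series(prec):
    """Sum the series e = 1/0! + 1/1! + 1/2! + ... at prec digits."""
    getcontext().prec = prec
    cutoff = Decimal(10) ** (-(prec + 5))  # terms below this can't matter
    total = Decimal(0)
    term = Decimal(1)
    k = 0
    while term > cutoff:
        total += term   # each term 1/k! was itself rounded to prec digits
        k += 1
        term /= k
    return total

e50 = e_series(50)
e80 = e_series(80)

getcontext().prec = 80
err = abs(e80 - e50)  # residual error of the 50-digit computation
print(e50)
print(err)            # tiny, but not zero
```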

So I get it. Mathematicians can precisely say what their proofs mean; numerical calculation on a computer is always subject to various errors and thus can never “prove” anything. It can, however, still yield practical results as long as we’re careful in coding to understand the errors and minimize them.

But here was my point with Black-Scholes: yes, it's neat, closed-form math and therefore presumably true for all possible values. BUT, to apply it to the real world you're back in the realm of all those niggling little errors in computer programs, not to mention the much worse errors in your dataset (limited in scope, and at times outright wrong). So all the nice assumptions you made doing that beautiful math aren't worth diddly (or, IOW, are probably violated) when you apply "pure" math to the messy real world, especially when cranking data through computers.
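To put a number on "much worse": here is the standard Black-Scholes call-price formula in Python (the usual closed form, with math.erf supplying the normal CDF), fed some made-up illustrative inputs. Misjudge the volatility, the one input you can only estimate from noisy data, by a couple of points, and the price moves by many orders of magnitude more than the floating-point rounding we fussed over above.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Made-up inputs: an at-the-money call, one year out, 5% rate.
base = bs_call(100.0, 100.0, 1.0, 0.05, 0.20)    # volatility "really" 20%
bumped = bs_call(100.0, 100.0, 1.0, 0.05, 0.22)  # ...but we estimated 22%

print(round(base, 4))           # roughly 10.45
print(round(bumped - base, 4))  # a data-sized error, not a rounding-sized one
```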

Now this post is already way too long for me to get to my new simulation, so I'll make that another post, back-referencing this prologue, and thus close with this.

Not only did my first post trigger some ideas about a simulation, I also wanted to go back and reread another book I read a long time ago: *When Genius Failed* by Roger Lowenstein, the story of the collapse of LTCM. Now this is connected to all my discussion because: a) the very same Scholes the equation is named for was a principal in LTCM, along with various of his disciples, particularly Merton (who shared the Nobel with Scholes instead of Black, Black having died by then), so the story of LTCM is very much the story of the Black-Scholes equation, and even more broadly of this whole idea of beautiful theoretical math being applied in the real world and failing spectacularly; and b) all these guys and this financial theory were at work at exactly the same time and in exactly the same place as my financial training, i.e. MIT Sloan School in the 1970s. While I can't claim much personal familiarity with these "geniuses" (I was a lowly Masters student, not a PhD, and therefore not as heavily immersed in this stuff at the time), I probably was a better computer nerd than any of them, so I might be so bold, in such an apples-and-oranges comparison, as to say my view of these issues from the computer POV had perhaps as much validity as theirs from the formal math POV. And, of course, I can gloat, because they were proved spectacularly wrong, tripped up by exactly the issues that arise in real computer programs running on real data applied to real-world problems.

So in my next post in this series I'll finally get to the model I've already built, which will be the basis for expanding into more general solutions. So bye for now.