What everyone is overlooking about Apple Pay

I used to be a big fan of Apple, but as they drifted into a new world my enthusiasm dropped. I have some current Apple products but I pick and choose, so naturally I have the superior Kindle (and its content ecosystem) over iBooks (and its corrupt pricing). The whole incident with books shows you exactly who the new Apple is. Since Amazon had a huge lead and Apple, in books unlike in music, was not highly regarded, they used a simple trick: bribe the publishers to do deals with Apple instead of Amazon. Back when they did this I kept screaming: look what they did to music publishers. When Apple is the monopolist they squeeze their suppliers; when someone else is the leader Apple appears to be on the side of the suppliers, until, of course, Apple wins, and then they'll put on the squeeze.

This is the overlooked issue in Apple Pay. People are looking at what Apple is doing TODAY, while they're trying to ingratiate themselves into controlling your money, and ignoring what they will do in the FUTURE when they hold the monopoly. We've seen this before, over and over, especially at Facebook: old policies designed to attract users are quickly reversed once Facebook can monetize something by changing them, and, poof, guess what, your EULA changes at the whim of Facebook and you have no choice but to accept it.

Do you really think Apple, once they own retail, won’t collect personal information and sell it? Do you really think they will leave those billions lying on the table and not scoop them up?

Sure, they're saying no collection of personal information TODAY, but what happens when their system is nearly universal? Do you have an ironclad, court-enforceable contract that says they can't change that policy? And are you stupid enough to believe they won't, once they can get away with changing that policy, tracking everything you do, and selling it to the highest bidder?

What people need to realize about modern mobile technology is YOU ARE NOT THE CUSTOMER. You are THE PRODUCT. You exist to be mashed and molded into something that can be sold to those enterprises who ARE the customer. Whatever "benefits" you believe you're getting are not goodness-of-the-heart gifts (from businesses that certainly have no heart, only profit maximizing); they can be recalled in an instant, the very instant when whatever adverse effects the policy change has are less than the benefits. Sure, a few people may protest Facebook and drop out; they counted that in their model. But most of the rest of the people are sheep and will go along, and Facebook gets $N.MM dollars each. Something lost, something (more) gained. Done deal.

Why is everyone so naive about this, having seen it over and over? Do you think Apple is doing this just because it's cool? Isn't it more plausible that they will say whatever they have to to gain entry to your wallet, and once it's in their possession, do whatever they want to make money? Apple Pay costs them something; where's their return?

And as we're now seeing, Apple is the iPhone company, not really much else. Their other product lines are minor. Apple knows this but much of the world doesn't. So it's no big surprise that their latest mandatory iOS updates break older phones. When you have one product, and that product can physically work for years, but the only way you get ever-increasing sales is to get people to discard their old phone for a new one, AND you've run out of real new ideas and/or fashion tricks (isn't a gold case just totally the reason to upgrade, oh sure), then forced obsolescence is the only marketing trick left.

So you lock yourself into Apple Pay, and somehow merchants get browbeaten into using it, and then Apple controls most retail payments. Guess what: you'd better plan on a new phone every six months as Apple breaks your old one and now you can't get your coffee. Don't think they'll do this? Exactly why not? Because they're nice guys (sure)? Because of legal reasons (exactly which are those)? Because of bad customer relations (sorry, you're not the customer anyway)? Because of competition (sure, you'll switch to Beta because you're mad at VHS)? Apple knows we're sheep, just as Facebook and Twitter know we're sheep, and their business strategy is simple: 1) tell us what we want to hear in order to get a hold on us, 2) change policies to whatever makes them the most profit.

I hope I can go to my grave without ever buying an iPhone (or worse, an iPad). OTOH, I hope I can get iPods forever. And with Microsoft building junky OS's now, I might even switch back to Macs. But the iPhone is drugs and I refuse to be an addict. I don't much want Android or Microsoft either; I don't want to be a phone zombie from any vendor (it's getting to where I may have to budge on this, but then I'll get the cheapest POS I can possibly stand). So any attempt Apple makes to require me to use their iPhone: well, screw them. They can yank my credit card (or even cash, or my Starbucks card) out of my cold dead fingers, or I'll just go off the grid before I'm forced to use Apple as my bank (hey, Goldman Sachs, scumbags that they are, aren't even in the same league as Apple when it's a monopoly).

So stuff it, Apple. And folks, get a clue: forget your trendy fashion-statement sense of Apple (hey, Steve has gone to the great alternative medicine and design center in the sky; today's Apple has no heart at all). Don't help Apple run the world and then eventually tax you more than your state or federal government could dream of. Stop Apple Pay!


I was right! -2

The various digressions I got into in my previous post on this thread have led me to write some more code to investigate another problem (mentioned briefly in an earlier post) in more detail.

But first I want to waste a little of your time explaining my notions about "formal math" vs "numerical simulation" (I put these in quotes because I'm probably not using exactly the right terms). My notion here is that mathematicians (or math-oriented finance researchers) want to crank abstract math (isn't all math 'abstract'?), i.e. generate "proofs" via their long-established methodologies. In contrast, one can crank a bunch of numbers through an algorithm in a computer to investigate the same issue, BUT the math types will denounce the computer simulation as insufficiently rigorous; as far as they're concerned it's just a guess.

So here’s a simple example of that idea from my copy of In Pursuit of the Unknown: 17 Equations That Changed the World by Ian Stewart (the following formula is in the book but appears here courtesy of Wikipedia):

e^(iπ) + 1 = 0

This equation, known as Euler's Identity, can illustrate my point. Presumably the incredible mathematician Euler "proved" this identity, which to mathematicians means it's totally true (even in alternate universes). Now e and i and π are some very interesting quantities, so the fact that they would be related this way is very unobvious. e and π are known as transcendental numbers. While 1/3 is an infinitely repeating fraction 0.3333333… (forever more 3's), transcendental numbers also have infinitely many digits, but their digits never settle into a repeating pattern (they can be computed, however).

So a computer guy might "prove" Euler's Identity (no one would; I'm just using this scenario to demonstrate my point) by computing e to the iπ'th power via numerical methods. Since neither e nor π has a nice pretty value, like 1.0 (exactly), the computations in a program must use some finite number of digits. Now there are algorithms to compute e and π to any arbitrary number of decimal places, but no matter how many we compute (say 1,000,000) there is still some small error when we plug these values into Euler's Identity, and thus the left-hand side, known through formal math to be exactly 0, will actually come out as some finite (albeit small) value other than zero.
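Just to make that concrete, here's the cheap version of that "proof" using ordinary hardware doubles (Python shown for brevity; it's the same IEEE 754 double underneath as C#). Formally the left-hand side is exactly zero; numerically, π gets rounded to finite precision, so a tiny residue survives:

```python
import cmath
import math

# Compute e^(i*pi) + 1 with hardware doubles. math.pi is only the
# nearest representable double to the true (transcendental) pi, so
# the formally-exact zero picks up a tiny imaginary residue.
z = cmath.exp(1j * math.pi) + 1

print(z)        # something like 1.22e-16j
print(abs(z))   # small, but NOT exactly zero
```

Good enough for government work, but it is not a proof, and no number of extra digits makes the residue exactly zero.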

That bugs mathematicians. But as we engineers say, "good enough for government work".

But sometimes relatively naive programmers don't actually understand computation in real computers using the builtin numbers in hardware. Now a third grader knows that 1/3 + 2/3 = 1, but if you actually try to compute that, say in C# with double-precision numbers, neither addend can be represented exactly (first because the fractions repeat infinitely, but second, and worse, in base 2 they're really icky fractions). Amusingly, in this particular sum the two rounding errors happen to cancel on the final rounding and you get exactly 1.0 back, by luck; but try 0.1 + 0.2 instead. You might be deceived if you merely print the result (it will probably appear as 0.3), but if you look at the hex value of the actual double sum (or compute 0.3 minus the sum; it won't be 0) you'll see you got the wrong answer.
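Here's the quick demonstration (Python for brevity; C# doubles behave identically, since both are IEEE 754 64-bit). Print the result and the deception looks fine; look at the hex and the lie shows:

```python
# The classic example: neither 0.1 nor 0.2 is representable in
# binary, and the rounding errors do NOT cancel here.
a = 0.1 + 0.2
print(a == 0.3)      # False
print(a - 0.3)       # tiny, but not zero

# The hex view shows the sum is one bit off from the double
# nearest to 0.3.
print((0.3).hex())   # 0x1.3333333333333p-2
print(a.hex())       # 0x1.3333333333334p-2

# And 1/3 is not an exact third either; it's this icky base-2 fraction:
print((1/3).hex())
```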

Now in my case I've bypassed that issue by deciding not to use hardware floating-point numbers (they hold nowhere near 1,000,000 significant digits), but that only pushes the problem back: my BigFloat class will still have errors (both e and π are going to come from adding up many terms, each of which is slightly wrong).

So I get it. Mathematicians can precisely say what their proofs mean; numerical calculation on a computer is always subject to various errors and thus can never “prove” anything. It can, however, still yield practical results as long as we’re careful in coding to understand the errors and minimize them.

But here was my point with Black-Scholes: yes, it's neat, closed-form math and therefore presumably true for all possible values, BUT to apply it to the real world you're back in the realm of all those niggling little errors in computer programs, not to mention the much worse errors (limited in scope, but at times flat wrong) in your dataset. So all the nice assumptions you made doing that beautiful math aren't worth diddly (or, IOW, are probably violated) when you apply "pure" math to the messy real world, especially when cranking data through computers.

Now this post is already way too long to get to my new simulation, so I’ll make that another post back-referencing this prologue and thus close with this.

Not only did my first post trigger some ideas about a simulation, I also wanted to go back and reread a book I read a long time ago: When Genius Failed by Roger Lowenstein, the story of the collapse of LTCM. This is connected to all my discussion because: a) the very same Scholes the equation is named for was a principal in LTCM, along with various of his disciples, particularly Merton (who shared the Nobel with Scholes; Black had died by then), so the story of LTCM is very much the story of the Black-Scholes equation, and more broadly of this whole idea of beautiful theoretical math being applied in the real world and failing spectacularly; and, b) all these guys were devising this financial theory at exactly the same time and in exactly the same place as I got my financial training, i.e. MIT Sloan School in the 1970s. While I can't claim much personal familiarity with these "geniuses" (I was a lowly Masters student, not a PhD, and therefore not as heavily immersed in this stuff), at that time I probably was a better computer nerd than any of them, so I might be so bold, in such an apples-and-oranges comparison, as to say my view of these issues from the computer POV had perhaps as much validity as their view from the formal math POV. And, of course, I can gloat, because they were proved spectacularly wrong, tripped up by exactly the issues that arise in real computer programs on real data applied to real-world problems.

So in my next post of this series I’ll finally get to the model I’ve already done which will be the basis for expanding into more general solutions. So bye for now.

 


I was right!

The idea for this post popped into my head through a series of connecting memories; IOW, I was doing something that triggered a memory and I quickly connected that to another idea/memory and then again. It’s amazing how fast one’s mind can jump across time (history) and ideas, starting at one place and ending up somewhere very different.

What the F are you talking about, you say! Well, I’ll get there and maybe it’s even interesting.

I was working on my vocabulary app and encountered a situation where normal statistical analysis didn't seem to make sense, since my dataset was changing as I was interacting with it (and extracting some statistics from it). I don't know of any "theory" that handles that case. But I realized, as I often do, that I could address this with simulation, i.e. a Monte Carlo type approach. And eventually I'll get back to how this is the point of this post.

For instance, I recently wrote a little simulation for a problem that I believe the theory (abstract math, lots of scribbling of strange equations) couldn't easily solve. It comes from a simple trivia game (like, but not, Trivial Pursuit). There are six categories of questions, selected by a roll of a die, and you have to get three correct answers to complete each category; you "win" when you complete all categories (IOW, 18 correct answers).

Now let's say, on average, I know the right answer to 50% of the questions. Does that mean that, on average, it will take me 36 questions to get the 18 right answers? Well, not exactly; remember that roll-of-the-die-to-select-category thing. Sometimes the die will require me to answer a question in a category where I already have all three answers, IOW an extra question not getting me closer to the final result (a correct answer does get me an extra turn, but that relates to how fast I might win, not how many questions I have to answer).

Now in case this isn't clear, think about a simpler case: two right answers needed in each of two categories, with 100% knowledge. Got to get four correct answers, so only four tries, right? No, wrong. If I get the two correct answers for one of the categories but still need to complete the other, the random selection of category can send me back to answering a question for the category I've already completed. IOW, calling the categories A and B (now use the die: 1,2,3 -> A, 4,5,6 -> B), in four rolls I could get the sequence ABAB or AABB, which would be nice, but here are all the possibilities: AAAA, AAAB, AABA, AABB, ABAA, ABAB, ABBA, ABBB, BAAA, BAAB, BABA, BABB, BBAA, BBAB, BBBA, BBBB, each sequence equally likely. But I'm "done" in four turns (given I always get the answer right) in only 6 of the 16 cases (the ones with two A's and two B's); for the other 10 I have to take at least one more turn (and for AAAA and BBBB I'll have to take at least two more). So my average over many trials works out roughly as: with p=6/16 I need only 4 turns, in the 8 three-and-one cases I finish on turn 5 half the time, and it gets messy after this, i.e. whenever I still need one more useful turn, 50% of the time the die wastes it (and for the AAAA and BBBB cases I'll definitely need at least two more turns). IOW, my average number of turns will definitely be > 4, but by how much?

Now maybe some great statistics-theory person can figure out the exact answer purely through abstract math, but I can figure out approximately the right answer through simulation (as an exercise I hope to do just that in this simplified case). The simulation answer will NOT be exactly right. The more trials I do (now using statistical theory), the closer my average answer will be to the "truth" (and by luck it could be exactly right), but short of an infinite number of trials (plus a really good random number generator) I won't get the precise answer. That's the real knock on simulation: how good is your result vs. the cost of running the simulation longer, AND can you actually know how good your result is?
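That simplified case is small enough to simulate in a few lines. Here's a sketch in Python (the trial count is arbitrary; and for what it's worth, grinding out the geometric tails by hand for this tiny case gives an exact expectation of 5.5 turns, which the simulation should hover around):

```python
import random

def turns_to_finish(rng):
    """One play of the simplified game: 2 categories, need 2 correct
    answers in each, 100% knowledge, fair die picks the category."""
    counts = [0, 0]
    turns = 0
    while min(counts) < 2:
        cat = rng.randrange(2)    # 1,2,3 -> A; 4,5,6 -> B, i.e. a coin flip
        if counts[cat] < 2:       # a useful question
            counts[cat] += 1
        turns += 1                # a wasted turn still counts
    return turns

rng = random.Random(42)
trials = 200_000
avg = sum(turns_to_finish(rng) for _ in range(trials)) / trials
print(avg)   # close to 5.5, but never exactly
```

Run it twice with different seeds and you get two slightly different answers; that's exactly the property the math types hold against simulation.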

So, back to the more complicated case of the game: I wrote the simulation, chose a few values for my "average" knowledge of the questions, and got some interesting outcomes:

foodieGameSim

I ran 2400 iterations of my simulation. Each iteration randomly picks a value for what % of the questions I'll get right (column A shows the specific values I used). Then it simulates a random roll of the die to pick a category, then a random right/wrong on the question, and continues until I've got 3 correct answers in all six categories. The graph on the right (and the numbers in column C) show the mean over all the simulations at that level of accuracy on the questions. Not surprisingly, the more accurate you are at answering questions the fewer total questions are required, BUT it might be surprising that, even answering every question correctly, it takes on average 32.3 questions to finally get the 3×6 right answers, nearly twice the minimum 18. That is the effect of randomly landing on a category where you already have three right answers, i.e. a "wasted" question.

Just in case you're wondering, column B shows the counts of how many simulations ran at each accuracy level; simplistically, 2400 iterations / 8 accuracy values = 300 tries at each value. But of course those aren't the actual numbers (see column B and the bar graph underneath it). This shows another issue with simulation: the statistically expected number of tries per accuracy level doesn't exactly happen in any given simulation (and my simulation took about 3 minutes to run). So I'd either have to: a) run many more iterations than 2400 so the counts are closer to the expected value, or b) run many iterations of the 2400-iteration simulation and average the counts. Even with days of compute time there would still be some deviation. IOW, even if my simulation is truly random, "theory" predicts the expected outcome of a measurement, yet in this simple case we deviated from it, and even a longer simulation would still deviate. That drives the abstract math guys nuts.

OTOH, I'll assert we have a "good enough" answer, and that the really useful result of this simulation is the graph on the right, even though: a) it's not precisely accurate, and b) it's based on only 8 values of question accuracy. The math guys want it to be precisely accurate for all possible values of question accuracy.
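For the record, the guts of such a simulation are only a dozen lines. Here's a sketch in Python rather than my actual spreadsheet-feeding code, with trial counts and accuracy values picked purely for illustration:

```python
import random

def questions_to_win(p_correct, rng, categories=6, needed=3):
    """One game: the die picks a category each turn; you answer
    correctly with probability p_correct; you win at 3 right answers
    in every category. Returns total questions answered."""
    right = [0] * categories
    questions = 0
    while min(right) < needed:
        cat = rng.randrange(categories)
        questions += 1
        # A correct answer only helps if the category isn't complete.
        if rng.random() < p_correct and right[cat] < needed:
            right[cat] += 1
    return questions

rng = random.Random(1)
n = 20_000
perfect = sum(questions_to_win(1.0, rng) for _ in range(n)) / n
half = sum(questions_to_win(0.5, rng) for _ in range(n)) / n
print(perfect)   # ~32 questions even with 100% knowledge, vs the minimum 18
print(half)      # roughly double that at 50% knowledge
```

The gap between `perfect` and 18 is entirely the wasted-category effect; knowing only half the answers then stacks a second layer of waste on top.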

And every now and then they can crank their abstract math (usually for idealized and simplified cases) and get equations that are "proven" and "right for all possible values". To a math person that's an answer; none of this numerical crap from simulations.

Whew, long explanation before I even get to my second thought in this chain.

Last night I finished the book In Pursuit of the Unknown: 17 Equations That Changed the World by Ian Stewart. The last chapter was "The Midas Equation", on the Black-Scholes equation. Of course this equation is familiar to me, having concentrated in finance for my MBA at the MIT Sloan School (one of the early "rocket science" finance departments). The book writes it slightly differently, but here's what Wikipedia has:

∂V/∂t + (1/2)σ²S²·∂²V/∂S² + rS·∂V/∂S − rV = 0

Gobbledygook, right? I'll save you, Dear Reader, the agony of my attempting to explain it (either the book or Wikipedia does a better job than I could anyway), but I will (briefly) make two points about the equation: 1) it radically changed finance and arguably is responsible for the 2008 crash, and 2) it's wrong (bodacious for me to say, as its authors got the Nobel Prize, but fortunately lots of other people say it's wrong too).

Now as to the first point: how could an equation radically change finance and then lead to a crash? Well, what this equation allows is calculating the price (supposedly the "correct" price) of the "financial weapons of mass destruction" (as Warren Buffett labeled them), i.e. derivatives. Now while trade has probably existed as long as humanity, money is a newer invention, and finance is much newer still (around for maybe only 2% of the time our species has existed). Originally money was a symbol for trade, more fungible and more divisible than bushels of wheat or shoes or slaves. But trade, and later finance, originally represented REAL goods. Then some clever people, seeking to scam the rubes, invented what I'll call "virtual" goods, i.e. eventually what we call derivatives. These virtual goods are pure and simple gambling, a bet on something about a real good (like what a bushel of wheat will cost next year, a bet no different than a bet on a football game or a poker hand). But, as these con artists saw huge profits, they suckered the majority of us, who live in the world of REAL goods and services, into believing we needed to allow these gambling bets to bring "liquidity" to markets.

It started with commodities. A farmer might need cash now, to buy seed and fertilizer and gas for his tractor, long before he has any crop to harvest, and thus he can't simply barter that REAL good for the REAL goods he needs. So the farmer would like to sell his wheat long before it is even planted or harvested. Meanwhile, a baker needs flour 365 days a year, not just when wheat is being harvested. And the baker, ideally, doesn't want to have to search for flour every day and/or worry about what price he'll have to pay (the baker would like to price his bread about the same every day). So the baker would like to "buy" flour before he needs it, that is, lock in the supply and price, but pay later and get the flour later.

So the farmer wants to sell when he doesn't actually have any wheat, and the baker wants to buy when he doesn't actually want the flour yet, so both parties would like a way to do this. Enter the idea of commodities futures and an exchange somewhere to trade these "futures" (future delivery of something at a known date for a known price). So the con artists invented something useful to the REAL world and thus got their foot in the door to steal our money through their gambling.

Well, it took a couple of centuries to go from commodity futures to modern derivatives, and for a while derivatives were still connected to REAL goods, and mostly they did work to stabilize markets, i.e. they had a social value, not just gambling. But in the second half of the 20th century things really took off. Computers happened and: a) made complex calculations possible, and b) provided the infrastructure for rapid trading in markets. AND financial theory, as epitomized by Black-Scholes, happened. With Black-Scholes, in theory, the farmer can determine exactly what selling a futures contract today is worth for wheat delivered in the future. If the market price is about that price then he knows it's OK to sell (and if the market price deviates from the calculated price, the farmer quickly turns into a speculator, buys or sells the contracts, and to hell with actually growing wheat).

So using this formula and computers, some clever con artists decided we could have derivatives on things that aren't even real; after all, a derivative is just a bet about some number (which might or might not represent a real thing). And since it does take a fair amount of education + math skill + audacity (aka willingness to steal from innocents), the more complicated the derivative, especially one not in standard form on some exchange, the more money the crooks (AKA Wall Street) can steal from the rest of us. The trouble is, greed never has a limit, and by 2008 the crooks had not only conned the innocent, they had conned themselves, and so absurdly worthless derivatives were actually treated as real assets. Of course when the whole house of cards collapsed, many of these totally fake and made-up "assets" turned out to be nearly worthless, thus requiring the taxpayers to come pay all the bad debts the crooks had created.

That's one side of Black-Scholes: how, like any discovery, knowledge can be turned into a weapon.

But is Black-Scholes even right, even if it could be used for some social value (not just to enrich crooks, i.e. bankers)? Lots of people say NO, and that will, eventually, be the point of this post.

There are two simple (and lots of subtle) flaws in Black-Scholes. First, see all those squiggly symbols: those represent continuous variables, functions, and operators. BUT the real world, esp. finance, consists of many discrete transactions. Yes, there are many of them, and it may be fair to treat them as though they were continuous, but often they're not. And this becomes especially true when the fundamental underlying assumption of the Black-Scholes equation is violated, namely that price changes are random, when in fact, in the real world, prices are rigged, esp. by expert crooks like Goldman Sachs. You can't build math on assumptions and say it still works when those assumptions are violated. So AT BEST Black-Scholes merely represents an idealized statement about an idealized market, and applying it to a real, dirty, discrete, and dishonest market can reasonably be judged a really bad idea (as events proved).
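The discrete-vs-continuous gap is easy to see in code. Below is a sketch (parameters invented purely for illustration) pricing a European call two ways: the continuous closed-form Black-Scholes formula, and its discrete cousin, a Cox-Ross-Rubinstein binomial tree, which only approaches the formula as the number of steps grows:

```python
import math

def bs_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # normal CDF
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S, K, r, sigma, T, steps):
    """Cox-Ross-Rubinstein binomial tree: the discrete analogue."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs, then roll back through the tree
    vals = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(len(vals) - 1)]
    return vals[0]

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
exact = bs_call(S, K, r, sigma, T)            # about 10.45
for n in (5, 50, 500):
    print(n, crr_call(S, K, r, sigma, T, n))  # converges toward `exact`
```

The tree prices wobble around the formula and only settle down for large step counts; in a market of finite, lumpy, sometimes rigged transactions, the clean continuous limit is the fiction, not the baseline.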

The second flaw is that for abstract math to represent the real world, the real world needs to supply lots and lots of data, because the real world is one specific instance of an idealized case; only with lots and lots of data is the deviation of the real world from the idealized model small. So, in the horrible (and IMO criminal) misuse of Black-Scholes, i.e. things like CDOs and CDSs, was there enough data from anyone to crank into this equation and pop out a meaningful price? Absolutely NOT. And in fact this shows one of the challenges of everyone's latest fad, BIG DATA. We now know the ratings agencies, who applied formulas like this to the toxic pieces of crap Wall Street invented, got it wrong: either they lied (after all, they were paid by the crooks) or they were just dumbshits (a popular claim that is very arrogant, i.e. Goldman Sachs hires the smartest people and S&P gets the GS rejects, so naturally the GS people can con the S&P people). That's a favorite trope but only a bit correct (it justifies the notion that the GS people got paid so much more, but not much else). The real flaw was data: S&P simply did not have enough data, AND WORSE, the data they did have was false.

Now Ian Stewart does a fine job of explaining "black swans", i.e. "fat tails" and other ways statistics can go wrong, but I'll deal with this other silly issue: that the data they used (wrongly) was itself inherently useless. The S&P people had to evaluate the insane liar-loan and reverse-accruing and variable-rate mortgages forced onto people who could neither afford them nor understand them (sub-prime) by the worst scumbag crooks, the mortgage brokers. But even though the brainy math guys knew some of the loans were crap and would never be repaid, they just cranked them through their models. And what historical data did they use (all models have to have data to mean anything)? Well, mostly nowhere near enough, i.e. less than two decades' worth (thus not covering any major financial crash, i.e. the Great Depression), AND typically the data was for PRIME, fixed-rate mortgages with substantial down payments. Give me a break: anyone past high school should know that dataset was completely inapplicable to subprime and non-traditional mortgages. But given it was the only data they had, they misused it, and that was the con. The ratings agencies should have spoken the truth: IT IS NOT POSSIBLE TO RATE THESE SECURITIES. Not that they weren't AAA (who knows, maybe they were) or that they were junk bonds. THEY DIDN'T KNOW. Their models and their equations were useless because their data was junk.

But everyone was so caught up in the mystique of this abstract math (and in the fact that its authors got Nobel Prizes, despite also having caused the collapse of LTCM just before the bigger 2008 crash; like, duh, the man on the street has enough common sense, no finance MBA/PhD needed, to know that's bullshit). And what it really was, was that the whole system (including the watchdogs) knew it was just a giant scam, but they could wave the Black-Scholes equation and the Nobel Prizes around and claim it wasn't (under the euphemism of "financial engineering"; I'd never fly in a plane built by aeronautical engineers as wrong and dishonest as the financial engineers, who, unfortunately, frequently come from the same college and department I attended).

And that gets me to the endpoint of the quick burst of thoughts I had (it took me seconds; if you've actually read this far, it took you 100x longer).

I was at Sloan in the early 70s. Data was scarce. Computer time was scarce and expensive (we're talking an IBM 360/65, millions of dollars for a computer that couldn't even play the tune in a throw-away greeting card). Grad students didn't get much access to either data or computer time. But I had the delusion, concentrating in finance, that I could use computers to predict markets and make a quick fortune. Through another source I had computer time. But I didn't have any data. Now there was a dataset, created by Merrill Lynch, that was hugely expensive (but available to research universities) and so was very carefully controlled. In order to get this data (and run it through my own models) I had to do a real project. And I got my chance. The big issue then was how to calculate beta (a simple idea; I won't bore you with it). And back then, pre-Reagan, we believed in regulation, at least to the point of transparency and telling the truth. So the SEC (maybe the FTC, I forget) wanted mutual funds to publish their beta, AND all of them would have to use the same formula (algorithm, actually) to compute it so you could honestly compare one mutual fund to another.

So, as is standard in research universities, the professors needed grunts to do the work, so I volunteered, in full disclosure with the hidden agenda of just getting access to the restricted data (which, btw, I never actually used for my own purposes, but that's because I'd gotten bored while doing my real project). Almost immediately, after writing some simple Fortran, I realized the same thing I showed in my simulation above, and the same thing Ian Stewart criticizes about using Black-Scholes: instead of nice pretty numbers popping out, the numbers were too discrete, based on too few values, to be meaningful.

For instance, if I calculated beta over a month (for a particular mutual fund) and then repeated that calculation over many months, I got beta values all over the map (but not enough values to build a histogram and even just eyeball whether it looked Gaussian). If, for the same mutual fund, I used two months (or some other interval; the longer the interval, of course, the fewer calculated values, since we typically had only a few years of data), those values significantly differed from the monthly values.

IOW, after diddling some, my answer was that there was no "accurate" way to compute beta, and all the various variations on the algorithm were irrelevant, so just pick one (consistency still might be relevant). Plus the heterogeneity of the data was an obvious problem. Say your algorithm is to compute beta over three-month intervals, then average those results over the lifetime of the mutual fund: what about a fund that has been in business for 10 years vs. one only in business for 3 years? Are you comparing apples to oranges? And mutual funds change. They get new managers. They split, they merge. They have a little money (so make fewer investments), then they get lots of money (so, just to meet the rules, have to make many more investments). In short, mutual funds are apples and oranges, and applying a single measure, no matter how it was computed, to compare them was futile.

IOW, I said what I claim S&P should have said: not pick some other rating than AAA, but pick NO RATING, because the attempt to rate was meaningless.

This shows why I never had a career in finance. You don't get paid for telling people it makes no sense to do something when the enterprise attempting it (and its top managers and their bonuses) will get paid a fortune to do it anyway, even though it is meaningless. Plus, in general, in finance you don't tell anyone what they don't want to hear, unless you're one of a few rare fund managers who get to be contrarian.

But, back to why I claim I was right: in this case the argument I really had with the prof was over simulation vs. abstract math. He was disappointed my preliminary paper was all just computer code, with no symbols scrawled all over, no proofs and derivations. Now: a) while I wasn't too bad at math, the abstract math used in statistics never sank in, so I didn't use it because I couldn't, and b) what I did know was that to make the math work you had to make silly assumptions that struck me as wrong (e.g. use a Poisson distribution, rather than Gaussian, because then the math works in closed form; SO WHAT, is the Poisson distribution the right answer?).

So I wrote a simple program. In the program I created “god’s truth” (just a cliche for the absolutely real answer, not that god had anything to do with truth). IOW, I started from a known beta. Then I generated, with some randomness, the actual daily price (share valuation) of the mutual fund, tied to a randomly generated market index (it took a while to get a function that approximated the chaotic behavior of the entire market – beta is computed relative to a market index, like the Dow (a terrible index), and all that is a different digression).

So having generated 10 years of daily price data (from a known index and a known beta, only with truly random variation) I could throw all the algorithms for computing beta at this. And I could do this for as many iterations as my computer budget allowed.
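The experiment described above can be sketched in a few lines today. To be clear, this is a toy reconstruction, not the original program: the return model, the true beta of 1.2, and the windowed estimator are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# "God's truth": the beta the algorithms will try to recover.
TRUE_BETA = 1.2
N_DAYS = 2520     # roughly 10 years of trading days
NOISE = 0.01      # fund-specific daily volatility

# A crude stand-in for the market index's daily returns (the real program
# needed a fancier function to mimic the market's chaotic behavior).
market = rng.normal(0.0003, 0.01, N_DAYS)

# By construction, fund return = TRUE_BETA * market return + random noise.
fund = TRUE_BETA * market + rng.normal(0, NOISE, N_DAYS)

def beta_by_windows(fund, market, window):
    """One of the many 'variations on the algorithm': estimate beta on each
    non-overlapping window, then average the per-window estimates."""
    estimates = []
    for start in range(0, len(fund) - window + 1, window):
        f, m = fund[start:start + window], market[start:start + window]
        estimates.append(np.cov(f, m)[0, 1] / np.var(m, ddof=1))
    return np.mean(estimates), np.std(estimates)

# Three-month (~63 trading day) windows vs. shorter and longer ones.
for window in (21, 63, 252):
    mean_b, spread = beta_by_windows(fund, market, window)
    print(f"window={window:4d}  mean beta={mean_b:.3f}  spread={spread:.3f}")
```

Even with the truth known exactly, the short-window estimates scatter widely around 1.2, and different window choices give different answers – which is the point: the variations on the algorithm mostly measure noise, not the fund.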

And guess what, my intuitive notion that computing beta was silly was borne out by the simulation data. IOW, I was right.

But no I wasn’t, according to the professor (and so, despite doing far more work than any of my classmates, and frankly some fairly clever stuff, I only got a B on my paper). The trouble was I had never used abstract math and “proofs” to make my claim. I only used a computer and simulation. He claimed that, since I couldn’t do an infinite number of iterations, even a correct simulation would not deliver the same result (as a certainty) as cranking the math.

Now the Black-Scholes equation was just being formulated at that time and all the profs, esp. at MIT, UChicago and Stanford, were in love with that stuff, BUT they wanted the academic standard of nice fancy-looking formulas with new fancy-looking proofs, not computer simulations. So what I’m saying is that early in the development of the “financial engineering” that then inevitably became the 2008 crash, they simply ignored the data problem(s): a) not enough of it, b) real data is discrete, not continuous, c) datasets are often not comparable. So in their quest for beautiful math (which does win Nobels) they invented an imaginary result that doesn’t actually apply to the real world. And all the rocket scientists and thieves of Wall Street knew this; they just used math and Nobel Prizes for legitimacy to cover their tracks on what were obviously crappy investments guaranteed to lose money. As I’ve pointed out, in finance you don’t get paid to disagree. (And if you did, and were vaguely credible, Goldman Sachs would hire you to shut you up.)

So in a flash of a few seconds, thinking about one thing, connecting it to something I just read, and connecting that to my history, I realized it’s all come full circle. I still use simulation and almost never abstract math, and history has proved that right: the math, while not “wrong” per se, is simply useless in the real world and will, sooner or later, be abused to just steal.


Posted in musing | Tagged | Leave a comment

Addiction and denial

I’m going to do something unusual, at least for me: making a connection between a personal experience and theology – namely, the various ways denial appears (and is counterproductive) in addiction.

In terms of the addiction, it’s not mine (other than co-dependency) that I will speak of. No, it’s real chemical addiction, drugs and booze, of a person near me, and the progress of how the addiction is being dealt with (actually, mostly, how it’s denied).

When chemical addiction first begins to appear, denial, both by the person and those around him, is the typical first response. IOW, this isn’t really happening, the way it literally appears, but has some other explanation. When evidence of chemical addiction first appears in a close relationship it is not clear whether it really is addiction or just some “bad decisions”. Of course, especially when the person is young, it’s much easier just to minimize the evidence, look for the best interpretation, and hope it’s just a phase that will disappear.

As the process goes on and on, the evidence of dysfunctional behavior grows (car wrecks, DUIs, minor brushes with the law or other authority). But again, all is forgiven under the assumption this is just youthful indiscretion. But if it really is addiction it will get worse, AND the evidence of it gets harder and harder to explain away. Usually the people around the addict will “get it” sooner than the addict will. As the problems associated with addiction impact the addict and those around him more and more, eventually the problem will be addressed.

That’s where the whole range of social services comes in, and guess what – they’re as good at denial as anyone else. In my situation, after the problem was manifested and required action, the very first counselor just dismissed the whole thing, literally “kids will be kids and they do stupid stuff but they’ll get over it”. Well, after many therapists and rehabs and brushes with the law and failures at life, it gets very hard for those around the addict to spin the “mistakes” as anything but addiction. But denial dies hard.

Eventually the addict will most likely be subjected to programs where they will, at least by mouthing the words, “admit” they’re an addict. But do they really believe it? Not likely, as the only alternative to addiction is abstinence and that’s no fun, so excuses continue to get made. Even once it is no longer possible to objectively deny the addiction, will either the addict or those affected by the behavior truly, and with full internal conviction, admit the addiction? In my experience, mostly no, at least by those who have the greatest stake in denial. So they too will go to sessions and say the words, but you can see they really don’t believe it, because they don’t want to believe it, despite the evidence.

One reason is simple – addiction is really hard. It’s tough on the addict to break, but in many ways it is even harder for those around the addict to grasp. Who really wants to admit someone you love has such a problem and, at least sometimes, such a bleak forecast for their life? Easier to stay in excuses mode and merely hope the problem goes away. But any successful therapy for overcoming addiction has to start with admitting it exists – complete, 100%, no reservations, no spin, admission. Only then can healing begin.

And simply put, even after a decade, some people just can’t make that admission. Oh they may have no choice but to recognize all the symptoms and facts, but what they’re going to struggle with is “responsibility”. Admitting to addiction behavior is not the same as admitting to addiction. And what I’ve learned is the real issue is responsibility – there is an implied moral lapse to addiction. IOW, to a degree you have to view the addict as flawed and taking actions that are harmful to the addict and others. And normally we can’t do that without holding that person accountable, in essence assigning “blame” to them. And that’s hard to do because “blame” is so negative and almost all therapy seeks to remove (appropriately so, IMHO) “shame”. But assigning blame/shame is not the same as assigning responsibility.

So even when the addiction is obvious there is an evolution of denial that wants to make the addict a victim, not a perpetrator, of their behavior. They can’t help it, so don’t “blame” them. The problem is “blame” is so judgmental; no, the addict is probably not to “blame”, but they are responsible. You just can’t escape that.

So this is where the mental illness establishment may enter the picture and provide a new avenue for denial. And that is just labeling addiction as mental illness and thus, in a subtle way, removing all responsibility from the addict, literally as I’ve seen turning it into an entitlement. It’s not their fault (nor their responsibility). It’s just bad luck and we should be completely sympathetic/supportive to them, rather than establishing boundaries for acceptable behavior.

Now if a person were severely injured in a car crash and thus couldn’t function normally, would we “blame” them, or even hold them responsible for their inability to function normally? Of course not. Or let’s say it’s cancer (without having done risky stuff, like smoking) or heart disease (again without the risky stuff)? No, the person is a victim and not responsible for the bad that has come their way. So it’s an easy extension to do the same thing with mental illness. It is illness; the victim didn’t do something to “deserve” it. So how can we hold them accountable, since it’s beyond their control and not the consequence of their actions?

My problem with this attitude is that it is still denial. Yes addiction is a horrible problem and the addict is not a bad person. But their behavior is! And their behavior is not sustainable nor should it be enabled or even indulged or tolerated. They will not overcome the addiction as long as they have the means to avoid its consequences and with no recognition of their responsibility for the addiction.

So this is how I tie back to theology (long time coming for my point). What is the societal attitude that allows this kind of denial to continue?

I’ve been reading (actually re-reading) James Tabor’s Paul and Jesus. I’m not going into any discussion of the general gist of this book (although I found it fascinating) in this post, but to connect a very specific bit of this book to the question of addiction and denial. In Chapter Eight, Tabor says:

He (Paul) goes so far as to say that when one sins, it is not really that person who sins – but their sinful “flesh” that does so.

Now what does this mean? Tabor quotes Romans 7:15-20:

I do not understand my own actions. For I do not do what I want, but I do the very thing I hate. Now if I do what I do not want … So then it is no longer “I” that do it, but the sin that dwells within me. For I know that nothing good dwells within me, that is, in my flesh. I can will what is right, but I cannot do it. For I do not do the good I want, but the evil I do not want is what I do. Now if I do what I do not want, it is no longer “I” that do it, but the sin which dwells within me.

Tabor’s exposition of Paul’s theology is long and complicated, but I’ll summarize it simply: “the devil made me do it”. Paul believes “the flesh” (that is, earthly human existence since Adam (Paul is definitely a literal creationist and thus obviously wrong)) is born in original sin and is inherently sinful, but there is an “I” that is somehow outside the corporeal existence, and that “I” is without sin, so it is only the evil body and biological human existence that commits the sin. “I” don’t do the sin, some other “I” did it, so the good “I” is not responsible for my evil physical self. IOW, “I” is a victim – not much different from what much of current mental health says.

I won’t go so far as to say addiction is “sin” (it is certainly negative and dysfunctional behavior that interferes with normal life and harms others, so I’ll leave out the moral judgment). But I think this idea, which is fairly dominant in western culture, is the fundamental idea that allows denial of dysfunctional behavior. The “I” inside the addict (that which is loved by those around him) is not what sins but some unavoidable consequence of evil (or inheritance of genetic susceptibility to addiction in modern scientific sense) that sins.

Fine, this is a perfectly good idea to avoid “blame”. BUT it CANNOT be the idea that denies responsibility! Nor can it justify the excuses of denial.

The trouble with the “devil made me do it” idea is that it is acceptance of the failure to meet acceptable societal behavior, either in the theological sense or in the way mental illness dogma removes responsibility from addicts. In my experience many mental health professionals (and I use that word loosely, from my inexperience) properly want to remove “blame”, but they also manage to remove responsibility, and in so doing deny the possibility of recovery (except, of course, through a lifetime of engaging mental health professionals who have a rather poor record of success).

And here’s the flaw – it is POSSIBLE to overcome the sin/behavior and one doesn’t have to wait to die and be resurrected in a heavenly form to accomplish that. Millions of addicts succeed in going sober and then functioning normally in society. Addiction is not a fatal incurable illness, it can be overcome and is by many (although sadly, not by many others).

I think a key ingredient in the path to recovery, as does AA, is the rejection of denial – that is saying, as AA requires, and believing without reservation, “I am an alcoholic”. Note, IMPORTANTLY, this is not a moral (“blame”) judgment – it is merely a practical admission of facts, just as heart disease or cancer would be. It is there, it is real – BUT it can be changed.

And, I believe, as long as, in a reasonable desire not to “blame” the addict, we also remove all responsibility, we fail that addict. Others cannot fix an addict – they must fix themselves. And that starts with admitting they are responsible for their own life. Beating addiction is hard and takes lots of help (and often indulgence of relapses), BUT it can never be accepted as a permanent and unchangeable condition, inherent in their own, or (as Paul says) broadly human, nature. That is the formula for failure. The formula for recovery has to start with responsibility (again, not guilt or blame – the later steps of the 12 steps deal with that).

So Paul’s view is irretrievably pessimistic (we’re doomed, so just get on with dying and being reborn). Since resurrection is fairly useless (it’s not going to happen at all, in my view, but certainly isn’t going to happen (in the xtian view) in this timeframe), we need a better answer for this world, here and now.

So, in short, this view is just another excuse. Sin (addiction) is inevitable, so just accept it and wait for a better time. But that’s the belief that others around the addict in my life are hanging onto (it will somehow get better and go away. NO! It won’t as long as there is denial, but it WILL as soon as there is admission). Hope lies in action, today, in this world, not in some imaginary reality.

So denial is still the first obstacle to recovery and this dominant view in western thought is a major crutch that allows the denial to continue. Let’s be done with Paul and let’s be done with denial and get on to recovery!


Posted in musing | Tagged , | Leave a comment

Huskers fans will soon be howling

Since I’ve had such a great record making predictions, it’s time for another one, about 2014 Nebraska football.

The Huskers will easily roll over the remainder of the B1G West (me and a few random guys off the street could win that division) and so will continue to be a one-loss team. They have a 45% chance of winning the B1G since Michigan State is way over-rated and will be banged up by then, plus what few tricks MSU could possibly do they’ve shown in the first game.

So let’s just go ahead and assume they win that one too and therefore end the season as a one-loss team. Given the opinion the opinionators have of the B1G, they won’t, however, be ranked in the top 4 and may be lucky to even be in the top 10. But for sure they won’t get picked for the playoffs, with the usual preference for the SEC still dominating, and then maybe letting in a Pac-12 team. So Nebraska won’t get a chance to show how badly beaten they’d be by a quality team.

And therefore the fans, with their usual sense of championship entitlement, will scream bloody murder at the raw deal their obvious national-champion team got. Everyone knows the Huskers are national champions every year but someone always screws them out of it. The fact that the B1G, and this year even more so the Huskers, play 1930s football (they should go back to the classy leather helmets to really look the part) just never registers with the fans. In Nebraska football is timeless, so all the newfangled plays, like passing, are destroying the purity of the game, which is, of course, the QB running every play behind a massive O-line.

But the thing the Husker fans miss is the money at the national level. Who wants to see a boring championship game? So even if somehow the Huskers did deserve to be in the playoffs, that would cut the revenues a lot, and since TV dominates everything, forget it, Huskers. Until you have an exciting style of play, your audition for prime time will elude you.

Posted in opinion | Tagged | Leave a comment

Time to get back on the wagon – 3A

Just a bit more to add to yesterday’s post about weight analysis.

weekly104-threePeriods

This shows most of my six months of attempts at maintenance (the earliest data is a truncated set). The blue markers were where I was at just prior to my camping vacation; in fact the couple of upticks were just some unrestrained eating before leaving.

The red markers, after the gap (no data) for vacation, mark my return, where I didn’t exercise much consumption restraint but did keep up exercise. That led to the upward drift that got my attention as of post -2 of this series, and I began to try to push weight back down. The parabolic curve fit shows this up-and-down cycle. Note the high volatility of weights toward the end of the red dataset; this is literally salt. A few days in there (since it was tomato harvesting time) it was homemade salsa time, and naturally salty chips and of course also margaritas – booze and salt. One day of that can create a 4lb upward swing, then easily reversed with a day of abstinence.

The green markers (after another data gap, for two vacations) represent my attempt to return to “normal”: reduced eating, steady exercise. The slope of the regression line is artificial and silly since the first couple of days of weight loss were the same thing, sweating off the gain from chips and salsa and margaritas in Santa Fe. Now I’m looking at a more steady (and real, but with insufficient data yet to get a very accurate rate of drop) reduction, hopefully heading back down to my target (just a bit lower than the blue data points).

The total six months or so of “trend” is shown by the slightly upward linear trendline all the way across the graph. This has numerous statistical problems due to the gaps in data plus irregular reporting during the middle section, but nonetheless does represent the “truth” of my last six months and probably even roughly the correct magnitude (e.g. the slope of the trendline is 0.25lb/week; without any restraint it would probably be more like 0.5lb/week).
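A trendline slope like that is just an ordinary least-squares fit over noisy, gappy weekly data. Here’s a minimal sketch of how such a slope is extracted – the numbers are made up for illustration, not my actual weigh-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly weigh-ins (lbs): a slow 0.25lb/week drift buried in
# ~1.5lb of week-to-week noise, with vacation weeks missing (NaN).
weeks = np.arange(26, dtype=float)
weights = 195 + 0.25 * weeks + rng.normal(0, 1.5, weeks.size)
weights[[8, 9, 18, 19]] = np.nan   # no data while traveling

# Ordinary least-squares trendline, fit only where data exists.
mask = ~np.isnan(weights)
slope, intercept = np.polyfit(weeks[mask], weights[mask], 1)
print(f"trend: {slope:+.2f} lb/week")
```

The fit recovers a small positive slope even though no single week looks alarming – exactly the slow creep the eyeball misses.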

And that’s the classic problem with maintenance. There probably will be a steady drift upward with a lot of noisy changes day-to-day (thus making it hard to detect real gain over time without these simple statistical analyses). So you’re doing OK, but actually, at a very slow rate (well below the accuracy of any scale), you’re creeping up. By the time you really notice it (letting out a belt notch or going up a dress size) you’ve already had enough gain that it will take a real and focused repeat of the initial weight loss regime.

Now I understand completely why the nutrition scolds advocate a change in lifestyle, not just a diet. Your over-eating and under-exercising that led to the weight gain you reversed with diet discipline is going to just come back and sabotage you. Your body wants to be fat and you’re letting it get its way.

So it would be great if it were as simple as changing your lifestyle, basically go mostly vegan and increase exercise. But come on, why is that any easier (or more likely to succeed) than dieting? Do you really believe if I eat kale and quinoa for a year I’m going to like them instead of burgers and shakes – get real! The only people who believe this are those naturally slim scolds who feel so superior to everyone else but freely hand out useless advice. Most people, because that’s what evolution did to us (the fat ones survive in bad times and make more babies than the skinny ones), have trouble with weight, and a lot of useless, impossible-to-follow advice is stupid.

Now the place where some advice can be useful is getting people to just pay attention. It’s easy to, every now and then, think before putting something in your mouth (you may not even be hungry, you just want it) or think before skipping exercise. And calorie labeling (forget the other, ever larger and more confusing (or politically correct) labeling the scolds, even Michelle, are pushing for) can help. Just check what the difference between crispy and grilled chicken is. Little changes due to knowledge are easy to make. Most people have no idea how bad alcohol is – so a little wine won’t hurt, will it (might as well head to the donut shop). So raising awareness, getting more information out there, encouraging moderation – terrific!

But the scolds can’t leave it at that. They have to sell you their politics: vegan is the only way, omega-3 is the only fat, anything white is poison – all this extreme stuff not based on science, just someone’s own agenda (looking at you, Bittman). People need real and useful help, not pet theories or ideas dripping with moral superiority.

And the answer is simple: a) calories in must be ≤ calories burned, b) if you’ve been fat you’re going to be hungry to stay fit, get over it (hey, hungry is better than kale and quinoa), and, c) there are no “superfoods” which provide silver bullets to make it easy.

So now let’s just see if I can manage to follow my own claims and see where I’m at a year from now. One fun thing of blogging is that making indelible records either forces you to do what you say you’re going to do or you end up with a lot of egg on your face (that didn’t bother the neocons – they just denied saying, as I saw Cheney do, what the video recordings show they said). If I drift back up in weight, given I’m totally skeptical of nutrition-scold nonsense, then it’s on me – I can’t blame the fad diet book I didn’t follow.

Posted in comment | Tagged , | Leave a comment

Time to get back on the wagon – 3

… or, as an alternate title, another boring weight post. Back in this post I declared my intent to return to my weight control discipline, as I was slowly slipping back up again and had hit a threshold where I had to return to more aggressive control (with some re-loss).

Well, things didn’t go as planned. For a couple of weeks I managed, at least, to halt my gain but made little progress reversing it. Then I hit the road. First a trip to Boston (with social events involving eating, plus eating out in general). Second, an even longer vacation. Unfortunately going to restaurants and enjoying meals is part of what a vacation means to me. So now, 12 days after my vacation, I’m finally back to some discipline again.

For those of you, Dear Readers, that haven’t followed my posts, here’s my overall history (I’m a graph nut):

weekly104-fullHistory

This graph is my two-year history. The red line is my weekly weigh-in (a few points interpolated when I wasn’t near my scale). The steady decline in the first 24 weeks was when I was really, really diligent about the loss; the blip at week 26 shows the effect of my first vacation during this process, fortunately fairly quickly reversed. Then, in the week-40 time period, I hit my target. So, off the severe loss regimen and onto a maintenance regimen, what I knew would be the hardest part. The green dots actually do a better job of showing my trends; these are a quarterly (13-week) moving average. The first real blip up there was another, lengthy vacation (actually two of them), followed by a return to discipline. But it was the slow but steady rise in weeks 70-91 that worried me. Even though the rise was only 10lbs, it was steady and represented a worrisome trend for the future. And while I hadn’t reversed the trend as of my -2 post, I did halt the rise. Then the most recent part of the red line shows my two recent vacations plus the aggressive recovery from them that started two weeks ago. It’s going to take a while to push the red line down, then slowly bend the green line down, but I’m committed to it – back to my 180 target.

The following graph gives an expanded view of my time in “maintenance”.

weekly104-quarterMA

Again I use the 13-week moving average to smooth out little fluctuations and give a better indication of the trend (btw: to any readers, I suggest you consider doing some of this statistical and graphing stuff, as individual weights do vary a lot and it takes a lot of data and smoothing to get clear trends – see all my previous posts about scale variability and daily weigh-in variability). This exaggerated scale makes it look really bad, but bear in mind we’re only talking about a 7% variation of my total weight – enough to be significant, but also small enough to declare I’m not doing that badly on maintenance (and again, almost all scientific articles indicate it’s what happens after the initial weight loss that is the hard part). The second cycle of rising weight is more “ragged” (it involves many events in my life) than the first one (almost entirely due to a single vacation, up quick and reversed almost as quickly). IOW, the sustained rise during the past 25 weeks has been more difficult to control.
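For readers who want to try the 13-week moving average themselves, here’s a minimal sketch; the weigh-ins are synthetic, just to show how much the smoothing suppresses week-to-week noise:

```python
import numpy as np

def quarterly_ma(weekly_weights, window=13):
    """Trailing 13-week (quarterly) moving average of weekly weigh-ins."""
    w = np.asarray(weekly_weights, dtype=float)
    return np.convolve(w, np.ones(window) / window, mode="valid")

# A year of synthetic weigh-ins bouncing around a flat 195lb.
rng = np.random.default_rng(0)
raw = 195 + rng.normal(0, 2, 52)
smooth = quarterly_ma(raw)

print(f"raw spread:      {raw.std():.2f} lb")
print(f"smoothed spread: {smooth.std():.2f} lb")
```

The raw series jumps around by pounds; the smoothed series barely moves, which is exactly why a slow real drift shows up in the green dots long before it’s visible in any single weigh-in.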

This is somewhat due to events (it is hard to maintain when your daily or weekly activity changes) and somewhat due to lack of attention. But the key part is that gathering data like this and then graphing it clearly focuses attention on the failure to maintain and what’s needed to get back under control. (In my case, failure to maintain means going back on some meds I don’t like, so I have a strong incentive to stay on plan, way more so than mere appearance issues.) So as of my -2 post I halted the gain but didn’t reverse it, and now I’ve very slightly begun to reverse.

So far, so good, but some challenges lie ahead. We’re entering the time of year where avoiding excess eating and also maintaining exercise will be a challenge. With upcoming holidays, family get-togethers, and other issues for me, I can’t so single-mindedly just focus on weight (and in the last couple of months my daily calorie burn from exercising has fallen about 30% as well (time issues), making intake all the more important – you might be able to lapse on one or the other, but definitely not both). And as I recently posted, I don’t believe in the gradual approach. I need to do everything I can to get back to the 2.5lb/week loss I managed in my first 24 weeks so I can get the recent blip back under control – no dilly-dallying, DO IT!

And as my final graph, here’s the last year (I roll this graph every week by a week):

weekly104-52weeks-1

Here the blue markers are actual weekly weigh-ins and the red markers are the interpolated points where I was away from my scale (given the variability of scales there is no point in getting weights from some other scale). For some people this would be viewed as rather poor maintenance (to some degree I agree, but I also allow myself some of this, as I’m not going to obsess about weight all the time; plus, after dropping 65lbs to reach 200lbs, I’m not too unhappy to stay (mostly) below 200lbs for a year). But the trendline (which I believe despite the low r^2) isn’t good: 1) it’s slightly up rather than significantly down, and, 2) it’s about 7lbs over my target (185lbs, the BMI that is my threshold for “overweight”, even though I’ve been below borderline obese for nearly two years).

Now if I duck below the 190 level for the next few weeks the trendline will turn down, but that’s deceptive. The real challenge is to get (steadily) back below the 185 threshold, which is going to take more discipline than I’ve exercised over the past year, and then see if I can’t hold that for a while (if I can, I’m going to try to go below 180, which I’ve only reached in a few daily weigh-ins).

So we’ll see how well I do and you can hold me to my goals, Dear Reader. Even if I blow it (some, not a lot, I hope) through end of 2014, I’m going to stay on top of this so my 2015 New Year’s resolution will be to not exceed 190 during 2015.

And to anyone reading this who is working on weight control: it is persistence and vigilance that really determine success. I have to do this because aging is making lots of exercise (I averaged over 1000 calories/day of it during rapid loss) more difficult, so upward blips are going to be hard to reverse.

And, btw, stay away from the chips and salsa and margaritas I so much enjoyed while in Santa Fe.


Posted in comment | Tagged , , | 1 Comment