Tuesday, November 27, 2012

Economic Egg Allocation

A common truism goes "Never put all your eggs in one basket."  Economists and financial planners love to invoke this saying when trying to encourage others to diversify their portfolio.  There's no doubt that most of the time this saying provides sage advice.  However, you might be surprised how often attempting to diversify can do more harm than good.

In 2008 a study was conducted in which participants were given a container holding three "eggs" labeled X, Y, or Z. All three "eggs" in a given container bore the same letter, so each container was all Xs, all Ys, or all Zs. Participants were told they would draw one egg from their container, and that after the experiment a computer would randomly select X, Y, or Z. If the computer's selection matched their "egg" they would receive $30; otherwise they received nothing. Participants were then offered the option to pay $1 for a container with two "eggs" of one letter and one of another, or $2 for a container with all three letters. Otherwise they paid nothing and kept their single-letter container.

The crucial feature of this study is that what a participant drew from their container (and thus what was in the container) was completely irrelevant to their odds of a payout. The only thing that mattered was whether the computer's random selection matched the drawn letter, and regardless of which letters a container held, that probability was always one third.

Despite this fact the majority of participants elected to pay to switch to a container with more than one letter inside.  What possible explanation could there be for such behavior?  

As it turns out, there are several plausible explanations. Firstly, participants could simply be misunderstanding the math involved. It is very intuitive to say, "I have all X eggs; what if the computer picks Y? Then I can't win. I'll improve my odds of winning by getting a Y egg." However, this merely reshuffles the odds without increasing them. While the participant has given themselves a one-third chance of winning if the computer picks Y, they have also reduced their chance of winning by a third if the computer picks X. The net effect is no change at all, except for the loss of the participant's payment for the new container.
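The arithmetic is easy to check by brute force. Here is a tiny enumeration (the function name and setup are my own sketch, not the study's) covering every combination of drawn egg and computer pick:

```python
from fractions import Fraction
from itertools import product

def win_probability(container):
    """P(drawn egg matches the computer's uniform random pick of X/Y/Z)."""
    third = Fraction(1, 3)  # computer picks each letter with probability 1/3
    total = Fraction(0)
    # Average over a uniform draw from the container and the computer's pick
    for egg, pick in product(container, "XYZ"):
        total += Fraction(1, len(container)) * third * (egg == pick)
    return total

# All-same, two-and-one, and all-different containers pay off identically
for container in ("XXX", "XXY", "XYZ"):
    print(container, win_probability(container))  # always 1/3
```

Whatever mix you pay for, the win probability stays exactly 1/3; only the payment changes.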

People faced with uncertainty in which there is no clear best choice also tend to believe that diversity is better. While this is often true, this mental shortcut could have led participants astray in this case. There is some evidence for this explanation: in later experiments, participants were much less likely to pay to switch from a more heterogeneous container to a more homogeneous one.

Participants may also have been attempting to avoid later regret.  Imagine having an all Xs container and later being told the computer randomly selected Y. It's very natural to have the reaction that you should have paid to switch to a container with Y in it.  Participants likely could have anticipated this reaction and acted to mitigate it by paying to include different letters in their container. When every letter is included participants can more easily ascribe their loss to luck rather than their decisions. 

The core of this study is the idea that people often fall into the trap of false diversification. Simply paying for more heterogeneity is not enough; to be worth its cost, diversification must improve expected return or reduce risk. Otherwise it is simply a needless additional cost of investment.

Wednesday, November 21, 2012

Talking Turkey

Every year millions of Americans gather together for the second most American holiday of the year, Thanksgiving (the first, of course, is the festival of explosions and barbecue known as the 4th of July). On the fourth Thursday of each November we gather with our families to celebrate enormous quantities of food and the start of the Christmas shopping season. It's hard to imagine two things closer to our collective hearts.

Thanksgiving is not a time for waxing philosophic or deep thoughts. It's a day of simple enjoyment, family togetherness, and loosened belts. In that spirit, here are some facts you might not have known about the most iconic Thanksgiving dish, the turkey. All information can be found at the USDA, National Turkey Federation, or FSIS websites if not otherwise cited.

Over the last 40 years the average size of a turkey has increased by 64%. The average income of the top 20% of earners in the United States over the same period increased by approximately 59% (in 2011 dollars) source. Is the "Big Turkey" lobby making the rich richer?

Turkey production peaked in 1995 at almost 300,000,000 turkeys. That same year Newt Gingrich became Speaker of the House. During his tenure the turkey industry experienced its worst four years in recent history. Why does Newt hate Thanksgiving?

Last year total domestic turkey production was 7.3 billion pounds. Total miles traveled by car in America this year will total just under 3 trillion. This gives a ratio of about 410 miles per pound of turkey, or four 1/4 lb turkey sandwiches per trip between San Diego, CA and Tucson, AZ.
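For the skeptical, the ratio is just division on the figures quoted above (rounded inputs, so the result lands a hair over the post's 410):

```python
turkey_lbs = 7.3e9   # total domestic turkey production, pounds
car_miles = 3e12     # "just under" 3 trillion total miles traveled by car

miles_per_lb = car_miles / turkey_lbs
print(round(miles_per_lb))  # ~411 with a flat 3 trillion; "just under" gives ~410
```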

Total domestic turkey consumption in 2011 was approximately 5 billion pounds, equaling the consumption of all seafood for the same period source. This left the United States a sad second in seafood consumption but brought home the gold in turkey gobbling.

Finally, the chart below demonstrates a clear correlation between disability awards (source) and turkey exports.

It's obvious that turkey has a far greater impact on our lives and economy than most of us realize. And if you find yourself disbelieving any of the conclusions drawn from the above data, I can only suggest one thing: take this posting, and your Thanksgiving feast, with a grain of salt.

Happy Thanksgiving!

Tuesday, November 13, 2012

Something Fishy...


If you're a researcher who frequently uses fMRI analysis, it's likely that some interested layman has sent you something similar to this in the last couple of years:

Dead Fish, Exploding Bowels Win Spoof Nobel Prizes

If you're unlucky you may have received numerous copies of such articles, along with half-joking suggestions that if fMRI scans are finding statistically significant activation in dead fish, perhaps it's time to change professions. To briefly summarize: a researcher by the name of Craig Bennett presented a dead fish with a simple task and demonstrated that improperly analyzed fMRI data could show statistically significant results.

As humorous as Bennett's fishy fMRI study is, it does bring to light real concerns about the application of fMRI data. Unfortunately, thanks to poor journalism and lazy reading, a large portion of readers are drawing the wrong conclusions.

Bennett's study aimed to warn fellow researchers against succumbing to the dangers of data overload. fMRI analysis by its nature creates enormous quantities of data. If improper statistical tests are applied to that data, a great many false positives can result. Here is a very rough analogy.

Suppose you had 100,000 six-sided dice and a table with 100,000 slots for them. Further suppose you hypothesized that some areas of the table were more prone to having a "6" face up than others. You toss your 100,000 dice onto the table and shake it around a bit to make sure every die falls into a slot. You then analyze the table as follows: a die with its "6" side facing up is considered "significant." However, since you know a die will naturally show a "6" one sixth of the time, you add an additional constraint: you will look for clusters of significant ("6" face up) dice in order to confirm your hypothesis. You determine how large a cluster of significant dice you require in order to support your hypothesis, and proceed to check for such clusters.

As it happens, the probability of any non-edge "6" being adjacent to another "6" is slightly over 50%. The probability of that second "6" being adjacent to a third, different "6" is approximately 42%, and this probability holds roughly constant for each additional "6" (assuming the growing cluster never wraps around to touch itself). These probabilities lead to the conclusion that a given die anchors a chain of seven "6"-face-up dice with a p-value of < .01, a highly statistically significant finding. So if we find clusters of seven "6"-face-up dice, is it reasonable to conclude that some areas of the table are in fact more prone to "6"?
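Those figures can be checked directly. A sketch of the arithmetic, assuming four-neighbor adjacency (which is how the 50% and 42% numbers above work out):

```python
# Rough arithmetic behind the dice-table analogy (4-neighbor grid assumed)
p_six = 1 / 6

# P(at least one of a non-edge die's 4 neighbors shows a "6")
p_first_neighbor = 1 - (1 - p_six) ** 4   # ~0.518, "slightly over 50%"

# Each new die on the chain's frontier has ~3 as-yet-unchecked neighbors
p_next_neighbor = 1 - (1 - p_six) ** 3    # ~0.421, "approximately 42%"

# Chance that a given die anchors a chain of seven "6"s
p_chain_of_7 = p_six * p_first_neighbor * p_next_neighbor ** 5
print(p_chain_of_7)  # ~0.001, comfortably under the p < .01 threshold
```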

Of course not. In fact, we would expect to find well over 500 such clusters on our dice-covered table. If we conducted the experiment again we would find another 500, and it's likely that at least a few of them would appear in the same or similar areas as in the first experiment. Yet no reasonable person would conclude that some table areas magically caused "6"s to be rolled more frequently. This is simply the danger inherent in working with large amounts of data: even small percentages of false positives can be misleading when poorly interpreted.
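You don't have to take this on faith. A quick simulation (my own sketch: a roughly 316 x 316 grid of about 100,000 dice, counting connected components of face-up 6s under four-neighbor adjacency; the exact tally depends on how you define a cluster) shows chance alone producing "significant" clusters:

```python
import random
from collections import deque

def count_clusters(side=316, min_size=7, seed=0):
    """Roll side*side dice into a grid and count connected
    clusters of "6"s with at least min_size members."""
    rng = random.Random(seed)
    grid = [[rng.randint(1, 6) for _ in range(side)] for _ in range(side)]
    seen = [[False] * side for _ in range(side)]
    clusters = 0
    for r in range(side):
        for c in range(side):
            if grid[r][c] != 6 or seen[r][c]:
                continue
            # Flood-fill the connected component of 6s containing (r, c)
            size, queue = 0, deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < side and 0 <= nx < side \
                            and grid[ny][nx] == 6 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if size >= min_size:
                clusters += 1
    return clusters

print(count_clusters())  # clusters of 7+ "6"s produced by pure chance
```

Every one of those clusters is a "statistically significant" finding about a table with no interesting properties whatsoever.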

Bennett's study was in no way about the failures of fMRI research. It was a warning to researchers utilizing such tools that they must be vigilant in their statistical methods. Several statistical tools have been developed to safeguard against the perils of large data sets, including multiple comparisons correction, false discovery rate control, and family-wise error rate control. While the majority of researchers already use these tools to ensure valid results, Bennett's point was simply that everyone should be using them.
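To see what one such safeguard buys you, here is a minimal sketch (the 10,000-test, alpha = .05 setup is my illustration, not Bennett's parameters). Under a true null every p-value is uniform on [0, 1], so roughly 5% clear the naive threshold, while a Bonferroni-corrected threshold catches almost none:

```python
import random

random.seed(42)
n_tests, alpha = 10_000, 0.05

# Simulate 10,000 tests where the null is true everywhere (no real
# effect exists): null p-values are uniformly distributed on [0, 1]
p_values = [random.random() for _ in range(n_tests)]

uncorrected = sum(p < alpha for p in p_values)
# Bonferroni: require p < alpha / n_tests to control family-wise error
bonferroni = sum(p < alpha / n_tests for p in p_values)

print(uncorrected)  # ~500 false positives at a naive alpha of .05
print(bonferroni)   # nearly always zero after correction
```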

fMRI analysis, like all research tools, has its drawbacks: poor temporal resolution, bulky and restrictive equipment, large initial investment, and the need for highly trained staff are only a few. However, the idea that fMRI data is some sort of pseudoscience is not a valid conclusion to draw from Bennett's work. In fact, even a small amount of investigation reveals that Bennett used additional statistical tests in his fish study to confirm that the famous statistically significant voxels were in fact false positives.

Ultimately Bennett shined a spotlight on a serious issue in a humorous way.  It's unfortunate that many readers have misunderstood his findings in a way that casts doubt on a useful experimental tool.  However, when you're dealing with nearly seven billion potential data points it's inevitable you'll find clusters of ignorance.




Wednesday, November 7, 2012

Did Obama's Re-election Tank the Markets?


Last night President Obama was re-elected to office.  Today the Dow fell 2.36% (312.95 points) to under 13,000 for the first time since August.  Certainly such a reaction is a strong condemnation of the Obama administration.

Or maybe it's not.  Economics has long held to the idea that the market is reasonably efficient.  That is to say that all public knowledge is already factored into stock prices. If this is the case (and a great deal of research indicates it at least mostly is) then why would the election of a president who was expected to win cause such a profound market reaction?

Here are some headlines from major media sources regarding today's market decline followed by brief quotes from the subsequent article:

Markets Plummet After Obama Victory Puts ‘Fiscal Cliff’ in Focus. "As I discuss in the accompanying video with Macke, Henry Blodget and Michael Santoli, the market is fearful that Obama's reelection means a continued stalemate in Washington, where the Republicans maintain control of the House."

Post-Election Retreat: Dow Plummets Below 13K for First Time in 2 Months. "As a result of the remaining sharp divisions in the White House and on Capitol Hill, market participants almost immediately shifted their focus to the looming fiscal cliff."

Bank stocks push Dow below 13,000. "This is purely a reaction to the political landscape and an investor response to the policies on the table -- all the new regulation that will add to the costs of doing business for certain industries and sectors"

All of these articles mention Obama's re-election and the approaching "fiscal cliff." Some then discuss other factors that may have led to today's market decline. However, these and many others ride the coattails of election coverage to imply that last night's results, both presidential and congressional, had a large impact on today's market.

The fact is, last night's elections went essentially just as predicted. Unbiased analysts with no interest in building tension or suspense predicted the outcomes of nearly every race with incredible accuracy. In short, there was very little in the way of surprises.

So if markets are reasonably efficient (they price in already-known information) and the results of last night's election were unsurprising, then how can the outcome have had such a staggering effect on the market? Clearly there are other factors at work. Briefly, those factors include continued weakness in Europe, the European Commission's growth predictions, and strength in the dollar.

All of this is opinion and theory. Without solid backing, my claim that Obama's re-election didn't have a meaningful impact on the market is as empty as others' claims that it did. So here is the math behind my theory.

Most unbiased analysts had Obama as a 90%+ favorite going into last night (source). To be generous, we'll assume the last time traders updated their political knowledge was November 1st, when Obama stood at an 80% chance of victory. So Obama_Victory = .8 and Romney_Victory = .2.

Today the market fell 312.95 points. We'll assume that for Obama to be mentioned in virtually every financial recap article of the day, his victory must be responsible for at least 150 points of that decline, with the remainder due to other factors. A reasonably efficient market would price itself between where it would stand under either candidate's victory, weighted toward the likely winner. Under our assumption, moving Obama's probability of victory from .8 to 1.0 (since he did win) cost the market the 150 points we credited to his election. In other words, a .2 increase in Obama's chance of victory is worth -150 points on the Dow. Therefore, if Romney had won, Obama_Victory would have fallen from .8 to 0, a change of .8 in the opposite direction, and the market would have increased by 600 points.
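That back-of-the-envelope reasoning, as arithmetic (remember, the 150-point attribution and the .8 probability are this post's assumptions, not data):

```python
# Efficient-market sketch: price = P * V_obama + (1 - P) * V_romney
p_obama = 0.8           # assumed pre-election probability of an Obama win
election_drop = -150.0  # points of today's fall attributed to the result

# Moving P from .8 to 1.0 (a +0.2 change) moved the market -150 points,
# so the implied gap between the two outcomes is:
gap = election_drop / (1.0 - p_obama)  # V_obama - V_romney, about -750

# A Romney win would have moved P from .8 to 0, so the market jumps
# from the blended price all the way to V_romney:
romney_move = -p_obama * gap           # about +600 points
print(round(romney_move))  # 600
```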

Keep in mind that at every step we fudged the numbers to minimize this possible gain. If we assumed the market updated its political knowledge more than once a week (it does), and that all or even a majority of today's market movement was due to election results, we'd have come to a conclusion implying market swings of up to 2,500 points. Still, even our generous 600-point gain has only been matched twice in the history of the market. Is it reasonable to think that a fully expected result could have such a profound effect?

I think not.