Friday, December 20, 2013

A Christmas Quickie


Season's greetings to everyone this holiday season. In lieu of the normal posting, this week you'll find an interesting graph charting expected gift expenditures against average wealth by nation. The further right a country appears, the more generous its people are with Christmas gifts; the nearer the top of the graph, the more its population can generally afford. Thus countries towards the top left give a smaller percentage of their income as gifts, while countries towards the bottom right give a larger portion.

Residents of Luxembourg give the most on average (but can also afford the most), whereas the Irish seem to give most selflessly. Meanwhile, the Dutch appear to give relatively the least for Christmas.




Until next time have a happy holiday season!

Friday, December 13, 2013

Like Riding a Bike

The expression, "It's like riding a bike." is commonly used to indicate an activity that once learned is easily repeated regardless of how much time has passed.  But what makes some activities such as bike riding easily retained while other activities such as basketball rapidly deteriorate without practice?

A recent paper published in the "Proceedings of the National Academy of Sciences of the United States of America" explored a possible root cause of this phenomenon. The paper, co-authored by Robert Ajemian, Alessandro D'Ausilio, Helene Moorman and Emilio Bizzi, deals with neuroplasticity and how motor skills are learned. In brief, the researchers posit that motor skills are encoded in the brain by a large number of neurological pathways which are all equivalent. These pathways are refined via feedback during skill learning and largely maintained by a constant babble of neurological "noise" (neuron-to-neuron signaling that serves no specific purpose) found in all complicated brains.

The question of skill or memory retention is one that has proven difficult for researchers to solve. Clearly the human brain has a relatively high level of plasticity, or else we would be unable to learn new skills. However, too much plasticity would cause existing skills to rapidly deteriorate whenever we weren't using them. Some middle ground must exist wherein we can both learn and retain, but the exact mechanism for this ideal state has generally remained mysterious.

The co-authors of the aforementioned paper have presented a model of learning and retention that seems a viable explanation for how our brains manage to acquire new skills without losing those we already know. During skill acquisition the brain establishes many pathways which correspond to a given activity, riding a bike for example. These pathways are all equivalent in that they all have the same end result of causing the muscle movements which propel a bike forward. During bike riding many of these pathways are utilized and reinforced by feedback mechanisms. When we take part in some other unrelated activity, cooking for example, the pathways are largely inactive. However, even while inactive these pathways still receive a great deal of traffic from the surrounding neurons. It's this neurological noise which continues to reinforce the pathways and limits skill loss. Although over time many of the neurological paths will break down for biological reasons, a large portion will remain, allowing for the retention of previously learned skills.

So why do some skills stay with us longer than others? The researchers conjecture that it has to do with how unique a skill is compared to other daily activities. Bike riding doesn't have a great deal of overlap with anything else most people do in their daily lives, so it tends to be retained for a long while. Proficiency in video games, however, tends to deteriorate rapidly, as each game is similar to the next in terms of motor function (they all essentially amount to pressing buttons). When skills utilize movements which are not commonly applied to other purposes, it seems they are maintained far longer than skills which use more generic movements.

The researchers' new model is interesting insofar as it nicely deals with common questions surrounding skill acquisition and retention. However, it's important to note that there is little experimental evidence to directly support the model at this point. The researchers have simply proposed an interesting idea which merits exploration, not proven anything experimentally. Still, all experiments have to begin with a hypothesis, and this one seems promising.

That's all for this week. Until next time stay safe and rational.


Friday, December 6, 2013

Informed Choice

Recall for a moment your first job. On the first day it's likely you didn't have much of a clue what was expected of you. Like most people you were probably rather nervous, and it was a relief when your boss gave you a relatively simple task you could easily handle. But what if the situation had been different? What if you had arrived at 9 a.m., been pointed towards an office, and told to get to work? It probably would have been rather distressing. You'd have no idea what your responsibilities were, no idea whether you should be calling clients or compiling reports, or whether you were any good at either. It's far less stressful to be given a simple task and left to do it while you acclimate to your new environment.

Yet for years the idea that more choice is better has been a common tenet of US policy creation. Wouldn't a worker be happier if he were allowed to choose his own tasks? Aren't patients more satisfied when they're making their own decisions about their healthcare? Don't retirees want options when it comes to their financial planning? Like most complicated questions, the answer is "it depends."

A recent series of studies conducted at the University of Texas at Austin by Erika Patall, Breana Sylvester and Cheon-woo Han sought to explore the relationships between motivation, performance, task selection and perceived competence. Each study attempted to trick participants into believing they had a certain competency level at word puzzles. Participants took a test and then were given random feedback purporting to indicate their skill level; in other words, the reported scores were unrelated to actual test performance. Participants were then asked to choose between similar word puzzle tasks and afterwards asked how important having a choice was to them.

Performance, motivation and importance of task selection were all highly correlated with perceived competence for a given puzzle set. Essentially, participants who believed they were good at a given task performed better, were more willing to exert themselves, and wanted more control over their choices than participants who believed themselves to be unskilled at the task. Importantly, participants' beliefs about their competence had little relationship to their actual skill level.

What does this information mean for us on our first day at a new job, or when we're making healthcare decisions? Generally speaking, the more informed the decider, the more options they'd prefer to have. A doctor making healthcare decisions probably feels that more choice is better. Similarly, a financial planner would probably prefer a highly customizable retirement plan to a one-size-fits-all option. However, the general public for the most part would simply prefer that the most informed party narrow the options down to a best few. When showing up for our first day on the job, most of us just want to be told what to do by an experienced employee. Later, when we have more information about the environment, we'll start making decisions about which tasks we'd prefer.

An interesting wrinkle of this phenomenon is its significant policy implications. When it comes to crafting healthcare plans we don't often ask for the public's input; instead we turn to groups of informed experts to craft options for us. However, as explained previously, more informed individuals are likely to prefer more choices than the average person. Doctors are likely to create a variety of health plans geared towards serving very specific needs. Yet when confronted with such a large selection of choices, most people become confused and frustrated. Often a better approach is to craft a few simple plans that may not serve everyone as specifically as possible, but will leave participants more satisfied.

The important lesson is that more choice isn't always better. The higher an individual's believed competence in a subject, the more likely they are to desire both more choices and self-determination. Options should be tailored to the audience rather than simply covering every possibility for completeness' sake.

That's all for this week. Until next time stay safe and rational.



Friday, November 22, 2013

On Economics Education

One of my favorite economics stories relates to textiles in New York City. Decades ago the Big Apple was a major hub of the textile industry. The government wished to promote the industry further and therefore changed the depreciation schedule for textile machinery, reducing firms' tax burden. This led firms to purchase far more machinery to take advantage of the favorable depreciation rates. However, as factory floor space to house the machinery was prohibitively expensive in New York, the firms quickly moved out of the city in order to have room for their new machines. Thus, a tax modification meant to promote a healthy industry's growth ended up driving it out of a city completely.

This is the sort of story that economists are bred upon. Tales of unintended consequences adorn the syllabuses of economics professors in universities throughout the nation. Yet at times it's difficult to see what real-world application such stories hold.

Let's back up a moment and talk about what economics education does right. Firstly, any decent economics program is going to expose students to a large variety of mathematical tools. These tools are useful for solving an enormous variety of problems, from profit maximization to crafting employee benefit packages. This portion of the economics curriculum is beyond reproach.

Secondly, economics programs generally offer highly specialized courses, such as healthcare economics, the economics of technology and innovation, and labor economics, which delve deeply into how economics applies to certain topics or industries. These classes are in general useful and informative for someone going into a specific field. For example, healthcare economics details how hospitals function, important regulations regarding hospital construction, different insurance methods, how insurance functions, the difficulties that arise due to insurance, and so on. These are all clearly topics that a hospital administrator or insurance executive would find applicable to their work. However, these courses generally don't translate well into other arenas. Health insurance and auto insurance do share some similarities, but it's difficult to make a case that auto insurance leads to massively more auto collisions, whereas it's relatively easy to show health insurance leads to massively higher healthcare costs and utilization. Similarly, hospital construction regulations have little to do with monetary policy or unemployment rates.

Economics programs also effectively promote critical thinking skills. Our textile story above serves this end. A professor may present the first part of the story to the class and ask, "What are the expected ramifications of accelerating the depreciation schedule for machinery?" A student will correctly answer that accelerating the depreciation rate will encourage firms to buy more machinery. Then the instructor reveals the unintended consequence of the change, demonstrating that policy makers must think beyond the obvious ramifications of their decisions. Hopefully students learn that sound policy decisions must be thoroughly examined for any potentially unwanted repercussions.

But herein lies the difficulty: we give students no historical context from which to draw conclusions. Give an enthusiastic economics student a real-world problem and they'll likely be able to give you a great deal of useful information. They'll very correctly tell you that universal healthcare eliminates the problem of adverse selection while likely increasing healthcare utilization, but their conclusion on whether the Affordable Care Act will help rein in healthcare costs will likely be based on which news channel they watch. They'll accurately inform you that a depreciation of Greece's currency would greatly improve its global competitiveness, but they won't be able to tell you whether Greece would be better off leaving the Euro. Economists cultivate a great deal of skill in identifying the direction of different forces acting upon individuals, firms and economies. However, we often fail at identifying which force is greater. Only historical context can inform such questions, and it is a topic often ignored in economics education.

Largely this is because economics is an ever-changing field in an ever-changing world. Why discuss the tulip bubble of the 1600s when we can demonstrate similar lessons from the more recent housing bubble? For the most part this logic holds up: the housing bubble is more recent and generally more relevant to the economy of today. However, there are differences between the two events which make both worth studying. Tulip bulbs, for example, were mostly sold during only four months of the year, with transactions during the rest of the year taking place in a futures market. Additionally, tulips were easily traded, more than quadrupled in price in under six months before eventually collapsing, and had little intrinsic value. In contrast, homes and mortgages are sold year-round, are often complex to buy or sell, took far longer to rise in price, and are always worth at least the land they're built upon. Which bubble is then the more apt comparison to the recent rise of the bitcoin market? Inherently valuable real-world real estate, or popularity-based, easily traded tulips?

A stronger tie to historical examples would help economics students draw conclusions about what is likely to happen in similar modern scenarios. It would provide a basis to draw from, which could then be modified by the application of critical thinking. However, the current paradigm of teaching students to enumerate all the potential ramifications, without giving them a mechanism to reduce those ramifications to a final conclusion, leads to a lack of applicability in practice.

That's all for this week. Until next time stay safe and rational.


Thursday, November 14, 2013

A Five Year History

It's been over five years since the heart of the financial collapse. The cleverly named "Great Recession" sits well behind us and the economy seems well on its way towards recovery. Yet it's often difficult for individuals to get an idea of which direction the economy is trending. A variety of groups try to portray the economy as either flagging or healthy depending on their own political agenda. In hopes of bringing a little clarity to the issue, here are some unbiased metrics of our progress over the past five years.

Unemployment


Before the financial collapse, US unemployment was declining from 6% and bottomed out at approximately 4.5%. The collapse caused unemployment to rise rapidly to nearly 10% in late 2009 and early 2010. Since then unemployment has fallen to its current level of 7.3%.

Clearly we're still well above the low unemployment enjoyed in the mid-2000s. However, we've seemingly made it almost halfway back to where we were before the fall of Lehman Brothers. While the declining rate does seem encouraging, there is a large caveat: approximately three percentage points less of the American public is working or attempting to find a job compared to 2008. Thus although fewer people are unable to find a job, fewer people are trying as well. The decrease in the unemployment rate is therefore not quite as positive as it first appears.
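
To see why the falling rate can overstate the recovery, here's a back-of-the-envelope sketch in Python. The population and participation figures are rounded assumptions of mine rather than official BLS statistics, and it assumes, for simplicity, that everyone who left the labor force had previously been counted as unemployed.

```python
# Illustrative, rounded figures only -- not official BLS data.
adults = 240_000_000

labor_force_08 = adults * 0.66          # assume 66% participation in 2008
unemployed_08 = 0.10 * labor_force_08   # ~10% unemployment at the trough

labor_force_13 = adults * 0.63          # participation down ~3 points
dropouts = labor_force_08 - labor_force_13
# Assume every dropout had been counted among the unemployed.
unemployed_13 = unemployed_08 - dropouts

print(f"2013 rate: {unemployed_13 / labor_force_13:.1%}")
# Prints ~5.7%: the headline rate falls sharply even though, in this
# toy scenario, not a single person actually found a job.
```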

GDP


Between 2008 and 2010 the US experienced negative growth in five out of eight quarters. In the subsequent twelve quarters (including projections for the current quarter) the US economy has contracted in one quarter, experienced essentially no growth in another, and grown at a rate of over 1% for the remainder.

This amounts to rather unequivocally positive news. In comparison, France has had growth rates of under 1% in nearly every quarter over the same period, including several periods of negative growth. Australian growth rates have been largely positive but again under 1% for the same period, and Japan experienced growth rates over 1% in seven out of twelve quarters (again with several quarters of negative growth). Relative to other developed nations, the US has shown comparatively strong growth over the past three years.

Arguably the strength of recent US growth is in large part due to the depth of the plummet in 2008 and 2009. As the US economy suffered far more during this period, it is not unreasonable that regression to the mean would appear as a strong recovery. There is merit to this idea; however, the reason for the recovery is irrelevant when attempting to establish the state of the economy in comparison to itself five years ago.

Inflation


The most commonly cited indicator of inflation or deflation in economics is the consumer price index. The CPI is an attempt to measure the price level of a general basket of goods. If the prices of these goods rapidly rise, then inflation is thought to be a problem. Similarly, if the prices rapidly fall, then deflation is a concern. Many groups have attempted to make the case that the government's current loose monetary policies will lead to a runaway inflationary spiral.
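
As a rough illustration of how such an index works, here's a minimal CPI-style calculation in Python with an invented two-good basket; the real CPI tracks thousands of items with survey-based weights.

```python
# Invented two-good basket; quantities are fixed at base-year levels
# so that only price changes move the index.
basket = {"bread": 50, "gasoline": 40}              # annual quantities
base_prices = {"bread": 2.00, "gasoline": 3.50}     # base-year prices
current_prices = {"bread": 2.10, "gasoline": 3.60}  # current prices

base_cost = sum(q * base_prices[g] for g, q in basket.items())
current_cost = sum(q * current_prices[g] for g, q in basket.items())

index = 100 * current_cost / base_cost              # base year = 100
print(f"CPI: {index:.1f}, inflation: {current_cost / base_cost - 1:.1%}")
# -> CPI: 103.8, inflation: 3.8%
```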

Although current monetary policy remains very loose (meaning the cost of money is very low and the supply very high) there is no evidence that inflation has risen to levels that are cause for concern.  Going forward continued monetary easing may or may not lead to inflation. However, at this point inflation rates are not alarming and are in fact essentially where economists would hope they would be.

Summary

Overall the US economy seems to be on the path to a slow but consistent recovery. Changes to the country's health care system are the largest looming issue, but otherwise little gives cause for real concern in the near future. Especially in comparison to most other developed countries, the United States seems to be doing well. Consumer confidence remains lower than mid-2000s levels but has been trending upward since 2009, and if trends continue it will return to pre-crisis levels around 2015.

While caution is always wise, particularly during a delicate recovery, there seems to be more reason to be bullish than bearish. As always the political climate could send trends askew, but failing that, my expectation is that economic indicators will continue to show improvement.

That's all for this week. Until next time stay safe and rational.

Friday, November 8, 2013

Prudent Investing

If there is one piece of advice you take away from this posting, I hope it's this: go play Chartgame. It plays in your browser without any downloads, and the experience will be more impactful than anything I could possibly say.

Chartgame is a simple game wherein you're presented with a random stock chart from the S&P 500. You must then predict whether the stock will go up (buy) or down (short) based on the stock's historical chart. Your result is compared to a standard buy-and-hold strategy. Even understanding how little predictive power is inherent in historical price data, I was amazed by how inaccurate my guesses were.

Chartgame illustrates a principle that many people refuse to believe: trying to "time the market" is just dressed-up guessing. Stock charts are nice to look at but offer little genuinely useful information to a trader.
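
You can make the same point with a toy simulation. Under the assumption that daily returns are independent random draws (so yesterday's chart says nothing about tomorrow), a chart-reading "guesser" who goes long or short each day has no edge, while buy-and-hold at least captures the market's drift. The drift and volatility parameters below are arbitrary choices of mine.

```python
import random

random.seed(1)

def simulate(days=250, trials=2000):
    """Average final value of $1 for a random guesser vs buy-and-hold."""
    guess_sum = hold_sum = 0.0
    for _ in range(trials):
        guess = hold = 1.0
        for _ in range(days):
            r = random.gauss(0.0003, 0.01)            # small drift, 1% daily vol
            hold *= 1 + r                             # always long
            guess *= 1 + random.choice([1, -1]) * r   # coin-flip long/short
        guess_sum += guess
        hold_sum += hold
    return guess_sum / trials, hold_sum / trials

guess_avg, hold_avg = simulate()
print(f"guesser: {guess_avg:.3f}, buy-and-hold: {hold_avg:.3f}")
# The guesser hovers near 1.0; buy-and-hold ends near e^(0.0003*250) ~ 1.08.
```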

Recently, a study published in Neuron indicated that high activation of the dorsomedial prefrontal cortex while viewing asset activity during a bubble is significantly correlated with being tempted into investing during the inflated-value period. To quote the study's author, Camerer: "The data suggest that during financial bubbles, participants are taking into account the intention of other players in the market (or of the market as a whole) while updating their value estimates, and that this effect is mediated by the interaction between the dorsomedial prefrontal cortex and ventromedial prefrontal cortex." In other words, those who are more likely to attempt to extrapolate what others are thinking are also more likely to be tempted by financial bubbles.

The science of Camerer's work is excellent.  While his research is unlikely to make you rich, it may help keep you from losing your shirt.  Here are the important lessons to take away.  

  1. Too much information can be harmful.  Sound investment should be done on the basis of fundamental data, not on the behavior of others.
  2. If you're prone to attempting to figure out what others are thinking, be particularly cautious of financial bubbles.  
  3. Try to isolate your investment decisions from past performance of an asset. Performance can be informative in context (for example if a company began to perform poorly after a management change) but should not be the basis of a decision.  
It's interesting to note that attempting to figure out other traders' strategies is often, but not always, harmful. For example, Bruguier has shown previously that individuals who can better derive others' intentions are also better at detecting insider trading and act more cautiously in markets with information asymmetries.

Camerer's article is available for free here. Notably, Camerer was recently awarded a MacArthur "genius" grant for his contributions to understanding financial behavior.

That's all for this week. Until next time stay safe and rational.

Thursday, October 31, 2013

Tired Ideas

Have you ever wondered why it's difficult to concentrate for a prolonged period of time?  Most everyone has stories of working long nights, struggling to complete a project that must be done by morning.  We've all experienced the gradual decline in cognitive abilities and decreased levels of concentration.  But what is the biological cause of such fatigue?

Unfortunately, we're not quite sure yet.  However, we've got some pretty good ideas.

The simplest explanation for mental fatigue is known as the depletion model. This model posits that the act of effortful thought consumes some resource (probably glucose) in the brain. As that resource is expended to the point of scarcity, we then feel what we refer to as mental fatigue. The depletion model is appealing in its simplicity, but it has become widely criticized in recent years and is currently thought unlikely to be an accurate representation of mental fatigue's cause.

One of the many problems with the depletion model is that it doesn't seem to fit some subjective experiences well. For example, most people have experienced waking up to a feeling of mental exhaustion. If the depletion model is accurate, then why would a relatively well rested brain have difficulty with resource depletion? Situations such as these have led some researchers to attempt to create a wholly psychological model of mental exhaustion. This paper discusses a psychological model wherein mental fatigue is viewed as a result of the opportunity cost of executive recruitment. In other words, mental fatigue is caused by using your brain's higher functions to do something unpleasant, such as difficult math problems or making conversation with an in-law. The more you'd rather avoid the task, the more rapidly mental exhaustion sets in.

Psychological models often present interesting jumping-off points and areas of discussion. However, their lack of a biological mechanism limits their usefulness a great deal; without some basis in neurobiology, such models are little more than thought experiments in terms of application.

The most recent and well-regarded models are in some ways the opposite of the depletion models. Rather than focusing on the depletion of some neurological resource, these models hold that mental fatigue is caused by the accumulation of some excess product in the extracellular area (the area between cells). Each time a neuron activates it releases chemicals into the extracellular space. While the brain has mechanisms to reclaim these chemicals, when they are present in excess (as during extreme mental exertion) these mechanisms may not be sufficient and an accumulation can occur. This accumulation may then lead to a variety of effects, including the unintended activation of astrocytes, which we then experience as mental fatigue.

A growing body of evidence seems to support this newer model. For example, a recent study found that during sleep these reclamation mechanisms go into overdrive and clear excess chemicals from the extracellular space. This would explain why sleep is usually mentally reinvigorating. Studies have also shown that in cases of brain damage with mental fatigue as a symptom, these systems are generally not working optimally due to inflammation.

Interestingly, while consensus seems to be moving towards these "excess" models, no one can seem to agree what exactly the brain is experiencing an excess of.  A recent study published in Science proposed that metabolites are the culprit while others suspect glutamate (a neurotransmitter) is the main cause.  Still, while the exact chemical responsible is a topic of debate most neuroscientists agree that an excess of something between cells is likely the cause of mental fatigue.

For the sake of brevity I have touched very lightly on each of these models. If you're interested in reading more regarding any of them the following links will be of interest:

Depletion: http://www.psy.fsu.edu/~baumeistertice/egodepletion.html
Psychological/Opportunity Cost: http://www.sas.upenn.edu/~duckwort/images/publications/KurzbanDuckworthOpportunityCost.pdf
Excess: http://www.sciencemag.org/content/342/6156/373
http://www.mf.gu.se/english/research/our-explanation-for-the-origin-of-mental-fatigue-at-a-cellular-level/

That's all for this week. Until next time stay safe and rational.

Thursday, October 24, 2013

On the Science of Social Science

There have been rumblings of late in the economics community about just how scientific economics really is. In particular, Raj Chetty opined here that economics is indeed a science, while Krugman rebutted here that while economics may or may not be a science, economists are rarely scientists.

The heart of the debate revolves around three issues in particular. Firstly, economists are often wrong. Secondly, economists' predictions are often difficult to test. Finally, many economists cling to dogmatic ideas that the consensus agrees are unlikely.

I have applied this same line of thinking to meteorology and concluded that as goes economics, so goes the science of weather: if one is deemed by the public to be a science then both should be, and vice versa. After all, weather forecasts are often incorrect, much like economic forecasts. Even more damning, weather forecasts predict only a couple of weeks ahead, while economists are asked to predict events months or years ahead of time.

Additionally, meteorological models are supremely difficult to test. In fact, the only real evidence for their validity is the aforementioned, often erroneous forecasts. This is in no way the fault of meteorologists, of course; it's simply prohibitively difficult (perhaps impossible) to create a lab large enough to house full-size weather systems. Similarly, unless a council of economists comes to power over a decently sized nation, macro-level experimentation is impossible. Still, no one questions the validity of meteorology as a science, while economics is constantly questioned.

And if a minority population clinging to outdated ideas is reason to invalidate a discipline as a science, then all sciences will soon be declared a farce. It would not take much research to find physicists that still herald the idea of the aether, astronomers that believe in geocentrism, and biologists that decry evolution as a falsehood. In any science, minority opinions are not only normal but necessary and healthy. Claiming that some economists holding to their beliefs in the face of contrary evidence invalidates economics as a science is ridiculous.

The fact is that economics is a science. It is true that many economists are not scientists, but that is a trend that is changing day by day. It is only in the past several decades that the tools for large-scale analysis have arisen, and it takes a commensurate amount of time to train new economists to use them. A generation of economists rooted in theory and thought experiments is moving on, and a new generation has begun to apply the modern tools of rigorous analysis and experimentation to a still-growing discipline. Regrettably, many seem determined to keep their gaze fixed upon the past rather than the developing future of the science.

That's all for this week. Until next time stay safe and rational.

Friday, October 4, 2013

A House Divided

If you're an American (and very likely if you're not) you know by now that the US government has shut down over budget disagreements in the legislature. What this "shutdown" entails is approximately 800,000 workers neither working nor being paid, while "essential" workers continue to report for work and draw their paychecks. The question is, "How big a deal is the shutdown really?"

Economically speaking, as long as the budgetary problems are resolved relatively quickly, there is unlikely to be a substantial impact. During previous shutdowns, furloughed workers have historically received back pay upon returning to work; thus they will have as much money as usual during the upcoming holiday season. Obviously all the work they would have accomplished during the shutdown is lost forever, but in the grand scheme of the US economy their production is relatively small. The larger economic impact is likely to be the continuing proof that the US political system is increasingly unable to get its house in order.

For individuals, however, it's a very different story. Nearly seventy-five percent of the National Institutes of Health's (NIH) staff has been furloughed. Potentially life-saving clinical trials have ground to a halt. Experiments that require daily monitoring have been hastily shelved in efforts to preserve cell lines. Grant money for federally funded projects is in many cases inaccessible to those receiving the awards. Even important federally maintained databases are either shut down or left unattended.

Similar stories echo throughout the research community. The Bureau of Economic Analysis and Census Bureau websites have been shut down for the duration of the furlough. These organizations maintain databases that are essential resources for a great deal of economic research. Without access to them, many research projects will be unable to progress.

Of course, eventually the legislature will resolve their differences and such resources will become available again. Among the hardest hit by a prolonged delay, however, will be NASA. Due to the importance of certain celestial alignments, any delay in mission launch schedules can set back missions by years as NASA waits for orbits to come together again. The frustration of working on a mission for years only to be told a forced furlough would cause the launch window to be missed must be immeasurable.

The longer the shutdown drags on, the more profound the effects on research within our country will become. Luckily the NIH grant cycle has just ended, so there is a little time to resolve the congressional dispute without impacting the funding cycle too severely. However, if an agreement is not reached within a few weeks, funding for new and existing programs will very quickly begin to evaporate. These programs are generally funded for the time being but require renewal, which they'll be unable to obtain. We can only hope that wiser heads prevail in Washington and the shutdown ends soon.

That's all for this week. Until next week stay safe and rational.

Friday, September 27, 2013

On Reasoning and Familiarity

One of the major themes of behavioral economics is the idea that much of human reasoning uses shortcuts known as heuristics. For example, imagine you were asked to approximate the number of marbles in a jar with reasonable speed and accuracy. You have a number of possible strategies. You could individually count each marble, but this approach is lengthy and difficult. You could simply take a wild guess, but that's likely not very accurate. Alternatively, you could count what seems to be a third of the marbles and then multiply that result by three. That will take only a third as much time and is likely to be reasonably accurate. This third approach is a heuristic, a mental shortcut that usually gives you an answer that is "good enough".
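
A quick sketch in Python makes the trade-off concrete. The jar size and the imprecision of eyeballing "a third" are assumptions of mine, chosen only for illustration.

```python
import random

random.seed(3)
true_count = 347   # marbles actually in the jar (made up)

# Eyeballing "a third" of the jar is itself imprecise, so the counted
# portion varies around one third each time the heuristic is applied.
for trial in range(5):
    portion = random.uniform(0.28, 0.38)        # roughly a third
    counted = round(true_count * portion)
    print(f"counted {counted:3d}, estimate {3 * counted:4d}, true {true_count}")
# Estimates land within about 20% of the truth for a third of the
# counting effort -- "good enough" for most purposes.
```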

However, heuristics often lead to consistent and reproducible errors in reasoning. Take this test for example. You are presented with four cards. Each card has a letter on one side and a number on the other. Which cards should you turn over to test this hypothesis: all cards with vowels on one side have an even number on the other? The cards you're given are:

E   G   2   5

Take a moment and decide which cards you'd flip. 

Before getting into the answer, let's highlight a few key things regarding the hypothesis: all cards with vowels on one side have an even number on the other. The key words that likely jump out at you are "vowels" and "even numbers". Notably, odd numbers and consonants are not mentioned.

Here's another version of roughly the same puzzle; see if your answers differ at all. Imagine you run a bar which serves beer and soda. Everyone in the bar who is over 21 may have beer, but anyone under 21 must drink soda. You only know either a patron's drink or age, but not both. Which of these patrons must you check to ensure no one under 21 has beer? (Officially, in order to mimic the first puzzle, our hypothesis is: if someone is drinking alcohol, they must be over 21.)

Beer Drinker, Soda Drinker, 30 years old, 15 years old


This puzzle is logically the same as the first, but most people come to a rather different conclusion. Obviously you must check the age of the beer drinker. Clearly you don't need to check the age of the soda drinker or the drink of the 30 year old. However, you must check the beverage of the 15 year old to ensure he isn't drinking beer.

Similarly, most people correctly suppose that you should flip over the E card in the first puzzle. After all, if you find an odd number on the other side you have disproven the hypothesis. The G doesn't need to be flipped because our hypothesis doesn't care about numbers on consonant cards. Most people wrongly choose to flip the 2 card because the hypothesis mentions even numbers. However, suppose you found a consonant on the other side? We just established we don't care about consonant cards, so that wouldn't be helpful. If we find a vowel, it seems to lend credence to our vowel/even hypothesis, but it neither proves nor invalidates it, so we've ultimately learned nothing. Finally, most people choose not to flip the 5 card. The hypothesis doesn't mention odd numbers, so it would seem unnecessary. However, if we flip the 5 card and find a vowel, we have invalidated the hypothesis that vowel cards have even numbers. Thus this card must be checked along with the E card.
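
For the logically inclined, the falsification rule can be checked mechanically. Here's a short Python sketch that flags exactly the cards whose hidden side could falsify the hypothesis (letter cards hide a number, number cards hide a letter):

```python
VOWELS = set("AEIOU")
visible = ["E", "G", "2", "5"]

def must_flip(face):
    """A card needs flipping only if its hidden side could falsify
    'every vowel card has an even number on the back'."""
    if face.isalpha():
        # A consonant can't falsify the rule; a vowel might hide an odd number.
        return face in VOWELS
    # An even number can't falsify the rule (it says nothing about what
    # sits behind evens); an odd number might hide a vowel.
    return int(face) % 2 != 0

print([card for card in visible if must_flip(card)])   # -> ['E', '5']
```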

If you answered the first puzzle incorrectly, don't feel too bad. Over 90% of those questioned make the errors listed above. It's entirely normal for the brain's heuristics to lead you astray in this case.

What is interesting is that far, far fewer people make mistakes on the second puzzle. As mentioned, the drinking problem is logically the same; only letter/number combinations have been exchanged for drink/age combinations. Yet, seemingly due to the familiarity of the situation, individuals are much more easily able to solve the puzzle correctly.

That's it for this week. Until next week stay safe and rational.

Thursday, September 19, 2013

The Impact of Minimum Wage

During recent travels abroad I was confronted with sky-high prices in Australia. The extreme costs of some items weren't a surprise, but they got me thinking about the root cause of such price differences. Locals suggested high taxes, transport costs, low population density and high minimum wage laws as possible root causes. All of these likely drive up prices for at least some products, but in reality only one of them is a major cause. Sales tax, for example, isn't all that much higher than in the US (10% for non-food items and no tax on grocery items). Transport costs are irrelevant for many items (digital downloads are still expensive) and a small portion of the cost for others. Finally, overall population density doesn't seem terribly relevant for the highly populated coastal areas, which still maintain high prices. In reality, high prices are likely a product of a high median wage, which research suggests is about 40% higher for a full-time worker in Australia than in the United States.

Of course, there was one suggested reason which has yet to be addressed: is the high Australian minimum wage driving prices up? The Australian federal minimum wage is currently $16.37 in Australian dollars, or about $15.32 in US dollars, as opposed to the US federal minimum of $7.25 (state laws often demand a higher minimum than the federal one). Clearly this is an extreme difference, but is it a large part of the reason for Australia's high prices?

There are a number of effects of minimum wage laws. As with many economic ideas, that these effects occur is generally agreed upon; the degree of the effects, however, is a topic of hot debate. The consensus is that minimum wage laws:
  • Increase unemployment: There must be some set of workers that a business is willing to pay X for but is not willing to pay Y. Thus increasing the required wage from X to Y will result in the loss of their jobs.
  • Increase prices or reduce profits: Increasing the cost of labor necessarily increases the cost of production. The firm must either accept reduced profits or pass these costs on to the consumer.
The degree of a potential price increase clearly depends upon the amount of minimum-wage labor a given industry employs. As food service is an industry that uses a larger-than-average proportion of minimum-wage workers, it should be impacted significantly by minimum wage differences. In fact, studies have indicated that a 10% increase in the minimum wage results in between a 0.5% and 5% increase in food prices and between a 0.5% and 1% increase in overall prices.

Given that Australia's minimum wage is approximately 100% higher than that of the United States, we can expect 5% to 10% higher overall prices. Certainly this is a significant impact and a major contributor to the higher prices Australians must pay. Of course, they pay these higher prices with their higher wages and thus aren't too burdened by prices that would shock many Americans.

An important note is that most studies investigating the impact of the minimum wage on prices look at a single moderate increase, as opposed to a large difference such as that between the US and Australia. It is highly likely that the price effects of minimum wage increases scale in a non-linear fashion. In other words, studies show a 10% increase in the minimum wage raises price levels by 0.5% to 1%, but multiplying that wage increase by a factor of ten is likely to result in a much larger price increase than 5% to 10%. This is because larger increases in the minimum wage impact more workers, causing a larger total effect.

As an example, imagine the world contained one thousand workers whose wages follow a fairly normal distribution (a bell curve) within a range of one to a hundred dollars an hour. Further, assume all workers remain employed for the purpose of this example. If the government institutes a minimum wage of five dollars an hour, there is little in the way of price effects; after all, very few workers earn below five dollars an hour anyway, so the impact is small. However, if the government required that workers be paid fifty dollars an hour, the price effect would be enormous. Nearly half of the population would see their wages increase, some by an order of magnitude.
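
Here's that thought experiment as a small Python simulation. The wage distribution is my own stand-in (a bell curve clipped to the $1-$100 range), and, as in the example, everyone keeps their job.

```python
import random

random.seed(7)
# 1,000 workers with roughly bell-shaped wages, clipped to $1-$100/hour.
wages = [min(100, max(1, random.gauss(50, 20))) for _ in range(1000)]

def floor_effect(wage_floor):
    """Workers lifted by the floor, and the resulting rise in the wage bill."""
    affected = [w for w in wages if w < wage_floor]
    extra_cost = sum(wage_floor - w for w in affected)
    return len(affected), extra_cost / sum(wages)

for floor in (5, 50):
    n, pct = floor_effect(floor)
    print(f"${floor:>2}/hr floor: {n:3d} workers affected, wage bill up {pct:.1%}")
# The $5 floor barely registers; the $50 floor lifts roughly half the
# workforce and raises total labor costs by double-digit percentages.
```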

An interesting aspect of minimum wage laws is that they are simply a disguised wealth transfer. Everyone in society pays higher costs, which are then distributed to the lowest-paid workers via the required minimum wage. It's a rather roundabout system which unfortunately prices some workers out of the market (resulting in higher unemployment) but is functionally similar to social security or unemployment benefits.

Ultimately, Australians' higher prices are a result of their higher earnings. While the other factors mentioned do make a difference, those differences pale in comparison to the price increases caused by higher minimum (and, moreover, median) wages. Unfortunately, visitors from countries with lower wages are still stuck paying the inflated Australian prices.

That's all for this week. Until next week stay safe and rational.

Thursday, August 22, 2013

The Blend of the World

If you have a car you've likely noticed signs at the gas pump indicating that ethanol has been added to your gasoline. This rise of blended fuel is a product of the government's attempt to increase the use of sustainable fuels and wean America off foreign oil. However, as it turns out, mistakes made when this plan was conceived have led us to a rather ridiculous situation.

Every year refineries are required to blend a certain amount of ethanol into the gasoline they produce. The standard blend rate is 10% ethanol. This rate is used because essentially all gasoline engines can run on 10% ethanol gas without any problems (and, perhaps more importantly at this point, such gasoline is subject to a forty-cent-per-gallon reduction in taxes).

In 2007, when the ethanol mandates were created, Congress anticipated a constantly increasing rate of gasoline consumption. In the years since, however, gasoline consumption has instead declined by approximately 5%. The blending mandates were dictated in terms of raw gallons, so as total gasoline consumption has declined, the required amount of ethanol to be blended in has risen. Unsurprisingly, this has brought the industry to the point where it's difficult to shove the required amount of ethanol into the gasoline supply while keeping the blend at the 10% rate.
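
The arithmetic of the problem is simple, even if the round numbers below are my own inventions rather than the actual Renewable Fuel Standard gallon targets:

```python
# Invented round numbers -- the real RFS volumes differ.
gasoline_2007 = 140e9                 # gallons of gasoline consumed
mandate_2007 = 0.10 * gasoline_2007   # ethanol mandate ~10% of that

gasoline_now = gasoline_2007 * 0.95   # consumption down ~5% since 2007
mandate_now = mandate_2007 * 1.30     # mandated gallons kept climbing

print(f"Implied blend rate: {mandate_now / gasoline_now:.1%}")
# -> 13.7%: past the 10% "blend wall" most engines can tolerate.
```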

Refineries are able to buy what are known as Renewable Identification Numbers, or RINs, in order to comply with ethanol mandates. However, as producers and speculators have realized the impossibility of meeting increasing ethanol blend requirements within the current system, RIN prices have skyrocketed.

The EPA has the ability to issue a special waiver which reduces the ethanol mandate each year; however, it is only permitted to issue the waiver on a yearly basis. Moreover, eliminating or permanently reducing the ethanol mandates would rapidly devalue RINs, leaving producers and speculators with substantial losses. Leaving the system as is requires special EPA intervention each year, or a likely increase in gas prices as producers pass the additional cost of RINs on to consumers. In any conceivable scenario some party incurs substantial costs.

It's easy to see the rationalization of the legislators when these mandates were created. After all, everyone agrees renewable energy is a net positive. Had they dictated that ethanol production simply pace gasoline production at a 10% rate we likely wouldn't be in this situation.  However, we'd also simply be treading water in terms of moving our energy supply towards sustainable sources.  In this case their policies certainly haven't worked out, but I find it difficult to fault their intentions.  Of course the whole thing could have been avoided if they'd decided to take the direct route and simply raised gasoline taxes (thus reducing consumption and creating further incentives for fuel efficient or fuel flexible vehicles).

One special note: the economics of using ethanol as a fuel source at all is a topic of great debate which I have purposefully neglected. Please do not construe this omission as either an endorsement or a condemnation of the practice.

That's all for this week. My apologies for the horrid pun in the title. Until next week stay safe and rational.

Wednesday, August 14, 2013

The Future of Labor

Futurists have been predicting how technology would revolutionize labor for well over a hundred years. Ideas of robot servants, self-driving taxis, and work-from-home interfaces have been harped on time and again but have, as of yet, been hard to find in reality. As recently as the twentieth century, prognosticators believed that the American workweek would drop to thirty hours or less, with an accompanying ten weeks of vacation annually. Unfortunately for most of us, we haven't arrived at that particular utopia yet.

So what will the future of the labor markets look like? If I had to guess, I'd put my money on a model like HourlyNerd. HourlyNerd is a service which matches highly skilled business consultants with small businesses. Businesses submit their projects and individuals apply to complete them, along with a requested hourly rate. The small business then selects their "nerd", who completes the project. Services such as accounting, financial advice and business consulting are available at an average rate of $35-$50 per hour.

If a project is going to take approximately fifty hours, then the HourlyNerd service can complete it for approximately $2,500. It's easy to see that's far cheaper than hiring a full-time professional of the same skill level. A truly large firm might require a market penetration analyst full time, but realistically, in most cases highly skilled, highly specialized people probably apply their primary skill set thirty percent of the time or less at a full-time job. The remainder of their time is either spent idle (not accomplishing meaningful work) or performing tasks which a low-skilled worker could easily accomplish.
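
A rough break-even comparison makes the math concrete. All figures below are illustrative assumptions of mine (consistent with the rates above), not HourlyNerd data.

```python
# All figures are illustrative assumptions, not HourlyNerd data.
full_time_cost = 100_000     # salary plus benefits for a specialist
hours_per_year = 2_000       # a standard full-time year
utilization = 0.30           # share of time spent on specialist work
marketplace_rate = 50        # dollars per hour for outsourced help

specialist_hours = hours_per_year * utilization   # ~600 hours actually needed
outsourced_cost = specialist_hours * marketplace_rate

print(f"in-house: ${full_time_cost:,}  vs  outsourced: ${outsourced_cost:,.0f}")
# -> roughly $100,000 vs $30,000 for the same specialist hours.
```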

Paying high costs for a highly skilled full-time worker can be thought of as a retainer of sorts. However, as the ability to connect with hourly professionals increases, the question of whether a retainer is necessary answers itself. It's a simple decision when the question amounts to, "Do I want to pay $100,000 a year plus benefits for the same information I could get for $30,000 from an outsourced worker?"

The obvious disadvantage of such services is a decreased level of employee accountability. After all, there's very little dread at the prospect of termination when your position ends upon project completion regardless. Poor work of course diminishes the likelihood of being engaged for future assignments, but it seems unlikely that such a disincentive is as potent as the threat of termination from a full-time position.

So here's the official prediction: services that match highly skilled workers with short-term or hourly employment will continue to grow. Small businesses will pick up and drop these workers as needed, thus gaining many of the advantages available to larger firms while keeping their costs relatively low.

Information about HourlyNerd can be found at their website http://hourlynerd.com/.

More economics next week. Until then stay safe and rational.

Wednesday, August 7, 2013

The Brain Game

Over the past few years there's been a large push to "map the human brain." But what exactly are we trying to accomplish with a brain map and why is it such a difficult task?

In general a brain map refers to one of two things: an effort to define the functions of different brain regions, or a digital representation of actual neuronal connections. Either of these tasks is daunting, though for different reasons. However, today I'm going to focus on the type of brain map that attempts to actually model the connections of neurons.

Recently MIT set out to use the power of gaming and crowd-sourced citizen science to assist in creating its brain map via Eyewire (it's actually a mouse retina, not a human brain, but trust me that the difference isn't all that important). Eyewire is a simple game wherein users color in a neuron that has been defined by a computer algorithm. The algorithm is somewhat prone to errors, so the user's task is primarily focused on correcting the algorithm's mistakes. Data gathered is then fed back into the program in an effort to improve its ability to define neurons.

So what does this have to do with anything?

Eyewire gives us a very precise conceptualization of neurons. Take a moment and imagine what a neuron looks like in your mind.  If you're an average person you probably envision something like this:


It's a pretty common Hollywood representation of a neuron: a central body with three or four offshoots running to another central body. It looks relatively simple and easily mapped. Now here's the representation of neurons that Eyewire has created over the last several months:



Each color in that image represents a separate neuron.  Further, these aren't the only neurons present in this space, just the ones processed by Eyewire so far (when the project is complete even more neurons will be squeezed into the same image). Now imagine attempting to untangle those seven neurons when they're not only all the same color, but also surrounded by other non-neuron cells. It's a daunting task with only a few cells. The difficulty rises even further when considering the human brain consists of billions of neurons. 

Eyewire, with the assistance of thousands of users, has defined only a handful of cells over the past year. These cells are represented with outstanding accuracy and detail, but clearly at this rate of progress a full brain map would take a staggering amount of time. 

Of course the entire goal of Eyewire is to improve a computer's ability to map neurons for us. If the project is successful (and there's no reason to believe it won't be), computers will be able to process the image data far faster than their human counterparts and with nearly as much accuracy. In that event neuroscience may produce a neuronal brain map far sooner than was thought possible only a few years ago.

If you're interested in Eyewire you can find out more at their website Eyewire.org. It's worth a look for the images alone.  More economics, neuroscience and everything else next week. Until then stay safe and rational.

Thursday, August 1, 2013

The Completeness of Contracts

Imagine for a moment that you and I are about to make a deal. I need $10,000 today in order to secure a fantastic bargain on a piece of machinery that will greatly improve my production capabilities. In return I promise to repay you $1000 a month every month for one year. We shake hands, you give me the money, the machine is purchased and everyone is happy. But... what if?

What if the machine breaks, I return it and receive the money back? Can I just return the $10,000 to you the next day?

What if my company goes bankrupt? How are you going to recoup your losses?

What if the value of the dollar collapses and the real purchasing power of your compensation amounts to far less than the original $10,000?

What if the machine exceeds all expectations and I earn incredible surpluses? Are you entitled to any additional gains?

What if I find myself with excess funds at some point during the year? Can I pay you off early and if so must I pay the full $12,000 or can I pay a prorated amount based upon how much of the year has passed?

These are only a few of the contingencies that could easily be written into our original contract.  However, the question of whether we should bother is surprisingly difficult to answer.

In contract theory there exists a spectrum of contract completeness ranging from complete to incomplete. A complete contract would cover every possible eventuality; in practical terms such a contract is impossible, as negotiating every conceivable event would be cost prohibitive. On the other end of the spectrum, an incomplete contract tends more towards a "you take care of me, I'll take care of you" sort of arrangement. In reality, nearly all contracts fall somewhere in between, with important details clearly spelled out while many possible contingencies are ignored. A good example to illustrate the difference between the two ends of the spectrum is the difference between marriage and a prenuptial agreement. Marriage is generally a very loosely defined agreement between two people that they will live together and attempt to improve each other's lives; there's not much provided in the way of specifics beyond social norms such as monogamy. A prenuptial agreement, on the other hand, is usually very specific. It details exactly what each party will come out of a marriage with in the event of separation. Thus, in general, a prenuptial agreement is a far more complete contract than a marriage agreement.

The question is then, are more complete contracts better?  Surprisingly, the answer very often seems to be no.  Every contingency built into a contract has an associated negotiation cost. Attempting to account for every possible eventuality piles on negotiation costs for events which are unlikely to ever occur.  Further, contingencies can often signal inaccurate or misleading information between parties.

For example, suppose you needed to contract a babysitter. You both agree that she will watch your children five nights a week (Monday to Friday) from 4 pm until 8 pm. She'll be paid $10 an hour, which amounts to $200 per week. Now suppose you add a stipulation that if you're running late, she's expected to stay until 10 pm, with the additional hours from 8 until 10 compensated at a rate of $12 per hour. How might this one contingency influence her view of the contract?

Most obviously, such a contingency signals to her that it's likely you'll occasionally run late. Whether her belief is accurate or not is irrelevant; the addition of this stipulation makes your tardiness appear likely to her. Given that you felt a need to cover the eventuality, as well as the correspondingly low additional compensation, it would seem as if you intend to be late fairly regularly. Contrast this contract with one wherein her compensation rises to $25 per hour after 8 pm. Such a contract sends a far different message via its much higher compensation.
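
To put numbers on the signal, here's the weekly cost of the contract under each overtime rate, assuming (hypothetically) two late nights per week:

```python
BASE_WEEKLY = 5 * 4 * 10   # 5 nights x 4 hours x $10/hour = $200

def weekly_cost(overtime_rate, late_nights=2, extra_hours=2):
    """Base pay plus overtime; two late nights/week is an assumption."""
    return BASE_WEEKLY + late_nights * extra_hours * overtime_rate

for rate in (12, 25):
    print(f"${rate}/hr after 8 pm: ${weekly_cost(rate)} per week")
# -> $248 vs $300: the $25 rate costs more, but it signals that late
#    nights are an imposition rather than a plan.
```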

An incomplete contract would neglect to specify this eventuality entirely, and the involved parties would renegotiate or resolve the issue later when it arose, resulting in no signaling during contract creation. If the two parties' beliefs regarding appropriate compensation were similar, there would be little in the way of losses, and an incomplete contract (in this case) would end up being superior. Conversely, if beliefs regarding compensation were very different, there would be a difficult negotiation process and the more complete contract would have been better.

An extension of this signaling concept is the introduction of reference points. Suppose you needed your babysitter to work a weekend a month from now. What is the appropriate compensation for weekend work? In the case of an incomplete contract, you and the babysitter would come together to negotiate an appropriate price. Again, if beliefs about what was appropriate were similar, the negotiation process would be smooth and little in the way of losses incurred. However, if beliefs were widely separated, negotiation costs would quickly rise.

In the case of a more complete contract, existing stipulations often provide reference points for negotiating parties. Returning to the previous example of late-night hours, how might the different levels of compensation ($12 vs $25) influence how much the sitter expects for weekend work? It would seem that weekend hours are similar to late-night hours in that both are beyond the "normal" scope of the original contract. Thus the sitter will likely expect to receive compensation similar to whichever late-night rate is used. A single contingency in a more complete contract can thus have far-reaching implications for future negotiations.

In general incomplete contracts seem to be better when parties have reason to trust each other and the balance of power is relatively even.  Complete contracts generally are superior when one party requires protection from the other (an uneven balance of power) or there is reason for mistrust.  A landlord for example holds a great deal of power over their tenants and thus a more complete contract is appropriate. Meanwhile a "Mow my lawn and I'll take you to dinner." deal is far more appropriate between neighbors.

The prevailing theme is that more specificity is not always better. At times, under the right conditions, it's far more efficient to simply trust and deal with issues as they arise.  In fact, far more of our society relies on implied incomplete contracts than complete contracts. Every time someone pays for gas they didn't prepay for, an incomplete contract is fulfilled.  Every birthday present, returned favor, IOU repaid, or split lunch check is an incomplete contract fulfilled. Most of them go unspoken and unrecognized, but incomplete contracts are the currency of our day to day interactions.

As always more economics next week. Until then stay safe and rational.




Thursday, July 25, 2013

Martyrdom and Charity

One of the main forces driving behavioral economics is a desire to explain the irrational behavior of individuals. This week I'll be discussing a paper authored by Christopher Olivola and Eldar Shafir which delves into one of the more heroic quirks of human behavior, martyrdom.

Before getting into the research let's establish some premises.  First of all, rational individuals should prefer pleasurable activities over painful activities. This premise is a main driver of nearly all social sciences and is taken as a given for the purposes of this posting (if it's disproved there are far greater implications than the inaccuracy of this article). Secondly, rational individuals should be willing to pay more (or at least the same amount) for activities they prefer. Combined, these premises amount to "Individuals should be willing to pay more for pleasurable activities than painful activities all other conditions being equal."

If these premises are correct, then why would an average person pay more to participate in a lengthy run than to enjoy a picnic in the park?  After all, picnics are generally pleasant, provide nutrition, and are far less strenuous than running.  Yet Olivola and Shafir found that under certain circumstances study participants were willing to pay more for the strenuous running option than the pleasant picnic.

These researchers conducted four experiments investigating how the difficulty or unpleasantness of a charity event influenced donation amounts. The first experiment was a simple questionnaire provided to a bit over one hundred college students. Each questionnaire described a future charity event, either an outdoor picnic or a five mile charity run. In the case of the picnic, participants were told that to attend they would be required to donate any amount of money greater than zero dollars. This voluntary admission fee would be matched by a third party. In the case of the charity run, participants were informed that they would have to donate to attend, but their donation would only be matched by a third party if they completed the run.

Participants were then asked two simple questions: "Would you attend the fundraiser?" and "How much would you donate to attend the fundraiser?" Recalling our previous premises, we would expect that more people would be willing to attend the picnic (as it is pleasurable rather than strenuous), and that participants would be willing to donate more to attend the picnic than the run.  As expected, significantly more participants responded that they would attend the picnic than the charity run (86% versus 76%). However, those who responded that they would participate in the run claimed they would donate nearly twice as much as picnic attendees ($23.87 versus $13.88).  It's perplexing why anyone would pay more for an experience which is generally considered inferior (as indicated by participation rates).  Luckily the remainder of Olivola and Shafir's research lends some insight.

The second martyrdom experiment involved participants playing a public goods game. In this game participants were given five dollars and asked if they would contribute any of the gifted funds to a common pool. Any funds they did not contribute to the pool they kept. Contributed funds were doubled and dispersed to the entire group at the end of the experiment. Group sizes ranged from three to five, thus in no case was contributing to the pool beneficial to the individual (a contribution of $1.50, for example, would be doubled to $3.00 then divided among three group members, with each receiving a dollar; the contributor is thus left $0.50 worse off while the rest of the group gains a dollar each). Participants were not allowed to speak with each other or in any other way communicate, in order to avoid strategic group decisions.
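To see why contributing is always a personal loss in this setup, here's a minimal sketch of the payoff arithmetic, assuming the $5 endowment and doubling rule described above.

```python
# Payoff arithmetic for the public goods game described above: the pool
# is doubled and split evenly, so with more than two players any
# contribution costs the contributor more than their share returns.

ENDOWMENT = 5.00
MULTIPLIER = 2.0

def payoff(my_contribution, others_contributions, group_size):
    pool = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pool / group_size

# The $1.50 example from the text, in a group of three where the
# other two members contribute nothing:
print(payoff(1.50, [0.0, 0.0], 3))  # 4.50 -- fifty cents worse off
print(payoff(0.00, [0.0, 0.0], 3))  # 5.00 -- keeping everything dominates
```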

The public goods game is a common experimental tool used by economists.  However, this example had an interesting difference.  Half of the participants were informed that if they contributed to the group pool they would be required to undergo an unpleasant cold pressor task (cold water immersion of the hands for 60 seconds) or their contribution would not be doubled. These participants could easily avoid the cold pressor task by simply keeping all of their gifted funds if they desired.

Despite the pain and risk involved, individuals in the cold pressor groups contributed significantly larger amounts.  In fact the cold pressor group contributed their entire $5 gift 67% of the time as opposed to 28% of the time for the control group.  What is it about an unpleasant condition that drives a willingness to contribute? Olivola and Shafir conjectured that perhaps a difficult or strenuous fundraiser might be a social cue that a cause was worthy of the difficulty, and by association greater contributions.  However, in this experiment participants were asked what their expectations were for contributions from others in their group. No significant difference was found between those who believed everyone was receiving a cold pressor task and those who had no knowledge of the task. If difficulty acted as a social cue of value that drives donation amounts, we would have expected to find a difference between treatment groups.

The third martyrdom experiment was extremely simple both conceptually and in implementation. Subjects were presented with a questionnaire very similar to the one discussed in experiment one. The primary difference was that the length of the charity run varied between questionnaires. The goal of this experiment was to determine whether the degree of adversity had a significant impact on contribution rates or if the phenomenon was more binary in nature. Results indicated that the length of the charity run had little effect on contribution rates.  The researchers interpreted this to mean that the degree of adversity is of minimal importance compared to the presence or absence of an unpleasant condition.

The final experiment sought to establish whether the martyrdom effect was partially due to a sort of empathy with victims of a disaster.  In other words, would arduous tasks still increase donations if the charity cause was not meant to relieve suffering, but instead to promote pleasure? In this study participants were again given a simple survey which represented one of four treatment groups. Each survey informed participants that a charity, either a public park construction project or an effort to feed starving children, was to host a charity event. The event was represented as either a public picnic or a thirty hour fast. Thus the four possible surveys given to participants were park project hosting picnic, park project hosting fast, starving children relief hosting picnic and starving children relief hosting fast.  With the exception of the change in questions the procedures remained the same as in experiment one.

If the martyrdom effect is in fact based upon an empathetic link with victims of disasters then we would expect to see that in the case of fasting for child hunger, donations would be much higher than in the case of fasting for public park construction. In fact, this experiment demonstrated exactly that. For the public park construction project the picnic option resulted in a larger donation amount on average than the fasting option. Contrarily, for the hunger relief project the fasting option yielded higher donation amounts. Between projects the picnic option showed no statistical difference in donation amounts but the fasting option nearly tripled. This may indicate (though evidence is weak) that a pleasurable experience such as a picnic is minimally influenced by charity cause while a strenuous task such as a fast or run will only yield large donation amounts when combined with a charity which seeks to relieve suffering.

Curiously, the results of experiments two (the cold pressor public goods game) and four (the picnic versus fast survey) seem somewhat contradictory. Experiment four seems to show that the martyrdom effect is most profound when individuals believe their efforts go towards relieving the suffering of others. However, in the cold pressor experiment it's hard to imagine that participants believed their contribution to the community pool relieved any suffering. One possible interpretation of this curiosity is that the charity cause affects the magnitude of the martyrdom effect and that fasting has a negative real value.  In experiment four a picnic was valued at approximately $15 regardless of cause. This value represents some private value of the picnic plus a value attributed to the charity relief effort.  In other words, if you give $15 you likely value the picnic at some lesser amount and give the remainder as an act of generosity. Viewing the fasting option from a similar perspective, but starting with a negative value, results in a model which agrees with both experiments two and four. Assume that normally someone would have to pay you $20 in order to induce you to participate in a thirty hour fast.  Then when asked how much you would pay to participate in the same fast for the benefit of a public park you answer $10. This is a $30 swing from the implied value of the fast alone, but still a smaller amount than the $15 donation for a picnic. In other words we still see a larger net change in value due to the martyrdom effect, but a smaller overall donation.  When the charity cause is changed to one which alleviates suffering, the martyrdom effect increases in magnitude, resulting in a higher overall donation while the picnic option donations remain unchanged.  This theory is consistent with the data presented by Olivola and Shafir but requires experimental verification to be confirmed. Interestingly, it suggests that the martyrdom effect is greater than most of these experiments show, as the value change should be measured from the negative value associated with the unpleasant experience rather than from zero.
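A quick worked version of that decomposition may help, using the hypothetical dollar amounts from the paragraph above. The $5 private valuation of the picnic is my own added assumption; the $15 and $10 donations and the -$20 fast value come from the text.

```python
# The value decomposition sketched above: a donation equals the donor's
# private value of the experience plus pure generosity toward the cause.
# The -$20 fast value and the $15/$10 donations come from the text; the
# $5 private value of the picnic is an assumed illustration.

def generosity(donation, experience_value):
    """The portion of a donation not explained by the experience itself."""
    return donation - experience_value

print(generosity(15.00, 5.00))    # 10.0 -- picnic: modest generosity
print(generosity(10.00, -20.00))  # 30.0 -- fast: more generosity despite the smaller donation
```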

It's important to note that the martyrdom effect did not seem to have much effect on whether people donated, only how much they donated.  In fact, unsurprisingly, in most experiments fewer people chose to participate in the unpleasant tasks. However, it is a curious quirk of human nature that under some circumstances we are willing to pay more to suffer than we would pay for enjoyment. Olivola and Shafir offer some interesting data that hint at what might cause this particular irrational behavior but stop short of suggesting a mechanism or cause. Hopefully their continued research will result in substantial answers to the questions raised by their recent work.

More economics next week, until then stay safe and rational.






Wednesday, July 17, 2013

Regulation and Innovation

To say the topic of government regulation is controversial would be an understatement.  Opinions on the matter vary from "Let the free market rule!" to "The only fair economy is a managed economy." Like most of the public my personal opinion falls somewhere between the extremes.

The majority of the rational populace would agree that some regulation is at least helpful if not necessary. Few people wish to purchase medical drugs that haven't been proven to be safe or effective.  Likewise most people agree that it's likely a good idea to ensure pilots can actually fly a plane before being allowed to ferry passengers from New York to LA. Yet, venture into the domains of healthcare insurance, banking or pollution control and you'll find debates aplenty as to what, if anything the government should be doing to guide these industries.

In order to highlight how even well-intentioned and beneficial regulation can go astray I'm going to discuss the fascinating world of the taxi cab driver. I want to make clear that I do not believe taxi regulation is a bad thing. Over the past decades it has done a great deal to make transport in major cities reasonably efficient and safe.  However, currently in many cities taxi regulations are being abused for the profit of a few rather than the benefit of many.

Let's briefly touch upon a few points that a consumer desires when looking for a cab.

1. The consumer wants a cab to be available OR have assurance that a cab is en route AND know approximately that cab's time of arrival.
2. The consumer wants to know the approximate cost of the trip before arrival at the destination.
3. The consumer wants a safe, efficient, pleasant ride.

The local government of course wants the consumer's desires satisfied but has one additional concern. Namely:

4. The government wants traffic congestion kept to a minimum.

This desire is inextricably linked with the consumer's third desire as sitting in traffic is neither efficient nor pleasant. 

So given these desirable traits how does taxi regulation improve the situation? Generally taxi regulation goes something like this:

Local Government: "Alright guys, we're going to require a special limited license to drive a taxi. We're also going to require you all charge the same rates per mile or minute of wait time. If you work with us you'll get your license and will have limited competition since all the licenses will already be issued to existing drivers. Don't work with us and we'll just give your license to someone else and you'll be out of business."

Taxi Companies: "Sounds great."

The end result is that the consumer's second desire is fully met: since prices are uniform, everyone knows the charges before the service is complete. The consumer's third desire is reasonably met; drivers generally aren't serial killers, are easily identified for complaint and are prohibited from running up the meter. The government's desire to minimize congestion is met, as taxis are artificially limited. Finally, the consumer's first desire for an available cab is initially met but generally deteriorates as demand increases with population growth while the taxi population remains constant. Notably the taxi population could be grown with new permit issuances, but such action always meets with strong opposition from the taxi companies, which of course have the closest relationship with taxi regulators.

Additionally, the value of taxi licenses generally begins a rapid climb. In several major cities a taxi license approaches a million dollars if not more despite a cost of only a few hundred dollars annually. Rents from licenses are in fact so lucrative that most are held as investment vehicles in New York City. 

So under the current system of regulation we end up with a reasonably safe and profitable taxi system without roads swarming with cabs to the point that other traffic is impossible. On the downside there's nearly zero incentive for a driver to do anything beyond the bare minimum of transporting the consumer from point A to point B as quickly as possible. As long as the owner holds the license the cab will keep operating regardless of how clean it is, how pleasant the driver is, or how loud the radio.

A brief summary of what we've learned so far:

Current Regulation Pros:

  1. Safe
  2. Predictable Cost
  3. Limited Traffic Congestion

Current Regulation Cons:

  1. Limited Incentives for Owners
  2. During Peak Times Supply and Demand Disparity
  3. Regulation Often Abused to Stop Competition

So how can we develop a better system? As is often the case, technology has developed an answer.  Most people in major cities at this point already carry a communication device that knows their location. Why not simply have a phone app that lists available cab companies (along with prices for your trip), contacts the company to provide your location, and then tracks the cab en route to pick you up so you know its arrival time?

When combined with the removal of current regulations, with the exception of driver licensure and identification rules, this improves on the current system in a variety of ways. Road congestion is further reduced during non-peak times (there's no reason for an empty cab to drive around looking for fares). Supply and demand are balanced as competitors can freely enter and exit the market. Competition incentivizes superlative performance at the firm level. Consumers will be more easily able to obtain a cab in areas where taxis are less common. Direct price competition will allow for a more accurate representation of costs (which may lead to higher or lower prices).
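For the curious, here's a minimal sketch of the quoting step such an app might perform. The company names, fares, and arrival times are invented for illustration; this is not any particular service's API.

```python
# A toy version of the app's quoting step: gather fare quotes from
# competing companies, filter by acceptable wait, pick the cheapest.
# All names and numbers below are invented.

from dataclasses import dataclass

@dataclass
class Quote:
    company: str
    fare: float         # quoted price for the requested trip, in dollars
    eta_minutes: float  # estimated time until pickup

def best_quote(quotes, max_wait):
    """Cheapest quote among companies that can arrive within max_wait minutes."""
    feasible = [q for q in quotes if q.eta_minutes <= max_wait]
    return min(feasible, key=lambda q: q.fare) if feasible else None

quotes = [Quote("Acme Cab", 14.50, 6),
          Quote("City Limo", 12.00, 18),
          Quote("QuickRide", 13.25, 9)]
print(best_quote(quotes, max_wait=10))  # QuickRide: cheapest of the two that arrive in time
```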

This innovation ends up leaving everyone except current taxi license owners better off. However, it's exactly this group that has been opposing development of such systems in major cities. A company called Uber has attempted to implement such a service in major metropolitan areas throughout the US. Unsurprisingly they've been met with opposition at each attempted launch.

Uber utilizes mostly limousine services to do their pickups. The primary difference between a limousine and a taxi from a regulatory perspective is a limousine is not allowed to simply drive around and pick you up on a street corner. You must make a reservation to be picked up by your driver.  Therefore the most common avenue of attack against a service like Uber has been to place a minimum time that must pass after a reservation is made before a limousine may transport you. For example, in Miami you must make a reservation at least one hour before your desired pick up.  As a result an on demand service like Uber loses a great deal of appeal compared to a standard cab which is immediately available.

Similar regulations are being passed or contested in major cities throughout the country. On one side of the battle stand new innovators like Uber who seek to improve the existing system (and of course make a profit doing so). On the other are the old guard taxi companies attempting to protect not only their current business, but in many cases the value of licenses which amount to millions of dollars. The consumer will likely be better off if Uber and its like prevail, but overcoming the inertia of the existing regulatory system is a daunting task.

That's all for this week. Until next time stay safe and rational.



Thursday, July 4, 2013

Breaking the Rules of Supply

Economics 101 has a simple explanation of how markets work. Consumers have a cumulative demand function that expresses how much of a product they will purchase at a given price level. Suppliers have a cumulative supply function that expresses how much of a product they will supply at a given price level. The intersection of these functions represents a market equilibrium that determines how much of a product will be produced and purchased. Easy, right?
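As a refresher, the whole Econ 101 story fits in a few lines. The linear curves below are invented purely for illustration.

```python
# The Econ 101 model with invented linear curves: the market clears at
# the price where quantity demanded equals quantity supplied.

def demand(price):
    return 100 - 2 * price  # quantity consumers purchase at this price

def supply(price):
    return 10 + 4 * price   # quantity producers offer at this price

# Setting 100 - 2p = 10 + 4p gives the equilibrium price:
p_star = (100 - 10) / (2 + 4)
print(p_star, demand(p_star), supply(p_star))  # 15.0 70.0 70.0
```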

Unfortunately, the real world is rarely ever so tidy.  Rent ceilings, taxes, subsidies, natural monopolies, and a hundred other distortions warp our simple representation of market interactions. Today, I'm going to discuss one distortion that is a result of nature more than man.

There is a sort of commodity known as rare earth elements (REEs). The specifics of this commodity are relatively unimportant to this discussion except for these facts: REEs are mined from the earth and are generally found together with distinct rates of abundance. For example, one sort of rare earth ore may have twice as much cerium as neodymium.

The difficulty with REE markets is that the supply of every element in an ore is dictated by the scarcity and demand of the most needed one.  For example, in light rare earth element (LREE) ores neodymium is the most valuable and rarest REE. Thus LREE ores are mined at a rate which satisfies the market for neodymium. However, this also produces cerium, praseodymium and samarium at rates that exceed equilibrium market demand, resulting in non-optimal prices.  In this unusual circumstance the prices of the less desired LREEs are very much influenced by the demand for neodymium, despite the fact that the elements are generally not substitutes for one another.

There are essentially three possible production points. We can produce REEs to the point that demand for all REEs is met. We can produce REEs at the point which maximizes the suppliers' profits. Or we can produce REEs to the point that no excess REEs are produced.

In the first case, production of REEs to the point that demand for all is met, we end up with a large excess of the more prevalent and/or less desirable rare earth elements. This leads to an increase in price for all REEs due to the cost of extraction being shared by all REEs in a given ore group as well as costs associated with stockpiling. Clearly this is non-optimal as consumers are paying more and suppliers are earning less profit.

In the second case, profit maximization, we end up with a shortage of some elements and an excess of others. Here supplier profit is maximized but consumers must deal with high prices for some REEs due to high price inelasticity and higher overall prices for the reasons discussed in the paragraph above. This supply point is a middle ground which features the negative aspects of the other two supply points, but to a lesser extent.

The final case, production resulting in no excess, is likely the worst of all.  It sets the price of the most abundant or least used element at a market equilibrium.  However, every other REE will have a far higher than necessary price due to price inelasticity and market shortages. This supply point benefits no one as suppliers do not profit maximize and consumers of all but the least useful element pay far higher prices than they must.
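A toy model makes the first and third cases concrete. All of the yields and demand figures below are invented; the point is how a single ore ties the supply of every element together.

```python
# Joint production from one ore, with invented yields and demand. Choosing
# how much ore to mine fixes the supply of every element at once.

CE_PER_TONNE = 2.0  # hypothetical cerium yield per tonne of ore
ND_PER_TONNE = 1.0  # hypothetical neodymium yield per tonne of ore
DEMAND = {"cerium": 120.0, "neodymium": 100.0}  # hypothetical annual demand

def outputs(tonnes):
    return {"cerium": tonnes * CE_PER_TONNE, "neodymium": tonnes * ND_PER_TONNE}

# Case one: mine until every element's demand is met -> cerium piles up.
meet_all = max(DEMAND["cerium"] / CE_PER_TONNE, DEMAND["neodymium"] / ND_PER_TONNE)
# Case three: mine until nothing is in excess -> neodymium runs short.
no_excess = min(DEMAND["cerium"] / CE_PER_TONNE, DEMAND["neodymium"] / ND_PER_TONNE)

print(outputs(meet_all))   # {'cerium': 200.0, 'neodymium': 100.0} -- 80 units of excess cerium
print(outputs(no_excess))  # {'cerium': 120.0, 'neodymium': 60.0} -- 40 units short of neodymium
```

The profit-maximizing case (the second) lands at some extraction rate between these two extremes.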

The easiest solution economically speaking is to produce at the point that demand for all REEs is met and then find uses for the excess REEs.  However, technologically that becomes a much more difficult problem. Unfortunately our grasp of alchemy has not yet risen to the point of transmutation of one less desirable resource into a more useful one. So how do we deal with the problem?

Interestingly, the answer here may very well be recycling.  Once REEs have passed through industry, their extraction costs have already been paid, so recycling of rarer or more useful elements increases their supply within the market without new mining. This in turn allows for lower rates of ore extraction and thus less excess production of less demanded REEs. This solution allows us to meet demand (good for consumers) while moving production towards the profit maximizing rate of supply (good for suppliers) by increasing the supply of more demanded REEs without increasing the supply of the excess REEs as well.
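Reusing the definitions from the sketch above, here's how a hypothetical 25 units of recycled neodymium per year shifts the picture.

```python
# Continuing the sketch above: recycled neodymium substitutes for mined
# neodymium, so demand is met with less ore and less excess cerium.
RECYCLED_ND = 25.0  # hypothetical units recovered from scrap per year

tonnes_needed = (DEMAND["neodymium"] - RECYCLED_ND) / ND_PER_TONNE
print(outputs(tonnes_needed))  # {'cerium': 150.0, 'neodymium': 75.0}
# Mined plus recycled neodymium still totals 100 units, while the
# excess cerium falls from 80 units to 30.
```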

Balancing the REE market is a difficult task. However, it does seem manageable through either innovative use of excess REEs or advances in REE recycling. One way or another the market's needs will be met.

As an interesting aside, apparently rare earth elements with an even atomic number are more common than those with an odd atomic number. Once again nature meddling in man's efforts to set rational prices.  Until next week, stay safe and rational.


Thursday, June 27, 2013

The Problem of Public Transit Pricing

Like many Americans I often begin my work day with public transportation.  I'd like to claim that I ride the bus because of a sense of environmental duty, but the fact is it's simply cheaper than driving these days.  Assuming I use the bus two hundred times this year (one hundred days round trip), my cost will be approximately thirty cents per ride. Given current gas prices it's clear the bus is the cheaper choice for me before even factoring in vehicle maintenance, convenience, and parking costs.  But what is the ideal price for the transit system to charge me?
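For the skeptical, here's the back-of-the-envelope version. The annual pass price is implied by the figures above (200 rides at roughly thirty cents each); the driving numbers are invented placeholders.

```python
# Back-of-the-envelope bus-versus-driving comparison. The annual pass
# price is implied by the text (200 rides * ~$0.30); driving figures
# are hypothetical and ignore parking, maintenance, and depreciation.

ANNUAL_PASS = 60.00   # implied by ~200 rides at ~$0.30 each
RIDES_PER_YEAR = 200

ONE_WAY_MILES = 4.0   # hypothetical commute leg
MPG = 25.0            # hypothetical fuel economy
GAS_PRICE = 3.50      # hypothetical dollars per gallon

bus_per_ride = ANNUAL_PASS / RIDES_PER_YEAR
drive_per_leg = (ONE_WAY_MILES / MPG) * GAS_PRICE

print(f"bus: ${bus_per_ride:.2f} per ride, driving: ${drive_per_leg:.2f} in gas alone")
# bus: $0.30 per ride, driving: $0.56 in gas alone
```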

Let's get this out of the way first. Public transit systems do not pay for themselves. Almost universally, fares do not meet or exceed the costs of operation (the exceptions being highly populated areas of East Asia). Generally speaking, fares recover less than half of operating expenses for public transit. Therefore, if you believe that cities should have some sort of public transit system the question becomes not "Should the public pay for public transit?" but "How much should the public pay for public transit?"  For the sake of brevity, the answer to the former question is simply assumed to be "yes" for the purposes of this posting.

How much to charge riders is actually a very difficult problem.  Before getting into specific prices let's look at what public transit does well and where it fails.

A full bus or train generates less pollution than an analogous set of cars for its riders. This difference is magnified when consideration is given to cities' movement towards hybrid buses and bus riders' tendency towards older, less efficient cars. After all, few people buy a brand new car so they can then ride the bus every day.

Public transit helps provide transportation options for the elderly, disabled, young and poor who might not otherwise be able to travel to necessary locations.  A less fortunate individual who cannot afford a car still requires the ability to obtain food and attend work.  Failing to provide that option simply discourages them from providing for themselves and places the burden of their livelihood on society.

Public transit helps to alleviate road congestion, reduces wear and tear on roads, and reduces expenditures on scarce fossil fuels.

Clearly public transport has a lot of societal benefits.  But what problems are there?

At times public transport can certainly be less comfortable than a personal vehicle. Anyone who frequents public transportation has experienced the occasion of being forced to sit or stand next to an individual they'd rather avoid.  Further, at peak times overcrowding can be an unpleasant inconvenience.

Now let's move to the issue of pricing. There are several abstract price breakpoints that merit discussion. Let's refer to them as break even, cost per rider, nearly free and free. I'll define each as they're discussed.

The highest price would be the break even price.  This is the price the transit district would charge in order to recoup all operating costs. For a variety of reasons this is not a very rational amount to charge while driving remains a reasonable option. The higher the amount charged, the larger the portion of people that will find a transportation substitute.  This burdens the remaining customers with a larger portion of the fixed costs of operation, causing still more to leave the system. The feedback loop continues until you're left with relatively few people paying a relatively high cost.  The same feedback is observed when examining riders who choose to use the service but not pay. The end result is that honest users must either pay a great deal or use other options, and most choose other options. It is certainly not an optimal price unless there are no practical transport substitutes available.
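The feedback loop is easy to watch in a toy model. All of the numbers below are invented; the point is the shape of the spiral, not the specific figures.

```python
# A toy break-even death spiral: fixed costs are split across riders,
# the higher fare drives some riders away, and the fare must rise again.

FIXED_COSTS = 1_000_000.0  # hypothetical annual operating cost

def ridership(fare):
    return 600_000 - 100_000 * fare  # hypothetical demand curve

riders = 500_000.0
for year in range(10):
    fare = FIXED_COSTS / riders  # fare required to break even this year
    print(f"year {year}: fare ${fare:.2f}, riders {riders:,.0f}")
    riders = ridership(fare)     # next year's riders at that fare
    if riders <= 0:
        print("the system has priced away its last rider")
        break
```

With this particular demand curve there is no fare that covers costs at all; each break-even attempt shrinks ridership until the system empties out.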

The second highest price is the cost per rider. This price calculates the cost to take a rider from point A to point B and charges them that price.  It's superior to the break even price in that a rider isn't burdened by the cost of compensating for those who choose not to pay. This level of pricing tends to recoup approximately 50% of operating costs.  The unrecovered costs are lost due to individuals who do not pay and subsidized fares (for the poor, students, disabled and elderly primarily).

The nearly free pricing model has a few interesting points. First of all, it prices a ride well within the means of the majority of society.  In this case the cost of a ride is primarily psychological.  In Champaign-Urbana the most you will pay for a bus ride is one dollar.  This is an amount most people can manage with relative ease. However, the mental obstacle of "payment" certainly remains, particularly for first time riders. While a relatively minor concern, the possibility of being stranded away from home without needed payment must discourage some potential riders.

At this price essentially anyone can use the bus if they have need or desire.  Regardless, a great many people avoid it due to the "cost" involved. This price point has a great deal of access while still reducing crowding somewhat. However, there are several notable downsides. This price still requires the purchase, operation and maintenance of a costly payment network. In other words, you still have to collect fares and sell tokens/passes/etc.  Fare collection has significant costs both in terms of capital expenditure to build the network and time expended by employees to do the sales and collection. Additionally, if the goal is to get the public to use public transit then the question of "Why impose a pay barrier?" seems relevant.

Lastly is the free price point. At this point all non-geographic barriers are removed.  If you can get to a transit stop you can board without payment.  As discussed above this reduces infrastructure and employee costs somewhat.  However, it also greatly increases utilization which may lead to a less comfortable environment for everyone. Particularly during peak hours under a free model congestion can become problematic.

I have also heard the argument of "If transit is free then it will be overrun with the homeless and criminal." It is a point which I believe has little merit.  After all, how else are the homeless supposed to achieve housing for themselves if they cannot find and attend a job? To complain about the homeless while at the same time eliminating their potential to lift themselves off the streets seems ludicrous.

I propose this solution.  During peak hours of use charge at a nearly free price point. The remainder of the time operate the transit system for free.  This will help alleviate congestion when it is worst and still provide easy access to transportation to those who need it most. An unemployed man trying to get to his job interview can choose to pay a small fee to arrive during peak hours, or arrive early for free.  Meanwhile the successful lawyer who just hates trying to find a parking spot downtown can ride in relative comfort during peak hours for a cost which is trivial to him.
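As a sketch, the proposed schedule is just a function of the hour. The peak windows and the one-dollar figure are illustrative stand-ins.

```python
# The proposed two-tier schedule: nearly free at peak, free otherwise.
# Peak windows and the one-dollar fare are illustrative assumptions.

PEAK_HOURS = set(range(7, 9)) | set(range(16, 18))  # 7-9 am and 4-6 pm

def fare(hour):
    return 1.00 if hour in PEAK_HOURS else 0.00

print(fare(8))   # 1.0 -- rush hour costs a token amount
print(fare(13))  # 0.0 -- midday rides are free
```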

There is of course an increased tax burden on society. However, there are a great many rewards as well. Fewer traffic jams, lower road maintenance costs, decreased wealth transfer payments (welfare), better air quality, increased road safety, increased tax revenue and more efficient use of energy resources are only a few examples.  The only question is whether individuals can stomach giving strangers a free ride (literally).

More economics next week. Until then stay safe and rational.


Thursday, June 20, 2013

TTIP of the Iceberg

For the last couple decades European and American governments have been flirting with the idea of a trade agreement known as the Transatlantic Trade and Investment Partnership (TTIP). Several times over this period some ambitious politician has attempted to initiate negotiations and bring the decades of courtship to an end. Most recently this politician has been Barack Obama, and it seems he has the interest of the European leadership as well.

If you haven't heard of TTIP before you're probably curious why it's worth talking about. After all, trade between the Americas and Europe is already relatively easy. There are two factors, one positive and one negative, which merit giving TTIP your attention.

Firstly, there are some significant potential economic gains to be had. One of the primary goals of TTIP is to harmonize regulations between North America and Europe. Currently there is a great deal of regulatory effort duplicated on both sides of the Atlantic.  If I wished to produce fine Serrano hams for consumption in the United States and the European Union I would have to comply with the regulations of both governments. Given that either set of regulations likely produces a safe ham, why impose extraneous regulations upon the producer and require two sets of enforcement agencies?

Having to investigate and comply with multiple sets of regulations also increases the difficulty of market entry for smaller firms. If I'm a small ham producer in the United States I'm likely already complying with US guidelines. However, if I wish to expand my market into the EU I must revise my production methods to comply with their regulations as well. For many small businesses the costs of compliance will be prohibitive.  TTIP would possibly allow these smaller firms to be able to sell their products more freely on either side of the Atlantic.

There are dozens, if not hundreds of products with similar stories. Food, drugs, transportation, and even entertainment all have differing regulations which require businesses to expend effort and funds in order to reach dual compliance. Reportedly, one of TTIP's main goals is to reduce the burdens of such redundancy on producers.

I do not use the word "Reportedly" in the previous paragraph without forethought. The largest concern with TTIP at this point is a total lack of transparency.  There are, at least at this point, no known plans to make TTIP negotiations public. Likely, this very important trade agreement will be discussed behind closed doors and without much input from populations on either side of the Atlantic.  Those privileged few who will be asked for their opinions will likely be industry insiders and lobbyists, individuals with a vested interest in putting their industry's wishes ahead of the public good.  For this reason TTIP may very well turn into a feeding frenzy for special interest groups.

Imagine for a moment that as a lobbyist for a sea salt manufacturer you managed to have a provision added to the agreement that cured hams must be preserved with natural sea salt.  Even if such a provision does not directly pay into your firm's coffers, it's clear that in this case a rising tide would lift all sea salt producers' boats. Not only that, such a provision would be of far more benefit to coastal nations than to landlocked countries.  The potential for similar self-serving regulation abounds in such an expansive trade agreement that is not subject to public scrutiny.

This is not the first attempt at a TTIP-like trade agreement. Nor is the adoption of TTIP by any means a foregone conclusion at this point.  However, if the negotiations are conducted fairly and with a degree of transparency there is the potential for large economic gains.  Hopefully politicians on both sides of the Atlantic exercise a degree of wisdom and prudence as they move forward with discussions.

Until next week, stay safe and rational.