Experimental research (3) – Overconfidence (results)

This post summarizes the results obtained from the two overconfidence instruments described in the previous post: a set of trivia tests devised to estimate individual measures of overestimation (E) and overplacement (P), and a series of questions on interval estimates to obtain an indicator of overprecision (M).

Trivia tests on overestimation and overplacement

Participants completed the four trivia quizzes in about 15 minutes, instructions included, and there were no incidents in any of the five sessions. The average respondent overestimated her performance in the trivia by 2.9 correct answers (out of 40 questions in total), and the bias was persistent in both easy and hard tests. In contrast, the average respondent placed herself 2.7 correct answers below average (P = -2.7), with the bias mostly attributable to underplacement in hard tasks. Table 1 summarizes the average data in the experiment.

TABLE 1 – Overestimation and overplacement

We also observe a strong correlation between variables E and P. That is, participants who exhibited the highest overestimation tended to place themselves above average (or, at least, exhibited lower underplacement), and vice versa.

Finally, the trivia tests were devised to control for the hard-easy effect. Results were as expected for overplacement, which rises from -2.4 in hard tests to about zero in easy ones, but only partly so for overestimation, which does not increase with difficulty (suggesting a general bias towards overestimation in our sample). Figure 1 shows the effect more clearly.

FIGURE 1 – The hard-easy effect

 

However, the effect would have been even more evident had the two tests intended to be easy actually been as easy as we expected. As Table 1 above shows, trivia tests T2 and T3 had average (median) scores of 2.29 (2.0) and 2.75 (3.0) correct answers out of 10 questions (where pure guessing implies an expected score of 2.0). In trivia tests T1 and T4, expected to yield 7.0 to 8.0 correct answers on average, respondents hit the right answer only 5.4 (5.0) and 5.58 (6.0) times out of 10 on average (median). This makes them tests of medium (rather than easy) difficulty for our respondents.

Test on confidence intervals

Participants completed the six questions on confidence intervals in about 6 to 8 minutes, instructions included. Results show a strong tendency towards overprecision, but we have concerns about the reliability of the estimates obtained at the individual level.

First, judges were significantly overconfident. Aggregate results show a strong tendency towards overprecision: the 80% confidence intervals contained the correct answer only 38.3% of the time. As expected, the lowest degree of overprecision corresponds to the domain where participants could draw on personal experience (time to walk), but even there they were overconfident: 80% intervals contained the right answer 62.0% of the time. Using M ratios, overprecision appears even more prevalent: more than 90% of respondents exhibit this bias. Table 2 summarizes the results.

TABLE 2 – Overprecision
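As a minimal illustration of how the aggregate hit rate is computed (the data below are made up for illustration, not taken from our experiment): an 80% interval 'hits' whenever the true value lies between the reported 10% and 90% fractiles, and the hit rate is simply the share of hits across all answers.

```python
# Minimal sketch of the hit-rate computation for 80% intervals.
# Data are illustrative only, not the experimental responses.

answers = [
    # (low fractile, high fractile, true value)
    (1900, 1930, 1927),
    (5.0, 9.0, 12.4),
    (10, 14, 15),
    (1955, 1990, 1973),
]

hits = sum(low <= truth <= high for low, high, truth in answers)
print(f"hit rate: {hits / len(answers):.1%}")  # a calibrated judge hits ~80%
```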

Although the aggregate results are consistent with the empirical literature, we are concerned about the reliability of the estimates obtained at the individual level. First, there is evidence that many participants did not fully understand the instructions. We had several incidents: a respondent who did not complete all three answers for every question, and responses where the minimum and maximum boundaries were swapped, where answers were given in a different order than required, or where the median estimate was identical to one of the boundaries.

Second, individual estimates of the ratio M are highly variable, depending both on the refinement method used to estimate M and on whether the indicator is computed as the median or the average of the ratios across domains. In particular, following Soll and Klayman (2004), we compared three alternative refinement methods (the default M; an estimator M2, where MAD assumes a symmetric distribution; and a third that assumes a normal distribution), and for each of them we computed mean and median ratios. The results show the indicators are sensitive to the estimation method.
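To see how much the aggregation choice alone can matter, consider a hypothetical participant whose three per-domain ratios are 0.3, 0.5 and 2.0 (made-up numbers, not actual data): the average is close to 1 (near calibration), while the median signals clear overprecision.

```python
# Sensitivity of the individual indicator to mean vs. median aggregation.
# Hypothetical per-domain M ratios for one participant:
m_by_domain = [0.3, 0.5, 2.0]

m_avg = sum(m_by_domain) / len(m_by_domain)         # ~0.93: close to calibration
m_med = sorted(m_by_domain)[len(m_by_domain) // 2]  # 0.5: clear overprecision
print(m_avg, m_med)
```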

Why did this happen? Basically, the test was too short. With only two questions per domain, a single answer that happens to fall close to the true value strongly affects the eventual estimate of M. In future research, more questions per domain will be essential, subject to the constraint of devising a test that is not excessively time-consuming for a single indicator.

Experimental research (2) – Overconfidence (theory)

The prevalence of overconfidence, i.e., the human tendency to overestimate our own skills and prospects for success, is a classic topic in the behavioral literature. Experimental evidence has confirmed the role of overconfidence in areas as diverse as financial markets, health, driving, insurance markets, job search and consumer behavior. Many anomalies in financial markets have also been suggested to be a consequence of investor overconfidence, such as excessive volatility, return predictability, excessive trading and under-diversification. Finally, the research on managerial overconfidence is also vast. Executives appear particularly prone to overconfidence, and its documented effects range from mergers and acquisitions to high rates of business failure, among others.

However, when we delve into the concept of overconfidence, things are more complex. Moore and Healy (2008) argue this bias has been studied in inconsistent ways. They identify three different measures of overconfidence that had previously been confounded in the literature. In particular, people may exhibit overconfidence (1) in estimating their own performance ('overestimation' E); (2) in estimating their own performance relative to others ('overplacement' P); and (3) in being excessively precise when estimating future uncertainty ('overprecision' M). Moore and Healy's model predicts that overprecision is systematic, that overestimation increases with task difficulty, and that overplacement decreases with it. The latter two findings explain the previously observed hard-easy effect: on easy tasks, people underestimate their performance but overplace themselves relative to others, whereas hard tasks produce overestimation and underplacement.

To avoid confusing overestimation and overprecision, we study overestimation by measuring perceptions across a set of items, whereas overprecision is analyzed through a series of questions on interval estimates. To elicit parameters E and P, participants were required to complete a set of 4 trivia quizzes with 10 questions each. To account for the hard-easy effect, two quizzes were designed to be easy and two to be hard. In each quiz, participants mark the answer they believe correct for every item. When they finish a quiz, they are asked to estimate their own score, as well as the score of a randomly selected previous participant (RSPP).[1] The same process is repeated for all 4 rounds.

Overestimation (E) is calculated by subtracting a participant's actual score in each of the 4 quizzes from his or her reported expected score and summing the 4 results. A measure E > 0 means the respondent exhibits overestimation, while E < 0 means underestimation. The hard-easy effect can be tested by computing the same measure separately for the hard and easy tasks.
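A minimal sketch of the computation (the scores below are illustrative placeholders, not experimental data):

```python
# Overestimation E: sum over the 4 quizzes of (reported expected score - actual score).
# Scores below are illustrative only.

expected_scores = [7, 4, 5, 8]   # participant's reported expected scores
actual_scores   = [6, 2, 3, 7]   # participant's actual scores (out of 10)

E = sum(e - a for e, a in zip(expected_scores, actual_scores))
print(E)  # E > 0 -> overestimation; E < 0 -> underestimation

# Hard-easy effect: compute the same sum separately over the two hard quizzes
# and the two easy ones.
```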

Overplacement (P) is calculated taking into account whether a participant is actually better than others. For each quiz we compute (E[Xi] - E[Xj]) - (xi - xj), where E[Xi] is the participant's belief about her expected performance in that quiz, E[Xj] is her belief about the expected performance of the RSPP, and xi and xj are the actual scores of the participant and the RSPP; we then sum the 4 results. A measure P > 0 means the judge exhibits overplacement and P < 0 means underplacement. Again, the hard-easy effect can be tested by computing the same measure separately for the hard and easy tasks.
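The corresponding sketch for P, again with placeholder numbers:

```python
# Overplacement P: for each quiz, (E[Xi] - E[Xj]) - (xi - xj), summed over the
# 4 quizzes. E[Xi]: own expected score; E[Xj]: score expected for the RSPP;
# xi, xj: actual scores of the participant and the RSPP. Illustrative data only.

own_expected  = [7, 4, 5, 8]   # E[Xi] per quiz
rspp_expected = [6, 5, 5, 7]   # E[Xj] per quiz
own_actual    = [6, 2, 3, 7]   # xi per quiz
rspp_actual   = [7, 3, 3, 8]   # xj per quiz

P = sum((ei - ej) - (xi - xj)
        for ei, ej, xi, xj in zip(own_expected, rspp_expected,
                                  own_actual, rspp_actual))
print(P)  # P > 0 -> overplacement; P < 0 -> underplacement
```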

To elicit parameter M, participants were presented with a series of 6 questions across 3 domains (device inventions, mortality rates and time walking between two places). For each question they were asked to give a 3-point estimate (the median, the 10% fractile and the 90% fractile), so that we obtain the low and high boundaries of an 80% confidence interval. Soll and Klayman (2004) show that overconfidence in interval estimates may result from variability in interval widths. Hence, in order to disentangle variability from true overprecision, they define the ratio M = MEAD/MAD, where MEAD is the mean of the expected absolute deviations implied by each pair of fractiles a subject gives, and MAD is the observed mean absolute deviation. Overprecision (M) is calculated by obtaining an Mi estimate per domain and then computing M either as the median (Mmed) or the average (Mavg) across the 3 domains. Here M = 1 implies perfect calibration and M < 1 overprecision; the lower M, the higher the overprecision.
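The sketch below illustrates the M ratio for a single domain. Note that the mapping from a pair of fractiles to the implied expected absolute deviation depends on the distribution assumed for the judge's subjective beliefs; the constant K is a crude placeholder for that mapping, not the exact refinement of Soll and Klayman (2004), and all figures are invented.

```python
# Sketch of the overprecision ratio M = MEAD / MAD for one domain.
# K is a placeholder assumption: the implied expected absolute deviation (EAD)
# is taken as proportional to the 10%-90% interval width. This is NOT the exact
# refinement of Soll and Klayman (2004). Data are illustrative only.

K = 0.3

def m_ratio(responses):
    """responses: list of (median_estimate, low_fractile, high_fractile, truth)."""
    mead = sum(K * (high - low) for _, low, high, _ in responses) / len(responses)
    mad = sum(abs(truth - med) for med, _, _, truth in responses) / len(responses)
    return mead / mad

domain = [(1950, 1940, 1965, 1972),   # e.g., year a device was invented
          (8.0, 6.0, 10.0, 12.5)]     # e.g., minutes to walk between two places
print(m_ratio(domain))  # M = 1: well calibrated; M < 1: overprecision

# The individual indicator is then the median (Mmed) or the average (Mavg) of
# the per-domain ratios across the 3 domains.
```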

In a subsequent post we will provide the results of the experiment.

[1] In our experiment participants were required to estimate “the average score of other students here today and in similar sessions with students of this University”.

References

Moore, D. A. and P. J. Healy (2008), The trouble with overconfidence, Psychological Review 115(2), 502–517.

Soll, J. B. and J. Klayman (2004), Overconfidence in interval estimates, Journal of Experimental Psychology: Learning, Memory and Cognition 30(2), 299–314.

Experimental research (1) – Behavioral microfoundations of RCM

Behavioral economics and its related field, behavioral finance (BF), have been able to explain a wide array of anomalies: observed phenomena that mainstream economics cannot explain. This field, which draws on insights from other social sciences such as psychology and sociology, has done so much to improve our understanding of empirical data and of the real behavior of market participants (particularly in financial markets) that BF itself is becoming mainstream today. Evidence of this includes the recent Nobel prizes awarded to Kahneman (prospect theory) and Smith (experimental economics) in 2002, and to Robert Shiller (asset pricing and behavioral finance) in 2013.

The main objective of my PhD research is to analyze the behavioral microfoundations of retail credit markets. Most behavioral researchers come from the US, where market-based financial systems drive the economy. In Europe (and the Eurozone in particular), however, most financial operations run through the banking system. The financial crisis of 2008-09 was, at least in Europe, mostly the result of a credit boom fostered by the banking industry. The classic explanations for this misbehavior (with which I agree) are moral hazard, the effect of incentives and deficient regulation. In my PhD research I want to go further and analyze the effects that the behavioral biases of participants in the industry (CEOs, employees, authorities…) could have on the credit policies implemented; in particular, whether behavioral biases alone could explain a credit boom, even though the alternative explanations remain relevant.

For that purpose, I have already developed a model of banking competition (a brief description here) in which the mere presence of a biased bank (biased in terms of excessive optimism and/or overconfidence) is a sufficient condition for a bubble to be generated. The dynamic extension of this model predicts that pessimism is not as pervasive as excessive optimism (i.e., financial crises are seeded during good times, when overconfident agents make excessively risky decisions, rather than during recessions) and that low-quality niche markets are more exposed to the potentially pervasive effects of bounded rationality. Both results of the model are consistent with empirical observation.

The second section of the thesis is devoted to experimental research. In particular, the purpose is to analyze the behavioral profile of the participants in the experiment along two dimensions: prospect theory (theory and tests by Tversky and Kahneman, 1992; tests by Abdellaoui et al., 2008) and overconfidence (theory and tests by Moore and Healy, 2008; tests by Soll and Klayman, 2004). Then, once participants have been characterized in terms of their risk profile (prospect theory) and the different measures of overconfidence, we test the effects that different PT and OC profiles may have had on the policies implemented by those same participants in a strategy game.

The strategy game they played is an experimental simulation of a retail credit market. The basics are as follows. Each participant runs a bank that grants credit to customers. They play an individual, multiperiod game (same information, no interaction) with the purpose of setting a strategy, defined as the price and volume of credit granted to several customers, given the information available. Information about the economic outlook and the quality of the new customer is updated every period.

The goal for participants in the experiment was to implement the best strategy in terms of profit maximization. The winner of each experimental session (5 in total) was awarded a prize of 60 euros. The goal of the experimenter, instead, was to analyze their decisions in a context of uncertainty and risk, and to determine whether their profiles (in terms of PT and OC) can explain some of the decisions they made, as measured by three types of indicators (seven indicators in total, covering different aspects of the price, volume and quality of credit).

In subsequent and separate posts we will describe some of the basic results we obtained in terms of priors, risk profile and overconfidence of respondents, as well as in terms of the credit policies they implemented in the game.

Playing with fire: Internal devaluation

The journal ISRN Economics by Hindawi, the largest international Open Access publisher and the first subscription publisher to convert its entire portfolio of journals to Open Access (source), has published this week a paper by Fernando Rey and me on the current euro crisis: ‘Playing with fire: Internal devaluation for the GIPSI countries’.

 

You may download the article in .pdf HERE. In what follows we provide only a brief summary of its contents.

 

Correcting fiscal imbalances and reducing public debt have been priorities since the outbreak of the euro crisis, and European authorities are fostering internal devaluation in the peripheral countries. The idea is simple: when a debt crisis hit a country with a currency of its own (as in several emerging-market episodes in the past), the classic solution was for the IMF to finance the country and impose fiscal consolidation and devaluation, combined with an expansionary monetary policy. For Eurozone members this is not possible, since monetary policy has been transferred and the option of devaluing a national currency has been switched off. Hence, authorities suggest that a ‘second best’ option would be internal devaluation through wage cuts. However, this option introduces an additional source of risk: fiscal consolidation, combined with a credit crisis and no access to devaluation, generates deflation. Besides, this may be a dangerous strategy in a context where the private sector is heavily indebted and currently undergoing a necessary process of deleveraging.

 

An alternative policy would be to accept a higher inflation target. Well-known defenders of this option are Rogoff (2011), Krugman (2012) and Stiglitz (2012), though critics of the idea are also numerous (e.g., Rajan, 2011). In ‘Playing with fire: Internal devaluation for the GIPSI countries’ we suggest an alternative that may satisfy both points of view: a coordinated policy within the Eurozone, in which creditor countries accept an inflation differential above GIPSI countries’ inflation, would avoid the perils of deflation and set a more suitable scenario for the fiscal consolidation of the GIPSIs to succeed.

 

 


More on Prospect Theory

These days I’m working hard on my thesis (as always!). Plans are that the experimental design should be ready within some weeks… Hopefully, by october I will be conducting the experiments, we’ll see!!

 

I have tried a couple of times before on this blog to explain what my thesis is about: in particular, a summary of a theoretical model of the effects of overconfidence on banking competition (‘a behavioral model of the credit boom’) and an introduction to Prospect Theory (‘Prospect Theory’), the descriptive model by D. Kahneman and A. Tversky of how people behave when facing risk.

 


 

Overconfidence and prospect theory are two cornerstones of Behavioral Economics, a growing field of research with a sound empirical basis supporting the idea that market participants are boundedly rational and make systematic errors and, consequently, that markets (particularly financial markets) are often not efficient. Daniel Kahneman was awarded the Nobel prize in 2002 for these ideas (as was Vernon Smith, for his work on laboratory experiments in economics). However, when I look around at my university, talk with colleagues and, above all, listen to what economic institutions in Spain (namely the BdE, the CNMV and so on) have to say about these topics, I still feel that behavioralists like me are seen as queer as a clockwork orange…

 

No worries. I only need to look abroad these days to see what academics think about Behavioral Economics. The blogosphere is abuzz with an article by Noah Smith on ‘The death of theory’: Smith notes that theoretical papers in economics, as a share of the total, have been declining since the 1980s, and that the focus is now on empirical tests and experiments. Why did this happen? According to Smith…

 

…my guess is that the meteor that hit the economics dinosaurs, and set the field’s evolution off in a new direction, was named Daniel Kahneman. Starting in 1973, Kahneman released a torrent of papers showing that human behavior didn’t look anything like the way that homo economicus was supposed to act. Other behavioral meteors followed. Richard Thaler, Vernon Smith, Colin Camerer… it was a giant comet of Behavioral Economics that broke up somewhere in orbit and fell to Earth in many parts.

 

Paul Krugman then followed up on this debate (‘What killed theory?’), agreeing that theory is becoming less important in macroeconomics. However, though he agrees with Behavioral Economics, he doesn’t think “Kahneman shook things up all that much; anyone sensible had long known that the axioms of rational choice didn’t hold in the real world, and those who didn’t care weren’t going to be persuaded by one man’s work.” That’s confirmation bias!!

 

A couple more articles on relevant websites that I have seen these days are The Economist’s ‘Future prospects’, which summarizes some of the contributions of Prospect Theory to economics according to a new paper by Nicholas Barberis, another leading authority in the field; and Bloomberg’s ‘Why Homo Economicus might actually be an idiot’, which reports recent experimental evidence that the greed and self-interest said to rule capitalism often underperform compared to a ‘homo socialis’ who cares about his counterparts.

 

But if there is one article that shows me how Behavioral Economics is also becoming mainstream in politics, it is THIS ONE: the Obama Administration is financing several projects in a dozen federal departments based on Thaler’s ideas on libertarian paternalism in order to design efficient public policies: “Insights from the social and behavioral sciences can be used to help design public policies that work better, cost less, and help people to achieve their goals.”

 

I bet these ideas will be essential for social democratic parties in the US and Europe in the years to come.

EMH in bankruptcy

Business Insider reports an interview with George Soros in which he claims ‘the Efficient Market Hypothesis (EMH) has run into bankruptcy‘.

 

 

In an interview with the Institute for New Economic Thinking (available here on YouTube), George Soros argues:


“prevailing paradigm of efficient market hypothesis—rational choice theory has actually run into bankruptcy, very similar to the bankruptcy of the global financial system after Lehman brothers. [We need a] fundamental rethinking of the assumptions and axioms on which economic theory has been based. because economics has been trying to come up with universally valid laws similar to Newtonian physics and that i think is an impossibility you need a new approach with different methods and also different criteria of what’s acceptable.”

 

This is something an increasing number of economists agree with, obviously including behavioralists like me. This view is becoming mainstream in universities in the US and Europe… but no worries if you are a Spanish supporter of Fama: every institution in Spain (the BdE and, above all, the CNMV), along with the private universities and business schools, will support market rationality for many years to come. It is easy: they don’t care about honestly pursuing the Truth, with capital letters. They only care about maintaining their dominant position. And the EMH is a fundamental cornerstone for that purpose.

 

Related: ‘El sofisma de los mercados eficientes‘.

Country debt vs. Social expenditures

Inspired by a question retweeted by Mark Thoma, I use ECB data to compare government debt with social expenditures from 1995 to 2012 for 26 countries of the EU-27 (Malta is excluded due to lack of data).

 

First, plotting average debt against average social expenditures yields

[Scatter plot: average government debt vs. average social expenditures, 1995-2012]

which would suggest a positive but not statistically significant (R2 = 0.3) relationship between debt and social expenditures.

 

Second, if we now compare average social expenditures with the growth in national debt since 2007, we have

[Scatter plot: average social expenditures vs. growth in government debt since 2007]

which would now suggest a negative relationship (the countries with the lowest social expenditures would have seen their national debt grow faster), but again the relationship is not statistically significant.

 

Finally, I remove the outliers from both analyses (the high-debt countries Belgium, Italy and Greece in the first analysis; Ireland and Latvia, which show the highest increases in debt, in the second) to get

[Scatter plot: first analysis, outliers removed]

and

[Scatter plot: second analysis, outliers removed]

which in both cases suggests even lower statistical significance.
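For readers who want to reproduce this kind of exercise, here is a minimal sketch of the regression and outlier-removal steps. The figures in the code are invented placeholders, not the ECB data behind the charts above.

```python
# Sketch of the debt vs. social-expenditure comparison (illustrative numbers only;
# the actual analysis uses ECB data for 26 EU countries over 1995-2012).
import numpy as np

avg_social_exp = np.array([16.0, 19.5, 22.0, 25.0, 27.5, 30.0])   # % of GDP
avg_debt       = np.array([40.0, 55.0, 105.0, 65.0, 75.0, 85.0])  # % of GDP

def fit(x, y):
    """Simple OLS of y on x: returns the slope and R^2."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

print(fit(avg_social_exp, avg_debt))              # full sample

keep = avg_debt < 100.0                           # drop high-debt outliers
print(fit(avg_social_exp[keep], avg_debt[keep]))  # re-estimate without them
```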

Cold War Economics

 

Academics and economic authorities are all shook up by recent news from Japan: Shinzo Abe’s new administration is forcing the Bank of Japan (bye-bye, central bank independence!) to implement ‘the mother of all quantitative easing programs’: a 2% inflation target within two years to escape from two decades of deflation, by:

 

  • 1.- doubling the monetary base: money market operations so that the monetary base will increase at an annual pace of about 60-70 trillion yen (“Under this guideline, the monetary base — whose amount outstanding was 138 trillion yen at end-2012 — is expected to reach 200 trillion yen at end-2013 and 270 trillion yen at end-2014”);
  • 2.- increasing Japanese Government Bond (JGB) purchases so that the Bank of Japan’s amount outstanding rises at an annual pace of about 50 trillion yen, in order to push interest rates down further across the yield curve…
  • 3.- …and along the yield curve, since the BoJ may buy JGBs with maturities of up to 40 years, with the average maturity of the Bank’s JGB purchases being extended from slightly less than three years at present to about seven years.

 

 

One way to see how this new strategy has shaken the foundations of economics and political institutions is to read the (jolly) reactions by Krugman, Stiglitz, Smith and many others. Another way is to understand how this strategy definitively shifts the main instrument used by central banks to control the economy from interest rates to the monetary base. This is basically what defenders of the liquidity-trap discourse in particular (and Keynesians generally speaking) have been demanding that authorities do, as we saw last week: ‘the price is wrong’ because we cannot have negative interest rates, so forget fears of inflation when our problem is economic depression and deflation, expand the monetary base dramatically, and help reduce interest rates all along the yield curve through market intervention.

 

With the European Central Bank and the neocons in the EU strongly opposed to a push for reflation and an aggressive program of bond purchases, particularly in assistance of the peripheral countries (here an excellent review by Simon Wren-Lewis on ‘what does the ECB think it is doing?’), the years to come may witness a new Cold War Economics on two fronts:

 

  • First, it may trigger a new currency war between the 3 currency areas (euro, dollar and yen, to be added to the old currency war with China). Will European authorities still claim that a ‘strong euro’ is the desirable outcome? If so, we will make our friends in the US, China and Japan very happy, since we will be the currency area that ‘faces the music’.
  • Second, ideologically, Abenomics means on the one hand “the end of central bank independence” and on the other the largest experiment in market interventionism-Keynesianism-whatever-you-may-call-it in decades.

 

 

For now it seems to be working: Japan is getting access to lower costs of financing.

 

Source: www.bloomberg.com

 

In any case, it is definitely shocking to see the liberal Spanish Prime Minister, Mariano Rajoy, claiming to be on the interventionist side. I bet this is an unpleasant thing to ask of Ms. Thatcher’s memory…

 

 

Probably, for years to come, macroeconomists will be talking about this ‘war’. We’ll see who ‘wins’.

A ‘delicate balance’… rhetoric

You should read this. Sad, really sad.

 

Fiscal policy in Europe: Searching for the right balance

 

Well, actually, it isn’t sad. We would need to take these guys from the European Commission seriously in order to call it sad but, you know what? They don’t fool us any more.

 

Krugman has something to tell them, too: “it’s quite a spectacle to see officials patting themselves on the back over an economic strategy that, let’s not forget, has tipped Europe back into recession, and keeps pushing overall euro area unemployment to new highs.” I can only read one sentence that could be interpreted as ‘something is changing in the minds of these guys’: “For vulnerable countries of the Eurozone that face a large external sustainability gap, external growth is the only sustainable way to grow out of their debts. The improved current balances in the periphery thus have to be matched by rebalancing trends also in Eurozone countries that feature large current account surpluses. Policies and reforms supporting demand in these countries have a role to play. Since fiscal consolidation-cum-deflation is likely to be self-defeating, re-establishing competitiveness across the area implies higher than average inflation in stronger countries, provided price stability in the Eurozone and inflation expectations remain well anchored.”

 

This is exactly what many economists (for instance, me: Peon & Rey, 2013) recommend. And this would be good news (the only, but good, news) if it weren’t for what they say just below: “In Germany, the fiscal stance is now broadly neutral, hence consistent with the call for a differentiated fiscal stance according to the budgetary space.” Oh dear, is this what they call boosting demand?? Now I know what their ‘delicate balance’ self-justifying piece means: just rhetoric.