Archive for the ‘Incentives’ Category
Via the indispensable Tyler Cowen, a new paper from Johnson and Fowler explores whether overconfidence is, in fact, adaptive. They show that it is, under some very reasonable assumptions. They model competition for resources as a two-player game and then analyze the evolutionary dynamics of populations playing this game.
The basic result is that overconfidence is beneficial in proportion to two factors: (1) the size of the payoff relative to the cost to play and (2) uncertainty about competitor capabilities. There are two optimal strategies for a population, overconfidence (which minimizes unclaimed resources) and underconfidence (which minimizes conflict costs). Unbiased self-perception is always dominated by these strategies. However, an overconfident person can successfully invade an underconfident population while the reverse is not true. So overconfidence is the stable solution.
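To make the logic concrete, here is a toy version of the contest in code. This is my own stylized reconstruction, not the paper's actual model: the capability range, the noise values, and the payoff rules (stronger claimant wins the prize, both claimants pay the conflict cost, ties split) are all illustrative assumptions.

```python
import itertools

def expected_payoff(focal_bias, pop_bias, r, c, caps=range(6), noise=(-1, 0, 1)):
    """Expected payoff of a focal agent whose self-assessment carries
    `focal_bias`, contesting resources of value r (conflict cost c) against
    a population whose members carry `pop_bias`. An agent claims the
    resource when its (biased) view of its own capability beats its
    (noisy) estimate of the opponent's."""
    cases = list(itertools.product(caps, caps, noise, noise))
    total = 0.0
    for a, b, na, nb in cases:            # true capabilities and noise draws
        a_claims = a + focal_bias > b + na
        b_claims = b + pop_bias > a + nb
        if a_claims and b_claims:         # conflict: stronger wins, both pay c
            total += (r if a > b else r / 2 if a == b else 0) - c
        elif a_claims:                    # unopposed claim takes the resource
            total += r
    return total / len(cases)
```

In this toy version, when the prize is large relative to the conflict cost (say r=6, c=1), a moderately overconfident agent out-earns an unbiased one in an unbiased population; when the cost dominates (r=1, c=3), the advantage reverses. That mirrors the paper's first factor, the size of the payoff relative to the cost to play.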
The direct implication is that resources get destroyed. It is optimal for an individual to be overconfident, but then he ends up fighting with other overconfident individuals, which imposes costs. If you think about it for a minute, this is a pretty important fundamental problem. All of the big societal decisions we face have potentially big payoffs (or avoidance of costs), but it’s really unclear who has the best expertise to make a recommendation. So we get a bunch of “experts” telling us they are absolutely right.
Note that if it is public knowledge how “good” someone is, the “overconfidence premium” goes to zero. This is why forcing experts to make public predictions is so important. Then you can figure out how good they really are.
I saw this brief New York Times article syndicated in the San Jose Mercury News. Evidently, one of the challenges in identifying new cancer treatments is recruiting enough patients for drug trials. The issue is that oncologists have little incentive to encourage their patients to enroll in drug trials.
Apparently, 60% to 80% of an oncologist’s revenues come from providing chemotherapy. When a patient enrolls in a trial, his doctor loses that revenue. As Scott Schaefer recently posted, the evidence is pretty clear that doctors respond to financial incentives. Result: a dearth of volunteers. So here’s an idea. Let’s pay a significant finder’s fee to oncologists who refer patients to trials. You could even start a “charity” to do this.
Via my buddy Matt Watson, here is a really well done infographic explaining the credit crisis. Merely entertaining for regular readers who’ve been following the crisis. But quite informative for any of your friends who haven’t felt the need to wade through all the commentary.
How Our Moral Compasses Fail Us
From the comments on my Introduction to this series, it appears I have discovered a controversial topic. Good. My first objective will be to illustrate why we cannot rely on moral compasses to guide society. After some thought, I have decided to break the topic of moral compasses into two posts: how they fail and why they fail.
I was recently having a conversation with a mutual friend of Rafe’s and mine. Like the two of us, he’s quite smart, well educated, and socially aware. I respect his thinking a lot. However, during the course of this conversation, it became clear to me that he holds what I think of as an overly moralistic view of human behavior.
From my perspective, it seemed like he thinks that people’s behavior is governed primarily by an internal moral compass rather than incentives. So if you want to change their behavior, you should redirect their moral compass rather than adjust their incentives. People who don’t adjust their behavior are defecting from society and should be sanctioned.
I encounter this view quite often in my social circle and this instance inspired me to write a series of posts to explain how I think things actually work. You’re free to disagree with me, of course. In fact, I expect most people to disagree with me. But I’ve thought rather hard about this issue and I’ll put my model up against the moralistic view when it comes to predicting a population’s average behavior or choosing an effective policy prescription.