Archive for the ‘Socio-technical systems’ Category
Like many libertarians, I feel that small government is an eminently practical rule of thumb, proven by hundreds (if not thousands) of years of observation. So when Rafe recently posted in response to a presentation that David Cameron made at TED, it got my dander up. Calling the small government philosophy “… ivory tower idealism” felt like a blatant misrepresentation. But then I wondered: maybe Rafe had formed the honest (though mistaken) impression that small government advocates think that reducing government functions will lead to some sort of emergent-order utopia.
I don’t know exactly what Cameron said because I can’t find a public video archive. This Guardian account indicates that he mostly hung platitudes on the scaffolding of giving people more choice and transparency. Choice is a big part of small government, but I thought it would be worth outlining what I think is the non-politician’s version of the libertarian small government ideology. It’s far from ivory tower. More like back alley.
It’s based on two observations: (1) local knowledge is important to good decision making and (2) concentration of power leads to abuses. I think few students of political history and organizational behavior would argue against these points, so I won’t detail them here. However, if anyone honestly thinks they are in doubt, I’d be happy to cover them in a subsequent post.
So, any time society assigns a role to government, it incurs the costs of (1) and (2). These costs tend to increase over time and as a situation departs from the ideal future path. So the expected net present value of these costs can be substantial. Libertarians therefore conclude that the benefits that the government brings to a role should, as a general rule, be quite large before we even consider it as an option. Notice that this does not imply no government at all. Rather, it implies we should use government sparingly.
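The net-present-value point is easy to make concrete with a toy calculation (all figures below are hypothetical, chosen only to illustrate the mechanics): when a program’s costs grow faster than the rate at which we discount the future, the discounted lifetime cost dwarfs the sticker price.

```python
# Hypothetical illustration: discounted lifetime cost of a program whose
# annual costs grow over time. All numbers are made up for the example.

def npv_of_costs(initial_cost, growth_rate, discount_rate, years):
    """Sum of discounted annual costs, with costs growing each year."""
    total = 0.0
    for t in range(years):
        cost_t = initial_cost * (1 + growth_rate) ** t
        total += cost_t / (1 + discount_rate) ** t
    return total

# A program starting at $1B/year, with costs growing 7%/year,
# discounted at 3%/year, over 30 years costs roughly $55B in NPV terms,
# not the $30B that naive "30 years at $1B" accounting suggests.
print(npv_of_costs(1.0, 0.07, 0.03, 30))
```

The point of the sketch is only that cost growth compounding against the discount rate is what makes the expected NPV of (1) and (2) substantial, even when the first-year cost looks modest.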
The repeated pattern observed by libertarians goes like this. A problem arises. Everyone (even libertarians) agrees that it is a problem. Progressives push through a government program to address it. Initially, the program somewhat ameliorates the problem. However, the problem turns out to be trickier than first believed, so the benefits are usually not as great as expected. Over time, the problem evolves and adapts, further eroding program benefits. The government program evolves and adapts too, but more to perpetuate its own survival than to address the problem.
So we are left with much lower benefits than forecast and significant unforeseen costs (in the form of an ever-living, mostly useless program). Libertarians conclude that in many cases the “cure” is worse than the disease. Not that it doesn’t suck having the disease. The irony of course is that the progressives then identify the results of an old government program as a new problem that requires… another government program (cough, cough, government intervention in financial markets, cough, cough).
Of course, some illnesses are actually bad enough that the (painful) cure is better than the disease. In those cases, bring on the government program. But let’s be realistic about the long term benefits and costs.
Last night, I was lucky enough to get a personal tour of the California Academy of Sciences from Dr. Brian Fisher, a taxonomist specializing in ants. He’s doing some amazing work trying to help Madagascar prioritize and save the 10% of native rainforest they have left. It’s reminiscent of Willie Smits’ work in Borneo, though focused on preservation rather than revitalization. But it has the same feel of getting the local people committed to managing their own ecological resources.
You can donate here (I gave them $500), but make sure to write “For the Fisher Madagascar Project” in the “Comments” field. Otherwise, you’ll be paying for the building lights. Go ahead and leave the “Allocation” field at the default, “Campaign for a New Academy”. Update: Forgot to mention that if you donate $2,000 they’ll name a new species after you or whomever you designate.
It’s hard to do justice to what I saw last night in a blog post, but here goes…
[EDITED 05/08/2009: see here] The majority of people I’ve talked to like the idea of revolutionizing angel funding. Among the skeptical minority, there are several common objections. Perhaps the weakest is that individual angels can pick winners at the seed stage.
Now, those who make this objection usually don’t state it that bluntly. They might say that investors need technical expertise to evaluate the feasibility of a technology, industry expertise to evaluate the likelihood of demand materializing, or business expertise to evaluate the plausibility of the revenue model. But whatever the detailed form of the assertion, it is predicated on angels possessing specialized knowledge that allows them to reliably predict the future success of the seed-stage companies in which they invest.
It should be no surprise to readers that I find this assertion hard to defend. Given the difficulty in principle of predicting the future state of a complex system from its initial state, one should produce very strong evidence to make such a claim, and I haven’t seen any from proponents of angels’ abilities. Moreover, the general evidence on humans’ ability to predict these sorts of outcomes makes it unlikely that any given person has a significant degree of forecasting skill in this area.
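The in-principle difficulty shows up even in trivially simple deterministic systems. A sketch using the logistic map, a standard toy chaotic system (chosen purely for illustration, nothing startup-specific about it): two initial states differing by one part in a billion become completely uncorrelated within a few dozen steps, so knowing the initial state to nine decimal places buys essentially no forecasting power.

```python
# Sensitivity to initial conditions in the logistic map x' = r*x*(1-x).
# Two nearly identical starting states diverge to order-1 differences.

def diverge(x0, y0, r=4.0, steps=50):
    """Return the maximum gap between two logistic-map trajectories."""
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

# A 1e-9 perturbation in the initial state produces a gap of order 1:
print(diverge(0.4, 0.4 + 1e-9))
```

A seed-stage company is vastly messier than this one-variable map, which is the point: if prediction fails here, the burden of proof on claims of reliable seed-stage picking is heavy.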
Via Tyler Cowen at Marginal Revolution, an excellent article in Wired about how one formula, embodying one assumption, catalyzed the meltdown. I recommend you read it and ponder it. There are many useful lessons for modeling complex systems in general.
However, I will summarize for those of you short on time. A fundamental problem in securitization is figuring out how the different components of a security are related. Think of it as measuring how well the components are diversified. The more independent the components, the less risk embodied in the security. Thus AAA-rated tranches of mortgage-backed securities are supposed to be very safe because their components are supposed to be highly independent.
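The arithmetic behind that intuition is simple. For n components with equal variance σ² and pairwise correlation ρ, the variance of an equal-weighted pool is σ²/n · (1 + (n−1)ρ): with ρ = 0, risk shrinks toward zero as n grows, but with ρ near 1, pooling buys almost nothing. A quick sketch (illustrative numbers only):

```python
# Variance of an equal-weighted pool of n components, each with variance
# sigma2 and pairwise correlation rho. Shows why the independence
# assumption matters so much to the "safety" of a senior tranche.

def pool_variance(n, rho, sigma2=1.0):
    """Variance of the average of n equally correlated components."""
    return sigma2 / n * (1 + (n - 1) * rho)

# 1000 mortgages, assumed independent vs. actually highly correlated:
print(pool_variance(1000, 0.0))  # 0.001 -- diversification works
print(pool_variance(1000, 0.9))  # 0.9001 -- diversification is an illusion
```

So the entire AAA story rests on the estimate of ρ, which is exactly where the formula discussed below went wrong.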
A Chinese mathematician named David X. Li had an insight: you don’t have to analyze the dependencies directly; you just have to observe the correlations in the market prices of the components. Then you can compute really tight-sounding confidence intervals on the correlations of the various components, because you have all this market data. Of course, the market can’t take into account what it doesn’t understand. So you see a bunch of 25-sigma events. At least, your model says they are 25-sigma. Oops!
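To put “25-sigma” in perspective: under the normal-distribution assumptions baked into these models, the probability of a single 25-sigma event can be computed directly with nothing but the standard library, and it is so small that observing even one means the model, not the world, is broken.

```python
import math

def normal_tail(k):
    """P(X > k) for a standard normal X, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

print(normal_tail(3))   # ~0.00135, about 1 in 740 -- happens routinely
print(normal_tail(25))  # ~3e-138 -- effectively impossible
```

Seeing “a bunch” of such events doesn’t mean the market got astronomically unlucky; it means the estimated correlations (and the tight-sounding confidence intervals around them) were wrong.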
Now that I’ve had a week to digest what I saw at the summit, I have some thoughts on the most likely path we’ll take to the singularity. From an absolute perspective, this path isn’t very likely because there are a lot of different ways to get there (or not get there). But given what I’ve seen so far, I assign this path the highest concentration of the admittedly diffuse conditional probability mass.
As most of you know, one of the commonly proposed paths to The Singularity is the development of artificial general intelligence (AGI). As you can read in my rundown of the Singularity Summit, speakers showcased a lot of progress in hardware substrate and software infrastructure, but no significant conceptual advances in implementing executive function in software.
Absence of evidence isn’t necessarily evidence of absence, but I believe that if anyone were making headway on this problem, the chances that someone at the summit would have alluded to it are high. Therefore, I predict that the first being with substantially higher g than current humans is much more likely to be an augmented human than an AGI [Edit: more thoughts on electronically enhancing humans here].
I attended the Singularity Summit today. Overall, it was worth the time spent. I did not attend the workshop on Friday because it didn’t look substantive when I reviewed the program. Today, I spoke to several people who were there and they confirmed my prediction. I took 7 pages of notes at the summit and hope to have some insightful synthesis of the material in a few days [Edit: first thought here, more here]. In the meantime, here is a short review of the talks.
As most of you already know, I am an anthropogenic global warming skeptic, aka “denier”. Well, a new paper by the Federal Reserve Bank of Minneapolis has turned me into a credit crunch skeptic too.
The mainstream narrative on why we need a bailout is that credit is “frozen”. We can’t just let the financial sector sort itself out because it provides the credit “grease” that lubricates the rest of the economy. The graphs in this paper make it pretty clear that the wheels of Main Street have plenty of grease. So it looks to me like the bailout is corporate welfare plain and simple. It also means that Paulson and Bernanke talking about how bad things are to justify the bailout may have actually exacerbated any real recession by magnifying the psychological salience of the crisis.
As we saw in Act I and Act II, the current financial crisis was enabled by government interference in the housing and mortgage markets, then initiated by Wall Street’s willful blindness to systemic risk in the MBS market. Now we are observing the government’s flailing response.
First they bail out Bear Stearns. Then they let Lehman go bankrupt. But AIG gets a lifeline. On to a $700B bailout intended to purchase toxic MBSs. And most recently, they force several probably healthy banks to absorb $250B in government investment. Along the way, there were a bunch of changes to FDIC regulations and a see-sawing stock market.
You might be asking yourself, what the heck is going on here? The reason for all the flailing is that the government is attempting to implement a command and control solution to an extremely distributed problem.
When the bailout passed, I first thought this post was moot. But then I reconsidered. There’s still plenty of time to affect the implementation and several lessons to be learned. Also, when I’m pissed off, it’s nice to know that I have a good reason.
In Act I, we saw how government meddling overheated the housing and mortgage markets. Now we’ll see how Wall Street took advantage of this opportunity and also apportion some blame to ourselves.