Archive for September 2011
Several people have pointed me to Dan Frommer’s post on Moneyball for Tech Startups, noting that “Moneyball” is actually a pretty good summary of our approach to seed-stage investing at RSCM. Steve Bennet, one of our advisors and investors, went so far as to kindly make this point publicly on his blog.
Regular readers already know that I’ve done a fair bit of Moneyball-type analysis using the available evidence for technology startups (see here, here, here, here, here, and here). But I thought I’d take this opportunity to make the analogy explicit.
I’d like to start by pointing out two specific elements of Moneyball, one that relates directly to technology startups and one that relates only indirectly:
- Don’t trust your gut feel, directly related. There’s a quote in the movie where Beane says, “Your gut makes mistakes and makes them all the time.” This is as true of tech startups as it is of baseball prospects. In fact, there’s been a lot of research on gut feel (known in academic circles as “expert clinical judgement”). I gave a fairly detailed account of the research in this post, but here’s the summary. Expert judgement never beats a statistical model built on a substantial data set. It rarely even beats a simple checklist, and then only in cases where the expert sees thousands of examples and gets feedback on most of the outcomes. Even when it comes to evaluating people, gut feel just doesn’t work. Unstructured interviews are the worst predictor of job performance.
- Use a “player” rating algorithm, indirectly related. In Moneyball, Beane advocates basing personnel decisions on statistical analyses of player performance. Of course, the typical baseball player has hundreds to thousands of plate appearances, each recorded in minute detail. A typical tech startup founder has 0-3 plate appearances, recorded at only the highest level. Moreover, with startups, the top 10% of the startups account for about 80% of all the returns. I’m not a baseball stats guy, but I highly doubt the top 10% of players account for 80% of the offense in the Major Leagues. So you’ve got much less data and much more variance with startups. Any “player” rating system will therefore be much worse.
Despite the difficulty of constructing a founder rating algorithm, we can follow the general prescription of trying to find bargains. Don’t invest in “pedigreed” founders, with startups in hot sectors, that have lots of “social proof”, located in the Bay Area. Everyone wants to invest in those companies. So, as we saw in Angel Gate, valuations in these deals go way up. Instead, invest in a wide range of founders, in a wide range of sectors, before their startups have much social proof, across the entire US. Undoubtedly, these startups have a lower chance of succeeding. But the difference is more than made up for by lower valuations. Therefore, achieving better returns is simply a matter of adequate diversification, as I’ve demonstrated before.
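The diversification argument above is easy to sanity-check with a toy Monte Carlo simulation. The numbers below are illustrative assumptions, not RSCM data: I assume roughly 10% of startups return 20x and the rest return about 0.5x, which reproduces the “top 10% account for ~80% of returns” skew. The point of the sketch is just that small portfolios are lotteries while large ones reliably converge on the (attractive) mean:

```python
import random

random.seed(0)

def simulate_startup():
    # Toy return distribution (illustrative assumption, not real data):
    # ~10% of startups return 20x, the rest average ~0.5x.
    # Top-decile share of total returns: 2.0 / 2.45 ~ 82%.
    if random.random() < 0.10:
        return 20.0   # rare big winner
    return 0.5        # modest or failed outcome

def portfolio_multiple(n_startups):
    # Equal-weighted average return multiple across a portfolio.
    return sum(simulate_startup() for _ in range(n_startups)) / n_startups

# Small portfolios are lotteries; large ones converge on the mean (~2.45x).
for n in (5, 50, 500):
    trials = [portfolio_multiple(n) for _ in range(2000)]
    mean = sum(trials) / len(trials)
    spread = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
    print(f"n={n:4d}  mean={mean:.2f}x  stdev={spread:.2f}x")
```

With these assumed parameters, the standard deviation of the portfolio multiple shrinks roughly as the square root of the portfolio size, which is the whole case for “adequate diversification.”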
Now, to balance out the disadvantage in rating “players”, startup investors have an advantage over baseball managers. The average return of pure seed stage angel deals is already plenty high, perhaps over 40% IRR in the US according to my calculation. You don’t need to beat the market. In fact, contrary to popular belief, you don’t even need to try to predict “homerun” startups. I’ve shown you’d still crush top quartile VC returns even if you don’t get anything approaching a homerun. Systematic base hits win the game.
But how do you pick seed stage startups? Well, the good news from the research on gut feel is that experts are actually pretty good at identifying important variables and predicting whether they positively or negatively affect the outcome. They just suck at combining lots of variables into an overall judgement. So we went out and talked to angels and VCs. Then, based on the most commonly cited desirable characteristics, we built a simple checklist model for how to value seed-stage startups.
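The mechanics of a checklist model are simple enough to sketch in a few lines. To be clear, the criteria and weights below are hypothetical placeholders I made up for illustration, not RSCM’s actual algorithm; the structural point is that the model, not the expert’s gut, does the combining:

```python
# Hypothetical checklist (illustrative criteria and weights only,
# NOT RSCM's actual model).
CHECKLIST = {
    "has_working_prototype":    2.0,
    "founders_full_time":       2.0,
    "prior_startup_experience": 1.0,
    "addressable_market_large": 1.0,
    "founders_complementary":   1.0,
}

def score_startup(answers):
    """Mechanically combine yes/no answers into one score.

    No gut-feel override: each criterion contributes its fixed
    weight if (and only if) the answer is yes.
    """
    return sum(weight for item, weight in CHECKLIST.items()
               if answers.get(item, False))

example = {
    "has_working_prototype": True,
    "founders_full_time": True,
    "prior_startup_experience": False,
    "addressable_market_large": True,
    "founders_complementary": False,
}
print(score_startup(example), "/", sum(CHECKLIST.values()))  # 5.0 / 7.0
```

Because the combination rule is explicit, every scored deal becomes a data point you can later use to recalibrate the weights, which is exactly the refinement loop described below.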
We’ve made the software that implements our model publicly available so anybody can try it out [Edit 3/16/2013: we took down the Web app in Jan 2013 because it wasn’t getting enough hits anymore to justify maintaining it. We continue to use the algorithm internally as a spreadsheet app]. We’ve calibrated it against a modest number of deals. I’ll be the first to admit that this model is currently fairly crude. But the great thing about an explicit model is that you can systematically measure results and refine it over time. The even better thing about an explicit model is you can automate it, so you can construct a big enough portfolio.
That’s how we’re doing Moneyball for tech startups.