Possible Insight

Archive for the ‘Economics’ Category

A Meta-Startup Manifesto

As most of you know, RSCM is part of the group offering every TechStars company an additional $100K investment.  When we first started talking to the TechStars folks, my immediate reaction was “kindred spirits”.  This was also my reaction when we first spoke with Adeo Ressi of TheFunded and Founder Institute.  Recently, I’ve been trying to put my finger on why I think of us all as similar.  I think I’ve got it.

We’re all meta-startups—startups working to improve the process of launching startups.

Up until a few years ago, founding a tech startup usually followed the same haphazard process it had for decades.  Founders were pretty much on their own to thrash around and figure out how to test their innovations in the marketplace.  It’s how I did my first startup in 1993.  It’s how I did my last startup in 2004.  Same with Dave Lambert, my partner at RSCM, with his first startup in 1993 and his last in 2003.  It’s what we saw all our entrepreneurial friends do.

This “process” has a high bar for founders to clear and a low success rate, limiting innovation.  Now, the Internet increased the speed at which the haphazard process can execute.  So the situation has modestly improved over the last 15 years.  But what we want is fundamental improvement.  What we want is the equivalent of an Amazon, Google, or Facebook to change the rules of the startup game.

High-volume, high-speed incubators like Y Combinator (2005), TechStars (2006), and Founder Institute (2009) are a great leap forward in discovering a more systematic startup process.  They’re paving the way toward more startups, higher success rates, and dramatically more innovation.  Many of the improvements they pioneer should diffuse out into the startup community.  So they’ll enable startups in general, not just the ones in their programs, to launch more smoothly.

Our role in this revolution is eliminating the huge seed stage funding roadblock.  Now, with all the press about angels and superangels funding startups in Silicon Valley and New York, you may think getting seed funding is easy.  But I’ve run the numbers.  Seed funding is actually down 40% from its peak in 2005.  I’m pretty sure the cost of doing a startup hasn’t dropped 40%.  And I’m quite sure that the number of quality founding teams hasn’t dropped 40%.

Startups succeed by challenging conventional wisdom.  As a meta-startup, we are no different.  Luckily, our job is easier than most tech startups’.  They have to challenge the conventional wisdom about the future of technology.  There just isn’t much data to go on.  We have to challenge the conventional wisdom about funding startups.  In our case, there’s a lot of data that existing investors are just ignoring:

  • The seed stage generates very high returns.  There’s a good dataset of angel investments from the Kauffman Foundation, mostly from 1998 and later.  Depending on your selection criteria, returns are at least 30% and possibly over 40%.  Calculations here and here.
  • You can’t pick winners at the seed stage.  This is precisely the kind of prediction task where “gut feel” or “expert judgement” performs poorly.  Human predictions almost never beat even a simple checklist.  Review of evidence here.
  • Interviewing founding teams is a poor indicator of their ability.  There’s been an incredible amount of research trying to figure out how to predict who will be good at which job tasks.  Unstructured interviews are the worst predictor.  Highly structured interviews or matching of past experience to current requirements does better.  Review of evidence here.
  • Grand slams aren’t necessary to achieve high returns.  You don’t have to do anything special to make sure you get the “best” deals.  First, nobody has shown they can identify such startups before they release a product.  Second, even if all you get is base hits, you’ll still have returns of about 30%.  Calculation here.
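
To see why base hits are enough, here’s a quick back-of-the-envelope sketch in Python (my illustrative numbers, not the linked calculation): the annualized return implied by a cash-on-cash multiple over a typical seed holding period.

    def irr_from_multiple(multiple, years):
        """Annualized return implied by a cash-on-cash multiple held for `years`."""
        return multiple ** (1.0 / years) - 1.0

    # Illustrative base-hit scenarios: hypothetical multiples and holding periods.
    for multiple, years in [(2.0, 3), (3.0, 4), (5.0, 5)]:
        print(f"{multiple:.0f}x over {years} years -> "
              f"{irr_from_multiple(multiple, years):.0%} IRR")

A 3x exit over four years already works out to roughly a 32% annualized return, which is in the neighborhood of the figure above.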

The conclusion from this evidence is pretty straightforward.  But it implies a seed-stage funding approach very different from what we see today.  A classic opportunity for a startup (or meta-startup).

If the seed stage has high average returns but you can’t pick specific winners, basic financial theory says to use a portfolio approach.  You want enough investments to have a reasonably high confidence of achieving the average return.  I’ve done the calculation, and my conclusion is that you want to build a portfolio of hundreds of seed-stage investments.
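
To make the portfolio logic concrete, here’s a minimal Monte Carlo sketch (with made-up outcome probabilities and multiples, not our actual data) showing how the spread of portfolio outcomes narrows as the number of investments grows:

    import random

    # Made-up outcome distribution for a single seed investment:
    # (cash-on-cash multiple, probability). Most lose everything,
    # a few return big. The mean is about 2.2x.
    OUTCOMES = [(0.0, 0.50), (1.0, 0.20), (3.0, 0.20), (10.0, 0.08), (30.0, 0.02)]

    def portfolio_multiple(n):
        """Average multiple across n equal-sized investments."""
        total = 0.0
        for _ in range(n):
            r, cum = random.random(), 0.0
            for multiple, prob in OUTCOMES:
                cum += prob
                if r <= cum:
                    total += multiple
                    break
        return total / n

    def spread(n, trials=2000):
        """Approximate 5th and 95th percentile portfolio multiples."""
        results = sorted(portfolio_multiple(n) for _ in range(trials))
        return results[trials // 20], results[-(trials // 20)]

    for n in (10, 50, 200, 500):
        lo, hi = spread(n)
        print(f"{n:4d} investments: 5th-95th percentile multiple {lo:.1f}x to {hi:.1f}x")

With only ten investments you can easily end up below 1x; with a few hundred, the portfolio reliably lands near the distribution’s mean.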

That may sound like an unreasonable number.  But if you don’t have to spend hours doing multiple interviews of every founding team, you can dramatically streamline the process.  In fact, we have a demo version of our software that can produce a pre-money valuation from an online application in a few seconds [Edit 3/16/2013: we took down the Web app in Jan 2013 because it wasn’t getting enough hits anymore to justify maintaining it.  We continue to use the algorithm internally as a spreadsheet app]. After that, it’s a  matter of back-office automation.  Luckily, previous generations of startups have made such automation pretty easy.

Just think about how these conclusions should lead to a much better environment for seed stage startups.  Investors will actually decrease risk by making more investments.  The same process that allows them to efficiently build a large portfolio means they can give a much faster response to entrepreneurs.  Lower transaction costs mean startups will see more of the money investors put on the table.  As always, innovation is a win-win.  And in this case, we’re innovating in the process of funding innovation.  A meta-innovation, if you will.

Even better, the average person on the street wins too.  Because, as I’ve shown before here and here, increasing the rate of startup formation increases the rate of economic growth.  So if meta-startups can permanently increase this rate, the returns to society as a whole will quickly compound.

Written by Kevin

October 11, 2011 at 12:37 am

Posted in Economics, Innovation

Moneyball for Tech Startups: Kevin’s Remix

Several people have pointed me to Dan Frommer’s post on Moneyball for Tech Startups, noting that “Moneyball” is actually a pretty good summary of our approach to seed-stage investing at RSCM.  Steve Bennet, one of our advisors and investors, went so far as to kindly make this point publicly on his blog.

Regular readers already know that I’ve done a fair bit of Moneyball-type analysis using the available evidence for technology startups (see here, here, here, here, here, and here).  But I thought I’d take this opportunity to make the analogy explicit.

I’d like to start by pointing out two specific elements of Moneyball, one that relates directly to technology startups and one that relates only indirectly:

  • Don’t trust your gut feel, directly related.  There’s a quote in the movie where Beane says, “Your gut makes mistakes and makes them all the time.”  This is as true of tech startups as it is of baseball prospects.  In fact, there’s been a lot of research on gut feel (known in academic circles as “expert clinical judgement”).  I gave a fairly detailed account of the research in this post, but here’s the summary.  Expert judgement never beats a statistical model built on a substantial data set.  It rarely even beats a simple checklist, and then only in cases where the expert sees thousands of examples and gets feedback on most of the outcomes.  Even when it comes to evaluating people, gut feel just doesn’t work.  Unstructured interviews are the worst predictor of job performance.
  • Use a “player” rating algorithm, indirectly related.  In Moneyball, Beane advocates basing personnel decisions on statistical analyses of player performance.  Of course, the typical baseball player has hundreds to thousands of plate appearances, each recorded in minute detail.  A typical tech startup founder has 0-3 plate appearances, recorded at only the highest level.  Moreover, with startups, the top 10% of startups account for about 80% of all the returns.  I’m not a baseball stats guy, but I highly doubt the top 10% of players account for 80% of the offense in the Major Leagues.  So you’ve got much less data and much more variance with startups.  Any “player” rating system will therefore be much worse.

Despite the difficulty of constructing a founder rating algorithm, we can follow the general prescription of trying to find bargains.  Don’t chase “pedigreed” founders with startups in hot sectors, lots of “social proof”, and Bay Area addresses.  Everyone wants to invest in those companies.  So, as we saw in Angel Gate, valuations in these deals go way up.  Instead, invest in a wide range of founders, in a wide range of sectors, before their startups have much social proof, across the entire US.  Undoubtedly, these startups have a lower chance of succeeding.  But the difference is more than made up for by lower valuations.  Therefore, achieving better returns is simply a matter of adequate diversification, as I’ve demonstrated before.
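
Here’s a stylized example of the bargain-hunting arithmetic, with purely hypothetical numbers (a toy calculation, not our model) that ignores dilution and follow-on rounds:

    def expected_multiple(p_success, exit_value, investment, post_money):
        """Expected cash-on-cash multiple on a seed check, ignoring dilution."""
        ownership = investment / post_money
        return p_success * exit_value * ownership / investment

    # A hot, social-proofed deal at a rich valuation...
    hot = expected_multiple(p_success=0.20, exit_value=50e6,
                            investment=250e3, post_money=10e6)
    # ...versus an overlooked deal at a modest valuation.
    overlooked = expected_multiple(p_success=0.12, exit_value=50e6,
                                   investment=250e3, post_money=3e6)
    print(f"hot deal:        {hot:.1f}x expected")
    print(f"overlooked deal: {overlooked:.1f}x expected")

Even though the overlooked deal is assumed to succeed only 12% of the time versus 20%, the cheaper entry more than compensates: 2.0x expected versus 1.0x.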

Now, to balance out the disadvantage in rating “players”, startup investors have an advantage over baseball managers.  The average return of pure seed stage angel deals is already plenty high, perhaps over 40% IRR in the US according to my calculation.  You don’t need to beat the market.  In fact, contrary to popular belief, you don’t even need to try and predict “homerun” startups.  I’ve shown you’d still crush top quartile VC returns even if you don’t get anything approaching a homerun.  Systematic base hits win the game.

But how do you pick seed stage startups?  Well, the good news from the research on gut feel is that experts are actually pretty good at identifying important variables and predicting whether they positively or negatively affect the outcome.  They just suck at combining lots of variables into an overall judgement.  So we went out and talked to angels and VCs.  Then, based on the most commonly cited desirable characteristics, we built a simple checklist model for how to value seed-stage startups.
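
To give a flavor of what a checklist model can look like, here’s a toy sketch with hypothetical factors and weights (not our actual algorithm or calibration): each commonly cited characteristic adds a fixed increment to a base pre-money valuation.

    # Toy checklist valuation. Factors and dollar weights are hypothetical,
    # chosen only to illustrate the structure of a checklist model.
    BASE_VALUATION = 250_000

    CHECKLIST = {
        "founders_have_prior_startup_experience": 200_000,
        "technical_cofounder_on_team":            200_000,
        "working_prototype":                      150_000,
        "evidence_of_customer_demand":            150_000,
        "founders_working_full_time":             100_000,
    }

    def premoney_valuation(answers):
        """answers maps each checklist item to True/False from the application."""
        return BASE_VALUATION + sum(
            weight for item, weight in CHECKLIST.items() if answers.get(item, False)
        )

    application = {
        "founders_have_prior_startup_experience": True,
        "technical_cofounder_on_team": True,
        "working_prototype": True,
        "evidence_of_customer_demand": False,
        "founders_working_full_time": True,
    }
    print(f"pre-money valuation: ${premoney_valuation(application):,}")

The model combines the variables mechanically, which is exactly the part the research says humans do poorly.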

We’ve made the software that implements our model publicly available so anybody can try it out [Edit 3/16/2013: we took down the Web app in Jan 2013 because it wasn’t getting enough hits anymore to justify maintaining it.  We continue to use the algorithm internally as a spreadsheet app].  We’ve calibrated it against a modest number of deals.  I’ll be the first to admit that this model is currently fairly crude.  But the great thing about an explicit model is that you can systematically measure results and refine it over time.  The even better thing about an explicit model is you can automate it, so you can construct a big enough portfolio.

That’s how we’re doing Moneyball for tech startups.

Written by Kevin

September 27, 2011 at 10:56 pm

Why We're Smart

Most people believe humans evolved intelligence because using tools was an advantage.  However, I believe tool use was secondary.  Group cooperation was the primary advantage conferred by intelligence.  You see, cooperation is fundamentally difficult.

This insight coalesced when I was reading about Mark Satterthwaite, an economist at Northwestern’s Kellogg School of Management.  He’s famous for two important impossibility theorems: (1) the Myerson-Satterthwaite Theorem and (2) the Gibbard-Satterthwaite Theorem.

Informally, (1) says that there is no bargaining mechanism that can guarantee a buyer and seller will trade if there are potential gains from trade, while  (2) says that there is no voting mechanism for determining a single winner that can induce people to vote their true preferences.  In both cases, the reason for the impossibility is that people have incentives to hide their actual values to achieve a strategic advantage.
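
To make (1) less abstract, here’s a small simulation of the textbook split-the-difference double auction (the Chatterjee-Samuelson mechanism, a standard illustration; the theorem says no alternative mechanism fixes the problem).  With values and costs uniform on [0, 1], the equilibrium bid is (2/3)v + 1/12 and the equilibrium ask is (2/3)c + 1/4, so trade only happens when the buyer’s value exceeds the seller’s cost by at least 1/4:

    import random

    def share_of_good_trades_realized(trials=200_000):
        """Fraction of mutually beneficial trades that actually occur in the
        linear equilibrium of the split-the-difference double auction."""
        efficient, realized = 0, 0
        for _ in range(trials):
            v, c = random.random(), random.random()   # buyer value, seller cost
            if v > c:                                  # gains from trade exist
                efficient += 1
                bid = 2 / 3 * v + 1 / 12
                ask = 2 / 3 * c + 1 / 4
                if bid >= ask:                         # trade actually happens
                    realized += 1
        return realized / efficient

    print(f"share of mutually beneficial trades realized: "
          f"{share_of_good_trades_realized():.0%}")

Roughly 44% of the mutually beneficial trades never happen, because both sides shade their reports to grab a better price.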

Add these to the Prisoner’s Dilemma and Arrow’s Impossibility Theorem on the list of fundamental barriers to cooperation (Holmstrom’s Theorem is another good one; it explains why you can’t get everyone in a firm to exert maximum effort).  By “fundamental”, I mean there is no general solution.  So the evolutionary process cannot just discover a mechanism that guarantees cooperation when it is efficient.  There will always be the opportunity for individuals to subvert the cooperative process to promote themselves, thus creating selection pressure against the cooperation mechanism.

(Note that there is a hack: make sure each individual has the same genes.  This is how multicellular and hive organisms get around the problem.  But the existence of cancer in the former case and the reduced genetic diversity in the latter case make them limited solutions.)

To achieve extensive cooperation in large groups, individuals need the ability to model the strategic situation, estimate the payoffs to various group members, and continuously assess what strategies other members may be playing. On top of that, there’s an arms race between deceiving and detecting deception.  It’s the old, “I know that you know that I know…” schtick.  The smarter you are, the further you can compute this series.

Bottom line: the impossibility theorems mean the only way to achieve cooperation is to have the machinery in place to make detailed case-by-case determinations.  We’ve talked about the Dunbar Number before: the maximum size of primate groups is determined in large part by a species’ average neocortical volume.   I claim you need to be smarter to process more complex strategic configurations and maintain models of more individuals’ goals.

If I’m right, there are two interesting implications.  First, politics will be with us forever.  No magical technology or philosophical enlightenment will eliminate it.  Second, if we ever encounter intelligent aliens, they’ll have politics too.  Nothing else about them may be recognizable, but they’ll have analogs of haggling over price and building political coalitions.

Written by Kevin

May 24, 2011 at 10:54 am

Posted in Economics

Ratcheting State and Local Taxes

Yesterday, Mark Perry at Carpe Diem looked at state and local tax revenues.  Then Don Boudreaux at Cafe Hayek observed they should be adjusted for inflation.  Given my previous analysis (here, here, and here), I thought I’d chime in and adjust them for population*:

Over the entire period, real per-capita revenues rose 22%.  On the graph, you can clearly see the 2001-2002 and 2008-2009 recessions. If we go peak to peak in the last business cycle, 2001 to 2007, the growth was 13%.  If we go trough to trough, 2002-2009, the growth was 9%.
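
For anyone who wants to reproduce this kind of adjustment, the mechanics are just two divisions.  Here’s a Python sketch with made-up figures (the actual data sources are listed in the footnote at the end of this post):

    def real_per_capita(nominal_revenue, cpi, population, base_cpi):
        """Revenue per person, expressed in base-year dollars."""
        return nominal_revenue / (cpi / base_cpi) / population

    # Made-up example comparing two years in base-year dollars.
    BASE_CPI = 172.2
    early = real_per_capita(nominal_revenue=9.0e11, cpi=177.1,
                            population=285e6, base_cpi=BASE_CPI)
    late = real_per_capita(nominal_revenue=1.3e12, cpi=215.3,
                           population=307e6, base_cpi=BASE_CPI)
    print(f"growth in real per-capita revenue: {late / early - 1:.0%}")

The percentages quoted above come from exactly this kind of calculation applied to the Census revenue, population, and CPI-U series.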

Notice that even from the peak of the previous business cycle in 2001 to the trough of the current business cycle in 2009, real per-capita revenues were still up 6%.  So the recession is clearly not the proximal cause of the state and local budget problems.

State and local government agencies have more inflation-adjusted dollars per person today than they did at the peak of the boom in 2000.

Their revenues are consistently growing. The problem is that they can’t get their damned spending under control!

* State and local government tax revenue from this Census Bureau data.  2000-2009 population estimates from here.  1990-1999 from here.  Ironically, 2010 population data is not yet available so I couldn’t generate a per-capita datapoint for 2010.  I used the CPI-U series for inflation.  I did all the analysis in this spreadsheet file.

Written by Kevin

March 30, 2011 at 11:31 am

Posted in Economics, Government

My Wife Plays a Labor Economist

My wife has an Art History degree from Princeton.  However, she often has excellent insights into economics. So I think either (a) one of the reasons she fell in love with me is that she’s a latent economics geek or (b) she loves me so much that she actually pays enough attention to my economics ramblings that some of it rubs off.

About a year ago, she came up with a solution to the “employee union problem”.  With the recent public employee union showdown in Wisconsin, I thought I should share it with you.  What’s particularly ironic is that she’s out in front with some pretty serious economists in thinking about this issue.  For example, check out this Wall Street Journal article by Robert Barro, a Harvard professor and author of my undergraduate macroeconomics textbook.

Most people think that unions are simply workers exercising their rights to band together.  Actually, this isn’t the case.  As Barro points out, unions are monopolies that are specially exempted from anti-trust law.  Moreover, the government actually enforces their monopolies.

The key law here is the National Labor Relations Act (NLRA) as administered by the National Labor Relations Board (NLRB).  The NLRB protects the rights of employees to collectively “improve working terms and conditions” regardless of whether they are part of a union.  In addition, the NLRB can force all employees at a private firm to join a union, or at least pay union dues.  This process is called “certification”.  Once a union is certified, it has the presumptive right to negotiate on behalf of all employees at the firm, and collect dues from them, for one year.

After that, it is possible to “decertify” a union.  However, the union usually has enough time to consolidate its power and then use it to keep the rank and file in line.  When you have monopoly power, you have the opportunity to abuse it.  In fact, there’s a whole government agency devoted to investigating such abuses: the Office of Labor-Management Standards.  UnionFacts.com has helpfully collated the related crime statistics for 2001-2005.  Assuming that only a fraction of abusive behavior faces prosecution, these statistics are pretty sobering.

Of course, federal, state, and local government employees are exempt from the NLRA.  You might think this exemption is a good thing for limiting union power.  However, what it means in practice is that each level of government is free to offer special treatment to its employees’ unions without oversight from the NLRB.  As you can imagine, the politicians and public employee union leaders get nice and cozy.  The politicians give the unions a sweet deal and the unions give politicians their political support.  Everybody wins.  Except ordinary citizens.

Personally, my solution would have been to completely eliminate the government-enforced monopoly of unions.  However, I admit this blanket approach could swing power too far towards management in some industries.  My wife’s solution is better.  She says the unions can get monopolies, but only for a set period of time.  Say 3 or 5 years.

From an economics standpoint, this approach is really insightful.  First, it removes union leaders’ incentive to form a union just to accumulate power.  It will all go away pretty soon.  Second, it prevents originally well meaning union leaders from getting corrupted over time.  Pretty soon they’re ordinary workers again.  Third, it does provide help to those workers who feel management is truly abusing them.  They can form a union and get better treatment.  When the union’s existence terminates, they can still bargain collectively, just not exclusively.  If management tries to screw them again, they will have the example of how to work together.  An economist would call this “moving to a better equilibrium”.

I’ll admit this solution isn’t perfect.  Some management abuses will slip through the cracks.  But I’m pretty confident they’ll be less extensive than the current union abuses.  It’s also probably better than my original thought of banning unions altogether.  And there’s some small chance my wife’s approach would actually be politically feasible.  Nice work, Jane!

Written by Kevin

March 24, 2011 at 3:15 pm

Posted in Economics

State Budget Redux

You may recall my two posts on the California budget back in May 2009.  I just haven’t had the heart to dive back into this issue again, even though it’s obviously timely.  However, I thought it was worth mentioning this article in Reason Magazine highlighted in one of today’s Coyote Blog posts.

Funnily enough, the article was published in the May 2009 issue.  So I guess great minds not only think alike, they do so at the same time.  What struck me about the article was that they performed a similar exercise to that of my post, which looked at real, per-capita spending in California.  Reason compared actual revenues to a constant real, per-capita baseline totaled across all 50 states.  Here are the money graphs for all revenues and just taxes:

When times were relatively good, the money was flowing in.  So we went on a spending binge.  When we hit the recession in 2008, we discovered that this level of real, per-capita revenue was not permanent.  But by then, a bunch of people had become accustomed to getting their money from the states and it was hard to cut them off.

One budget watchdog estimates that the states are in a combined $112B budget hole for 2012.  As you can see, if we’d stuck to our 2002 baseline, we’d have accumulated plenty of surplus during the good times to plug this hole.  But asking a state to save money is like asking an addict to go without a fix.
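
Here’s a sketch of the Reason-style baseline comparison, with made-up numbers rather than the actual Census and CPI series: hold real per-capita revenue at its 2002 level, scale it by each year’s population and price level, and accumulate the difference from actual revenue as the surplus the states could have banked.

    def cumulative_surplus(actual, cpi, population, base_year):
        """Total revenue collected above a constant real per-capita baseline."""
        base_real_pc = actual[base_year] / population[base_year]
        surplus = 0.0
        for year in actual:
            baseline = base_real_pc * population[year] * (cpi[year] / cpi[base_year])
            surplus += actual[year] - baseline
        return surplus

    # Made-up illustrative series (revenue in billions, population in millions).
    years = [2002, 2003, 2004, 2005, 2006, 2007]
    actual = dict(zip(years, [1000, 1060, 1130, 1220, 1310, 1400]))
    cpi = dict(zip(years, [180, 184, 189, 195, 202, 207]))
    population = dict(zip(years, [288, 291, 293, 296, 298, 301]))
    print(f"cumulative surplus vs. 2002 baseline: "
          f"${cumulative_surplus(actual, cpi, population, 2002):.0f}B")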

Written by Kevin

March 15, 2011 at 1:21 pm

Posted in Economics, Government

Production Function Space and Hiring

Previously, I used my Production Function Space (PFS) hypothesis to illuminate the differences between startups, small businesses, and large companies. Now, I’d like to turn my attention to the implications of PFS on a firm’s demand for labor.

I don’t know about you, but a lot of hiring behavior baffles me.  I see companies that appear clearly and consistently understaffed or overstaffed, relative to demand for their offerings.  Then the hiring process itself is strange.  I’ve consistently seen companies burn all kinds of energy and incur all sorts of angst just to come up with a job description.  Shouldn’t it be obvious what isn’t getting done?  Why do they delegate apparently core aspects of production to contractors?  And despite decades of evidence, why do firms insist on using selection procedures like unstructured interviews that aren’t very effective?  Also, there’s the mystery of why incentive-based pay doesn’t work in general despite plenty of evidence that humans respond well to incentives in other circumstances.

Can the PFS hypothesis shed any light?  I think so.  But this hypothesis implies that a firm’s labor decisions are substantially more complicated than we thought, so don’t expect a nice “just-so” story.

I see three big implications if many employees benefit the firm primarily through searching PFS instead of producing goods and services:

  1. Uncertainty. The payoffs from searching PFS are uncertain.  In many cases, they’re really uncertain. You could end up with a curious but unmarketable adhesive or you could end up with the bestselling Post-it notes. You could end up with just another search engine or you could combine it with AdWords and end up with Google. A search over a given region is essentially a call option on implementing discovered production functions.
  2. Specificity. Economists refer to an asset as “specific” if its usefulness is limited primarily to a certain situation. The classic example is the railroad track leading up to the mouth of a coal mine. I think employees searching PFS are fairly specific. Each firm’s ability to exploit production functions is rather unique. Google and Microsoft can’t do exactly the same things. Moreover, each firm’s strategy for exploring PFS is different. So it takes time for an employee to “tune” himself to searching PFS for a particular firm. All other things being equal, an employee with 3 months on the job is not as effective at searching PFS as an employee with 3 years. And an employee that leaves firm A will not be as effective at searching PFS for firm B for a significant period of time. Think of specificity in searching PFS as a fancy way of justifying the concept of “corporate culture”.
  3. Network Effect. The number of people searching a given region seems to matter. I’m not at all sure if it’s a network effect, a threshold effect, or something else. But there seems to be a “critical mass” of people necessary to search a coherent region of PFS. You need a certain collection of skills to evaluate the economic feasibility of a production function. The larger a firm’s production footprint and the larger the search area, the greater the collective skill that is required.

Let’s start with hiring and firing decisions.  As you can see, firms face a really complex optimization problem when choosing how many people to employ and with what skills.  Suppose demand for a firm’s products suddenly declines.  What’s the optimal response?  Due to the network effect, firing x% of the workforce reduces the ability to search PFS by more than x%.  Due to specificity, this reduction in capability will last much longer than the time it takes to rehire the same number of people.  Thus, waiting to see if the drop in demand is temporary or permanent provides substantial option value.  Of course, a small firm doesn’t have much cushion, so it may have to lay people off anyway.
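
Here’s a toy decision model of that option value, with purely hypothetical numbers: firing searchers now saves their payroll for the period, but if demand recovers the firm pays a large rebuild cost (specificity means new hires take a long time to become effective); waiting spends the payroll but avoids the rebuild entirely.

    def fire_now_is_better(p_recover, payroll, rebuild_cost):
        """Fire only if the expected rebuild cost is less than the payroll saved."""
        return p_recover * rebuild_cost < payroll

    # Hypothetical numbers: rebuilding PFS search capability costs four
    # periods' worth of the payroll it would have taken to keep the team.
    payroll, rebuild_cost = 1.0, 4.0
    for p_recover in (0.1, 0.3, 0.5):
        decision = "fire now" if fire_now_is_better(p_recover, payroll, rebuild_cost) else "wait"
        print(f"P(demand recovers) = {p_recover:.0%}: {decision}")

Even a 30% chance that demand comes back justifies waiting when the rebuild is expensive, and the rebuild gets relatively more expensive as the network effect gets stronger, which is consistent with my prediction below that larger firms will delay layoffs longer.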

Thus I predict a sudden drop in demand will result in disproportionately low or significantly delayed layoffs, and the disproportion or delay will be positively correlated with firm size.  Moreover, firms will tend to concentrate layoffs among direct production workers to minimize the effect on searching PFS.  This tendency may explain why they delegate some apparently core functions.  Being able to flexibly adjust those direct costs preserves the ability to search PFS.  This hypothesis implies that the more volatile the demand for a firm’s products, the more it will outsource direct production.

Conversely, what should a firm do if demand suddenly increases?  Based on the PFS hypothesis, I have three predictions: the firm will (1) delay hiring to see if the demand increase is sustained, (2) “over hire” relative to the size of the demand increase, and (3) hire a disproportionate number of people outside of core production.  The reason is simple: diversification.  Due to uncertainty, the best way for a firm to ensure its long-term survival is to have a large portfolio of ongoing PFS searches.  Extra dollars should therefore be allocated to PFS-searching labor rather than capital or direct production labor.  However, because a firm knows that it will be reluctant to fire in the future, it will initially be conservative in deciding to hire.

It seems like these predictions should be testable.  I wish I had a research assistant to go through the available data and crank through some econometric analysis.  I’m thinking the next step is to work through the implications of PFS searching on employee behavior.  Unless anyone has other thoughts.

Written by Kevin

March 9, 2011 at 8:47 pm

Posted in Economics