Possible Insight

Archive for the ‘Economics’ Category

A Meta-Startup Manifesto

with one comment

As most of you know, RSCM is part of the group offering every TechStars company an additional $100K investment.  When we first started talking to the TechStars folks, my immediate reaction was “kindred spirits”.  This was also my reaction when we first spoke with Adeo Ressi of TheFunded and Founder Institute.  Recently, I’ve been trying to put my finger on why I think of us all as similar.  I think I’ve got it.

We’re all meta-startups: startups working to improve the process of launching startups.

Up until a few years ago, founding a tech startup usually followed the same haphazard process it had for decades.  Founders were pretty much on their own to thrash around and figure out how to test their innovations in the marketplace.  It’s how I did my first startup in 1993.  It’s how I did my last startup in 2004.  Same with Dave Lambert, my partner at RSCM, with his first startup in 1993 and his last in 2003.  It’s what we saw all our entrepreneurial friends do.

This “process” has a high bar for founders to clear and a low success rate, limiting innovation.  Now, the Internet increased the speed at which the haphazard process can execute.  So the situation has modestly improved over the last 15 years.  But what we want is fundamental improvement.  What we want is the equivalent of an Amazon, Google, or Facebook to change the rules of the startup game.

High-volume, high-speed incubators like Y Combinator (2005), TechStars (2006), and Founder Institute (2009) are a great leap forward in discovering a more systematic startup process.  They’re paving the way toward more startups, higher success rates, and dramatically more innovation.  Many of the improvements they pioneer should diffuse out into the startup community.  So they’ll enable startups in general, not just the ones in their programs, to launch more smoothly.

Our role in this revolution is eliminating the huge seed stage funding roadblock.  Now, with all the press about angels and superangels funding startups in Silicon Valley and New York, you may think getting seed funding is easy.  But I’ve run the numbers.  Seed funding is actually down 40% from its peak in 2005.  I’m pretty sure the cost of doing a startup hasn’t dropped 40%.  And I’m quite sure that the number of quality founding teams hasn’t dropped 40%.

Startups succeed by challenging conventional wisdom.  As a meta-startup, we are no different.  Luckily, our job is easier than most tech startups’.  They have to challenge the conventional wisdom about the future of technology.  There just isn’t much data to go on.  We have to challenge the conventional wisdom about funding startups.  In our case, there’s a lot of data that existing investors are just ignoring:

  • The seed stage generates very high returns.  There’s a good dataset of angel investments from the Kauffman Foundation, mostly from 1998 and later.  Depending on your selection criteria, returns are at least 30% and possibly over 40%.  Calculations here and here.
  • You can’t pick winners at the seed stage.  This is precisely the kind of prediction task where “gut feel” or “expert judgement” performs poorly.  Human predictions almost never beat even a simple checklist.  Review of evidence here.
  • Interviewing founding teams is a poor indicator of their ability.  There’s been an incredible amount of research trying to figure out how to predict who will be good at which job tasks.  Unstructured interviews are the worst predictor.  Highly structured interviews or matching of past experience to current requirements does better.  Review of evidence here.
  • Grand slams aren’t necessary to achieve high returns.  You don’t have to do anything special to make sure you get the “best” deals.  First, nobody has shown they can identify such startups before they release a product.  Second, even if all you get is base hits, you’ll still have returns of about 30%.  Calculation here.

The conclusion from this evidence is pretty straightforward.  But it implies a seed-stage funding approach very different from what we see today.  A classic opportunity for a startup (or meta-startup).

If the seed stage has high average returns but you can’t pick specific winners, basic financial theory says to use a portfolio approach.  You want enough investments to have a reasonably high confidence of achieving the average return.  I’ve done the calculation, and my conclusion is that you want to build a portfolio of hundreds of seed-stage investments.
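
To see why, here’s a toy Monte Carlo sketch.  The payoff distribution below (half of startups fail, 40% return about 2x, 10% return about 20x) is a made-up illustration, not our actual data; the point is only how the odds of landing near the average return improve with portfolio size.

```python
import random

def simulate_portfolio(n_investments, n_trials=10_000, seed=42):
    """Estimate the chance a seed portfolio's average payoff lands within
    20% of the expected payoff, under a toy return distribution."""
    rng = random.Random(seed)

    def one_payoff():
        r = rng.random()
        if r < 0.50:      # half of startups return nothing
            return 0.0
        elif r < 0.90:    # 40% are "base hits" returning ~2x
            return 2.0
        else:             # 10% are big winners driving most of the returns
            return 20.0

    expected = 0.50 * 0.0 + 0.40 * 2.0 + 0.10 * 20.0  # = 2.8x
    hits = 0
    for _ in range(n_trials):
        avg = sum(one_payoff() for _ in range(n_investments)) / n_investments
        if avg >= 0.8 * expected:  # within 20% of the expected multiple
            hits += 1
    return hits / n_trials
```

In this toy model, a 10-investment portfolio reaches 80% of the expected multiple far less often than a 300-investment one, which is exactly the case for volume.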

That may sound like an unreasonable number.  But if you don’t have to spend hours doing multiple interviews of every founding team, you can dramatically streamline the process.  In fact, we have a demo version of our software that can produce a pre-money valuation from an online application in a few seconds [Edit 3/16/2013: we took down the Web app in Jan 2013 because it wasn’t getting enough hits anymore to justify maintaining it.  We continue to use the algorithm internally as a spreadsheet app]. After that, it’s a matter of back-office automation.  Luckily, previous generations of startups have made such automation pretty easy.

Just think about how these conclusions should lead to a much better environment for seed stage startups.  Investors will actually decrease risk by making more investments.  The same process that allows them to efficiently build a large portfolio means they can give a much faster response to entrepreneurs.  Lower transaction costs mean startups will see more of the money investors put on the table.  As always, innovation is a win-win.  And in this case, we’re innovating in the process of funding innovation.  A meta-innovation, if you will.

Even better, the average person on the street wins too.  Because, as I’ve shown before here and here, increasing the rate of startup formation increases the rate of economic growth.  So if meta-startups can permanently increase this rate, the returns to society as a whole will quickly compound.


Written by Kevin

October 11, 2011 at 12:37 am

Posted in Economics, Innovation

Moneyball for Tech Startups: Kevin’s Remix

with 14 comments

Several people have pointed me to Dan Frommer’s post on Moneyball for Tech Startups, noting that “Moneyball” is actually a pretty good summary of our approach to seed-stage investing at RSCM.  Steve Bennet, one of our advisors and investors, went so far as to kindly make this point publicly on his blog.

Regular readers already know that I’ve done a fair bit of Moneyball-type analysis using the available evidence for technology startups (see here, here, here, here, here, and here).  But I thought I’d take this opportunity to make the analogy explicit.

I’d like to start by pointing out two specific elements of Moneyball, one that relates directly to technology startups and one that relates only indirectly:

  • Don’t trust your gut feel, directly related.  There’s a quote in the movie where Beane says, “Your gut makes mistakes and makes them all the time.”  This is as true of tech startups as it is of baseball prospects.  In fact, there’s been a lot of research on gut feel (known in academic circles as “expert clinical judgement”).  I gave a fairly detailed account of the research in this post, but here’s the summary.  Expert judgement never beats a statistical model built on a substantial data set.  It rarely even beats a simple checklist, and then only in cases where the expert sees thousands of examples and gets feedback on most of the outcomes.  Even when it comes to evaluating people, gut feel just doesn’t work.  Unstructured interviews are the worst predictor of job performance.
  • Use a “player” rating algorithm, indirectly related.  In Moneyball, Beane advocates basing personnel decisions on statistical analyses of player performance.  Of course, the typical baseball player has hundreds to thousands of plate appearances, each recorded in minute detail.  A typical tech startup founder has 0-3 plate appearances, recorded at only the highest level.  Moreover, with startups, the top 10% of the startups account for about 80% of all the returns.  I’m not a baseball stats guy, but I highly doubt the top 10% of players account for 80% of the offense in the Major Leagues.  So you’ve got much less data and much more variance with startups.  Any “player” rating system will therefore be much worse.

Despite the difficulty of constructing a founder rating algorithm, we can follow the general prescription of trying to find bargains.  Don’t invest in “pedigreed” founders, with startups in hot sectors, that have lots of “social proof”, located in the Bay Area.  Everyone wants to invest in those companies.  So, as we saw in Angel Gate, valuations in these deals go way up.  Instead, invest in a wide range of founders, in a wide range of sectors, before their startups have much social proof, across the entire US. Undoubtedly, these startups have a lower chance of succeeding. But the difference is more than made up for by lower valuations.  Therefore, achieving better returns is simply a matter of adequate diversification, as I’ve demonstrated before.

Now, to balance out the disadvantage in rating “players”, startup investors have an advantage over baseball managers.  The average return of pure seed stage angel deals is already plenty high, perhaps over 40% IRR in the US according to my calculation.  You don’t need to beat the market.  In fact, contrary to popular belief, you don’t even need to try and predict “homerun” startups.  I’ve shown you’d still crush top quartile VC returns even if you don’t get anything approaching a homerun.  Systematic base hits win the game.

But how do you pick seed stage startups?  Well, the good news from the research on gut feel is that experts are actually pretty good at identifying important variables and predicting whether they positively or negatively affect the outcome.  They just suck at combining lots of variables into an overall judgement.  So we went out and talked to angels and VCs.  Then, based on the most commonly cited desirable characteristics, we built a simple checklist model for how to value seed-stage startups.

We’ve made the software that implements our model publicly available so anybody can try it out [Edit 3/16/2013: we took down the Web app in Jan 2013 because it wasn’t getting enough hits anymore to justify maintaining it.  We continue to use the algorithm internally as a spreadsheet app].  We’ve calibrated it against a modest number of deals.  I’ll be the first to admit that this model is currently fairly crude.  But the great thing about an explicit model is that you can systematically measure results and refine it over time.  The even better thing about an explicit model is you can automate it, so you can construct a big enough portfolio.
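
To make the idea concrete, here’s a minimal sketch of how a checklist-style valuation model can work.  The factor names and dollar weights below are illustrative placeholders, not our calibrated values:

```python
# Hypothetical checklist: each factor answered "yes" adds a fixed bump to a
# base valuation. Factors and dollar weights are made up for illustration.
BASE_VALUATION = 250_000

CHECKLIST = {
    "prior_startup_experience": 150_000,
    "working_prototype":        200_000,
    "two_or_more_founders":     100_000,
    "domain_expertise":         100_000,
    "early_customer_interest":  150_000,
}

def premoney_valuation(answers):
    """Sum the base valuation plus a bump for each checklist item answered yes.

    `answers` maps factor name -> bool. Unknown keys are rejected so a typo
    doesn't silently score as "no".
    """
    unknown = set(answers) - set(CHECKLIST)
    if unknown:
        raise ValueError(f"unknown factors: {sorted(unknown)}")
    return BASE_VALUATION + sum(
        bump for factor, bump in CHECKLIST.items() if answers.get(factor, False)
    )
```

Because the model is explicit, refining it is just a matter of adjusting the factors and weights as outcome data accumulates.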

That’s how we’re doing Moneyball for tech startups.

Written by Kevin

September 27, 2011 at 10:56 pm

Why We're Smart

with 3 comments

Most people believe humans evolved intelligence because using tools was an advantage.  However, I believe tool use was secondary.  Group cooperation was the primary advantage conferred by intelligence.  You see, cooperation is fundamentally difficult.

This insight coalesced when I was reading about Mark Satterthwaite, an economist at Northwestern’s Kellogg School of Management.  He’s famous for two important impossibility theorems: (1) the Myerson-Satterthwaite Theorem and (2) the Gibbard-Satterthwaite Theorem.

Informally, (1) says that there is no bargaining mechanism that can guarantee a buyer and seller will trade if there are potential gains from trade, while (2) says that there is no voting mechanism for determining a single winner that can induce people to vote their true preferences.  In both cases, the reason for the impossibility is that people have incentives to hide their actual values to achieve a strategic advantage.

Add these to the Prisoner’s Dilemma and Arrow’s Impossibility Theorem on the list of fundamental barriers to cooperation (Holmstrom’s Theorem is another good one; it explains why you can’t get everyone in a firm to exert maximum effort).  By “fundamental”, I mean there is no general solution.  So the evolutionary process cannot just discover a mechanism that guarantees cooperation when it is efficient.  There will always be the opportunity for individuals to subvert the cooperative process to promote themselves, thus creating selection pressure against the cooperation mechanism.

(Note that there is a hack: make sure each individual has the same genes.  This is how multicellular and hive organisms get around the problem.  But the existence of cancer in the former case and the reduced genetic diversity in the latter case make them limited solutions.)

To achieve extensive cooperation in large groups, individuals need the ability to model the strategic situation, estimate the payoffs to various group members, and continuously assess what strategies other members may be playing. On top of that, there’s an arms race between deceiving and detecting deception.  It’s the old, “I know that you know that I know…” schtick.  The smarter you are, the further you can compute this series.

Bottom line: the impossibility theorems mean the only way to achieve cooperation is to have the machinery in place to make detailed case-by-case determinations.  We’ve talked about the Dunbar Number before: the maximum size of primate groups is determined in large part by a species’ average neocortical volume.   I claim you need to be smarter to process more complex strategic configurations and maintain models of more individuals’ goals.

If I’m right, there are two interesting implications.  First, politics will be with us forever.  No magical technology or philosophical enlightenment will eliminate it.  Second, if we ever encounter intelligent aliens, they’ll have politics too.  Nothing else about them may be recognizable, but they’ll have analogs of haggling over price and building political coalitions.

Written by Kevin

May 24, 2011 at 10:54 am

Posted in Economics

Ratcheting State and Local Taxes

with 2 comments

Yesterday, Mark Perry at Carpe Diem looked at state and local tax revenues.  Then Don Boudreaux at Cafe Hayek observed they should be adjusted for inflation.  Given my previous analysis (here, here, and here), I thought I’d chime in and adjust them for population*:

Over the entire period, real per-capita revenues rose 22%.  On the graph, you can clearly see the 2001-2002 and 2008-2009 recessions. If we go peak to peak in the last business cycle, 2001 to 2007, the growth was 13%.  If we go trough to trough, 2002-2009, the growth was 9%.
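
For anyone who wants to reproduce the adjustment, it’s mechanical: deflate each year’s revenue with the CPI, then divide by that year’s population.  A sketch with round placeholder inputs (not the actual Census and CPI-U figures from my spreadsheet):

```python
def real_per_capita(nominal_revenue, cpi, population, base_cpi):
    """Deflate nominal dollars into base-year dollars, then divide by population."""
    return nominal_revenue / population * (base_cpi / cpi)

# Placeholder inputs: $1,000B of revenue at CPI 172.2 with 282M people vs.
# $1,400B at CPI 214.5 with 307M people, both stated in the later year's dollars.
r_early = real_per_capita(1_000e9, 172.2, 282e6, base_cpi=214.5)
r_late = real_per_capita(1_400e9, 214.5, 307e6, base_cpi=214.5)
growth = r_late / r_early - 1  # real per-capita growth over the period
```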

Notice that even from the peak of the previous business cycle in 2001 to the trough of the current business cycle in 2009, real per-capita revenues were still up 6%.  So the recession is clearly not the proximal cause of the state and local budget problems.

State and local government agencies have more inflation-adjusted dollars per person today than they did at the peak of the boom in 2000.

Their revenues are consistently growing. The problem is that they can’t get their damned spending under control!

* State and local government tax revenue from this Census Bureau data.  2000-2009 population estimates from here.  1990-1999 from here.  Ironically, 2010 population data is not yet available so I couldn’t generate a per-capita datapoint for 2010.  I used the CPI-U series for inflation.  I did all the analysis in this spreadsheet file.

Written by Kevin

March 30, 2011 at 11:31 am

Posted in Economics, Government

My Wife Plays a Labor Economist

with 2 comments

My wife has an Art History degree from Princeton.  However, she often has excellent insights into economics. So I think either (a) one of the reasons she fell in love with me is that she’s a latent economics geek or (b) she loves me so much that she actually pays enough attention to my economics ramblings that some of it rubs off.

About a year ago, she came up with a solution to the “employee union problem”.  With the recent public employee union showdown in Wisconsin, I thought I should share it with you.  What’s particularly ironic is that she’s out in front of some pretty serious economists in thinking about this issue.  For example, check out this Wall Street Journal article by Robert Barro, a Harvard professor and author of my undergraduate macroeconomics textbook.

Most people think that unions are simply workers exercising their rights to band together.  Actually, this isn’t the case.  As Barro points out, unions are monopolies that are specially exempted from anti-trust law. Moreover, the government actually enforces their monopolies.

The key law here is the National Labor Relations Act (NLRA) as administered by the National Labor Relations Board (NLRB).  The NLRB protects the rights of employees to collectively “improve working terms and conditions” regardless of whether they are part of a union.  In addition, the NLRB can force all employees at a private firm to join a union, or at least pay union dues.  This process is called “certification”.  Once a union is certified, it has the presumptive right to negotiate on behalf of all employees at the firm, and collect dues from them, for one year.

After that, it is possible to “decertify” a union.  However, the union usually has enough time to consolidate its power and then use it to keep the rank and file in line.  When you have monopoly power, you have the opportunity to abuse it.  In fact, there’s a whole government agency devoted to investigating such abuses: the Office of Labor-Management Standards.  UnionFacts.com has helpfully collated the related crime statistics for 2001-2005.  Assuming that only a fraction of abusive behavior faces prosecution, these statistics are pretty sobering.

Of course, federal, state, and local government employees are exempt from the NLRA.  You might think this exemption is a good thing for limiting union power.  However, what it means in practice is that each level of government is free to offer special treatment to its employees’ unions without oversight from the NLRB.  As you can imagine, the politicians and public employee union leaders get nice and cozy.  The politicians give the unions a sweet deal and the unions give politicians their political support.  Everybody wins.  Except ordinary citizens.

Personally, my solution would have been to completely eliminate the government-enforced monopoly of unions.  However, I admit this blanket approach could swing power too far towards management in some industries.  My wife’s solution is better.  She says the unions can get monopolies, but only for a set period of time.  Say 3 or 5 years.

From an economics standpoint, this approach is really insightful.  First, it removes union leaders’ incentive to form a union just to accumulate power.  It will all go away pretty soon.  Second, it prevents originally well-meaning union leaders from getting corrupted over time.  Pretty soon they’re ordinary workers again.  Third, it does provide help to those workers who feel management is truly abusing them.  They can form a union and get better treatment.  When the union’s existence terminates, they can still bargain collectively, just not exclusively.  If management tries to screw them again, they will have the example of how to work together.  An economist would call this “moving to a better equilibrium”.

I’ll admit this solution isn’t perfect.  Some management abuses will slip through the cracks.  But I’m pretty confident they’ll be less extensive than the current union abuses.  It’s also probably better than my original thought of banning unions altogether.  And there’s some small chance my wife’s approach would actually be politically feasible.  Nice work, Jane!

Written by Kevin

March 24, 2011 at 3:15 pm

Posted in Economics

State Budget Redux

with 4 comments

You may recall my two posts on the California budget back in May 2009.  I just haven’t had the heart to dive back into this issue again, even though it’s obviously timely.  However, I thought it was worth mentioning this article in Reason Magazine highlighted in one of today’s Coyote Blog posts.

Funnily enough, the article was published in the May 2009 issue.  So I guess great minds not only think alike, they do so at the same time.  What struck me about the article was that they performed a similar exercise to that of my post, which looked at real, per-capita spending in California.  Reason compared actual revenues to a constant real, per-capita baseline totaled across all 50 states.  Here are the money graphs for all revenues and just taxes:

When times were relatively good, the money was flowing in.  So we went on a spending binge.  When we hit the recession in 2008, we discovered that this level of real, per-capita revenue was not permanent.  But by then, a bunch of people had become accustomed to getting their money from the states and it was hard to cut them off.

One budget watchdog estimates that the states are in a combined $112B budget hole for 2012.  As you can see, if we’d stuck to our 2002 baseline, we’d have accumulated plenty of surplus during the good times to plug this hole.  But asking a state to save money is like asking an addict to go without a fix.

Written by Kevin

March 15, 2011 at 1:21 pm

Posted in Economics, Government

Production Function Space and Hiring

with one comment

Previously, I used my Production Function Space (PFS) hypothesis to illuminate the differences between startups, small businesses, and large companies. Now, I’d like to turn my attention to the implications of PFS on a firm’s demand for labor.

I don’t know about you, but a lot of hiring behavior baffles me. I see companies that appear clearly and consistently understaffed or overstaffed, relative to demand for their offerings.  Then the hiring process itself is strange.  I’ve consistently seen companies burn all kinds of energy and incur all sorts of angst just to come up with a job description. Shouldn’t it be obvious what isn’t getting done? Why do they delegate apparently core aspects of production to contractors? And despite decades of evidence, why do firms insist on using selection procedures, like unstructured interviews, that aren’t very effective?  Also, there’s the mystery of why incentive-based pay doesn’t work in general, despite plenty of evidence that humans respond well to incentives in other circumstances.

Can the PFS hypothesis shed any light?  I think so.  But this hypothesis implies that a firm’s labor decisions are substantially more complicated than we thought, so don’t expect a nice “just-so” story.

I see three big implications if many employees benefit the firm primarily through searching PFS instead of producing goods and services:

  1. Uncertainty. The payoffs from searching PFS are uncertain.  In many cases, they’re really uncertain. You could end up with a curious but unmarketable adhesive or you could end up with the bestselling Post-it notes. You could end up with just another search engine or you could combine it with AdWords and end up with Google. A search over a given region is essentially a call option on implementing discovered production functions.
  2. Specificity. Economists refer to an asset as “specific” if its usefulness is limited primarily to a certain situation. The classic example is the railroad track leading up to the mouth of a coal mine. I think employees searching PFS are fairly specific. Each firm’s ability to exploit production functions is rather unique. Google and Microsoft can’t do exactly the same things. Moreover, each firm’s strategy for exploring PFS is different. So it takes time for an employee to “tune” himself to searching PFS for a particular firm. All other things being equal, an employee with 3 months on the job is not as effective at searching PFS as an employee with 3 years. And an employee that leaves firm A will not be as effective at searching PFS for firm B for a significant period of time. Think of specificity in searching PFS as a fancy way of justifying the concept of “corporate culture”.
  3. Network Effect. The number of people searching a given region seems to matter. I’m not at all sure if it’s a network effect, a threshold effect, or something else. But there seems to be a “critical mass” of people necessary to search a coherent region of PFS. You need a certain collection of skills to evaluate the economic feasibility of a production function. The larger a firm’s production footprint and the larger the search area, the greater the collective skill that is required.

Let’s start with hiring and firing decisions.  As you can see, firms face a really complex optimization problem when choosing how many people to employ and with what skills.  Suppose demand for a firm’s products suddenly declines.  What’s the optimal response?  Due to the network effect, firing x% of the workforce reduces the ability to search PFS by more than x%.  Due to specificity, this reduction in capability will last much longer than the time it takes to rehire the same number of people.  Thus, waiting to see if the drop in demand is temporary or permanent provides substantial option value.  Of course, a small firm doesn’t have much cushion, so it may have to lay off people anyway.

Thus I predict a sudden drop in demand will result in disproportionately low or significantly delayed layoffs, and the disproportion or delay will be positively correlated with firm size.  Moreover, firms will tend to concentrate layoffs among direct production workers to minimize the effect on searching PFS.  This tendency may explain why they delegate some apparently core functions.  Being able to flexibly adjust those direct costs preserves the ability to search PFS. This hypothesis implies that the more volatile the demand for a firm’s products, the more it will outsource direct production.

Conversely, what should a firm do if demand suddenly increases?  Based on the PFS hypothesis, I have three predictions: the firm will (1) delay hiring to see if the demand increase is sustained, (2) “over hire” relative to the size of the demand increase, and (3) hire a disproportionate number of people outside of core production.  The reason is simple: diversification.  Due to uncertainty, the best way for a firm to ensure its long-term survival is to have a large portfolio of ongoing PFS searches. Extra dollars should therefore be allocated to PFS searching labor rather than capital or direct production labor.  However, because a firm knows that it will be reluctant to fire in the future, it will initially be conservative in deciding to hire.

It seems like these predictions should be testable.  I wish I had a research assistant to go through the available data and crank through some econometric analysis.  I’m thinking the next step is to work through the implications of PFS searching on employee behavior.  Unless anyone has other thoughts.

Written by Kevin

March 9, 2011 at 8:47 pm

Posted in Economics

Production Function Space and Government Inefficiency

with one comment

This weekend, I was at a party chatting with two of my friends who have PhDs in Education.  They were explaining the sources of the inefficiency they see in the public school system.  That’s when I had yet another Production Function Space (PFS) epiphany.

Recall my previous discussion about how I think large companies devote much of their energy to searching PFS rather than implementing specific production functions.  However, in listening to how public education works, the contrast struck me.

Only a tiny sliver of effort goes to searching PFS in public education. Instead, the vast majority of overhead consists of two tasks. First, because there’s no market discipline, government organizations use a rigid system of rules in an effort to prevent waste.  So they have a bunch of people and processes dedicated to enforcing these rules: the bureaucracy. Of course, in an ever-changing landscape, these rules aren’t anywhere close to optimal for long.  But hey, they are the rules.  So a big chunk of effort that would go toward innovation in a commercial organization actually goes toward preventing innovation in a government organization, because there’s no systematic way to tell waste from innovation without a market.

Without market signals, there’s also no obvious way for employees to improve their position by improving organizational performance.  But don’t count out human ingenuity.  That’s the second substitute for searching PFS: improving job security and job satisfaction.  So you see workers seeking credentialing, tenure, guaranteed benefit plans and limitations on the number of hours spent in staff meetings.

From what I’ve read, this observation generalizes to other government agencies.  If searching PFS is a major source of continuous improvement in commercial organizations, its conspicuous absence in government organizations means they are even less efficient than we thought.  We don’t just lose out on the discipline of market signals today.  We lose out on the inspiration they provide for improvement tomorrow.

Written by Kevin

January 31, 2011 at 9:23 pm

Posted in Economics, Government

Startups, Employment, and Growth

leave a comment »

Regarding my posts on how startups help drive both employment and growth, a couple of people have pointed me to this essay in BusinessWeek by Andy Grove.  He says that:

  1. We have a, “…misplaced faith in the power of startups to create U.S. jobs.”
  2. “The scaling process is no longer happening in the U.S. And as long as that’s the case, plowing capital into young companies that build their factories elsewhere will continue to yield a bad return in terms of American jobs.”
  3. “Scaling isn’t easy. The investments required are much higher than in the invention phase.”

His basic argument is more sophisticated than the typical “America no longer makes things” rhetoric. It boils down to a network effect. If the US doesn’t manufacture technology-intensive products, we incur two penalties because we lack the corresponding network of skills: (a) in technology areas that exist today, we will not be able to innovate as effectively in the future and (b) we will lack the knowledge it takes to scale tomorrow’s technology areas altogether.  I don’t believe this argument for all the usual economic reasons, but let’s assume it’s true.  Would the US be in trouble?

Grove makes a lot of anecdotal observations and examines some manufacturing employment numbers.  However, I think it’s a mistake to generalize too much from any individual case or particular metric.  The economy is diverse enough to provide examples of almost any condition and we expect a priori to find specialization within sectors.  We should examine a variety of metrics to get the full picture of whether America is losing its ability to “scale up”.  Bear with me, Grove’s essay is four pages long so it will take me a while to fully address it.

Manufacturing Jobs

First, let’s look at the assertion that we aren’t creating jobs relevant to scaling up technologies.  The real question is: what kinds of jobs build these skills?  Does any old manufacturing job help?  I don’t think Foxconn-style, suicide-inducing manufacturing jobs are what we really want.  Maybe what we want is “good” or “high-skill” manufacturing jobs?  Well, take a look at this graph from Carpe Diem of real manufacturing output per worker:

That’s right, the average manufacturing worker in the US produces almost $280K worth of stuff per year.  More than 3x what his father or her mother produced 40 years ago. That should certainly support high quality jobs.  But is the quality of manufacturing jobs actually increasing?  Just check out these graphs of manufacturing employment by level of education from the Heritage Foundation:

This suggests to me that we’re replacing a bunch of low-skilled workers with fewer high-skilled workers.  But that’s good.  It means we’re creating jobs that require more knowledge (aka “human capital” that we can leverage).  Look at the 44% increase in manufacturing workers with advanced degrees!  Contrary to Grove, it seems like we’re accumulating a lot of high-powered know-how about how to scale up.

Manufacturing Capability

Now you might be thinking that we could still be losing ground if the productivity of our high-end manufacturing jobs isn’t enough to make up for job losses on the low-end.  In Grove’s terms, our critical mass of scaling-up ability might be eroding. Not the case.  Just consider the statistics on industrial production from the Federal Reserve.  This table shows that overall industrial production in the US has increased 68% in real terms over the 25 years from 1986 to 2010, which is 2.1% per year.

"Aha!" you say, "But Grove is talking about the recent history of technology-related products." Then how about semiconductors and related equipment from 2001-2010? That's the sector Grove came from. There, we have a 334% increase in output, or 16% per year. Conveniently, the Fed also has a category covering all of hi-tech manufacturing output (HITEK2): a 163% increase over the same period, or 10% per year. Now, the US economy only grew at 1.5% per year from 2001 to 2010 (actually 4Q00 to 3Q10 because the 4Q10 GDP numbers aren't out). So in the last ten years, our ability to manufacture high-technology products increased at almost 7x the rate of our overall economy. We're actually getting much better at scaling up new technologies!
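For readers who want to check the arithmetic, here's a quick sketch converting the Fed's cumulative increases into compound annual rates. The percentage figures are the ones cited above; the function is just the standard compound-annual-growth-rate formula.

```python
def annual_rate(total_increase_pct, years):
    """Convert a cumulative percentage increase into a compound annual rate (%)."""
    return ((1 + total_increase_pct / 100) ** (1 / years) - 1) * 100

# Overall industrial production, 1986-2010: 68% over 25 years
print(round(annual_rate(68, 25), 1))    # ≈ 2.1

# Semiconductors and related equipment, 2001-2010: 334% over 10 years
print(round(annual_rate(334, 10), 1))   # ≈ 15.8

# All hi-tech manufacturing (HITEK2), 2001-2010: 163% over 10 years
print(round(annual_rate(163, 10), 1))   # ≈ 10.2
```

Those annual rates match the 2.1%, 16%, and 10% figures in the text, and 10.2 / 1.5 gives the "almost 7x" multiple over overall GDP growth.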

I can think of one other potential objection: lack of investment in future production. There is a remote possibility that, despite the terrific productivity and output growth in US high-tech manufacturing today, we won't be able to maintain this strength in the future. I think the best measure of expected future capability is foreign direct investment (FDI): dollars that companies in one country invest directly in business ventures of another country. They do NOT include investments in financial instruments. Because these dollars come from outside the country, they represent an objective assessment of which countries offer good opportunities. So let's compare net inflows of FDI for China and the US using World Bank data from 2009. For China, we have $78B. For the US, we have $135B. This isn't terribly surprising given the relative sizes of the economies, but there certainly doesn't seem to be any market wisdom that the US is going to lose lots of important capabilities in the future.

Will China outcompete the US in some hi-tech industries?  Absolutely.  But that’s just what we expect from the theory of comparative advantage.  They will specialize in the areas where they have advantages and we will specialize in other areas where we have advantages.  Both economies will benefit from this specialization.  An economist would be very surprised indeed if Grove couldn’t point to certain industries where China is “winning”.  However, the data clearly shows that China is not poised to dominate hi-tech manufacturing across the board.

Startup Job Creation

So we’ve addressed Grove’s concerns about the US losing its ability to “scale up”. Let’s move on to the issue of startups. Remember, he said that startups, “…will continue to yield a bad return in terms of American jobs.”  As I posted before, startups create a net of 3M jobs per year. Without startups, job growth would be negative. If Grove cares about jobs, he should care about startups. The data is clear.

The one plausible argument I’ve seen against this compelling data is that most of these jobs evaporate. It is true that many startups fail. The question is, what happens on average? Well, the Kauffman Foundation has recently done a study on that too, using Census Bureau Business Dynamic Statistics data.  They make a key point about what happens as a cohort of startups matures:

The upper line represents the number of jobs on average at all startups, relative to their year of birth. The way to interpret the graph is that a lot of startups fail, but the ones that succeed grow enough to support about 2/3 of the initial job creation over the long term; 2/3 appears to be the asymptote of the top line. The number of firms continues declining, but job growth at survivors makes up the difference starting after about 15 years. For example, a bunch of startups founded in the late 90s imploded, but Google keeps growing and hiring. The same happened in the mid-00s with Facebook. Bottom line: of the 3M jobs created by startups each year, about 2M are "permanent" in some sense. The other 1M get shifted to startups in later years. So startups are in fact a reliable source of employment.
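The arithmetic behind that bottom line is simple; here's a minimal sketch using the figures cited above (roughly 3M jobs per annual startup cohort, with roughly 2/3 surviving in the long run):

```python
# Rough arithmetic behind the "permanent jobs" claim, using the
# ~3M jobs per cohort and ~2/3 long-run survival figures from the
# Kauffman/Census data discussed above.
jobs_created_per_cohort = 3_000_000
long_run_survival_fraction = 2 / 3

permanent_jobs = jobs_created_per_cohort * long_run_survival_fraction
shifted_jobs = jobs_created_per_cohort - permanent_jobs

print(f"permanent: ~{permanent_jobs / 1e6:.0f}M per cohort")  # ~2M
print(f"shifted to later cohorts: ~{shifted_jobs / 1e6:.0f}M")  # ~1M
```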

I’d like to make one last point, not about employment per se, but about capturing the economic gains from startups. If we generalize Grove’s point, we might be worried that the US develops innovations, but other countries capture the economic gains. To dispel this concern, we need only refer back to my post on the economic gains attributable to startups, using data across states in the US.  Recall that this study looked at differing rates of startup formation in states to conclude that a 5% increase in new firm births increases the GDP growth rate by 0.5 percentage points.

I would argue that it’s much more likely that a state next door could “siphon off” innovation gains from its neighbor than a distant country could siphon off innovation gains from the US: (a) the logistics make transactions more convenient, (b) there are no trade barriers between states, and (c) workers in New Mexico are a much closer substitute for workers in Texas than workers in China. But the study clearly shows that states are getting a good economic return from startups formed within their boundaries.  Now, I’m certain there are positive “spillover” effects to neighboring states.  But the states where the startups are located get a tremendous benefit even with the ease of trade among states.


I think it’s pretty clear that, even if you accept Grove’s logic, there’s no sign that the US is losing its ability to scale up.  However, I would be remiss if I didn’t point out my disagreement with the logic. I’ve seen no evidence of a need to be near manufacturing to be able to innovate.  In fact, every day I see evidence against it.

I live in Palo Alto. As far as I know, we don't actually manufacture any technology products in significant quantities any more. Yet lots of people who live and work here make a great living focusing on technology innovation. As Don Boudreaux is fond of pointing out on his blog and in letters to the mainstream media, there is no fundamental difference between the trade between Palo Alto and San Jose and the trade between Palo Alto and Shanghai. In fact, I know lots of people in the technology industry who work on innovations here in the Bay Area and then fly to Singapore, Taipei, or Shanghai to work with people at the factories cranking out units.

Certainly, I acknowledge that a government can affect the ability of its citizens to compete in the global economy. But the best way to support its citizens is to reduce the barriers to creating new businesses and then enable those businesses to access markets, whether those markets are down the freeway or across the world. One of the worst things a government could do is fight a trade war, which is what Grove advocates in the third-to-last paragraph of his essay.

The ingenuity of American engineers and entrepreneurs is doing just fine, as my data shows.  We don’t need an industrial policy.

Written by Kevin

January 23, 2011 at 11:39 am

Startups, Small Businesses, and Large Companies

with 14 comments

One of the reasons I like my Production Function Space (PFS) hypothesis is that it clarifies a lot of issues that have puzzled me for a while. For example, as part of my work on seed-stage startup investing with RSCM, I have struggled with two questions: (1) what's the difference between a startup and a small business, and (2) why do some large companies have initiatives like venture groups and startup incubators?

To answer these questions, I’m going to have to get a little mathematical.  Don’t worry.  No derivatives or integrals, but I need to introduce some notation to keep the story straight.  First, let’s define production footprint and search area.

A production footprint is the surface that encloses a set of points in PFS.*  If A is a production function for basketballs, B is a production function for baseballs, and C is a production function for golfballs, then pf(ABC) looks like this in two dimensions:

We can also talk about the production footprint of a company like Google.  pf(Google) is the surface that encloses all the production functions that Google currently uses.

A search area is the surface defined by extending the production footprint outward by the search radius.** If we wanted to see whether making basketballs, baseballs, and golfballs would enable a company to make footballs using production function F, we might compute sa(r, ABC). It looks like this in two dimensions:

We can also talk about search areas for a company.  sa(r,Google) is the surface enclosing all the points in PFS within r units of Google’s current production footprint.  If we write just sa(Google), we mean search area using Google’s actual search radius.
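To make the notation concrete, here's a toy sketch that models production functions as points in the plane. This is a drastic simplification of PFS, and all the coordinates are invented, but it captures the two definitions: a footprint is just a set of points, and membership in sa(r, footprint) is a distance check.

```python
import math

# Toy model: production functions as points in the plane.
# A production footprint is a set of points; a point lies in the
# search area sa(r, footprint) if it is within distance r of some
# point of the footprint. All coordinates below are invented.

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def in_search_area(point, footprint, r):
    """Is `point` within search radius r of the footprint?"""
    return any(distance(point, q) <= r for q in footprint)

# A, B, C: basketballs, baseballs, golfballs (illustrative positions)
footprint_ABC = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
F = (1.0, 1.5)  # footballs, somewhere nearby in PFS

print(in_search_area(F, footprint_ABC, r=1.0))  # False: F is 1.5 units away
print(in_search_area(F, footprint_ABC, r=2.0))  # True
```

In this toy version, growing the search radius from 1.0 to 2.0 is what brings footballs within reach of the basketball/baseball/golfball maker.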

With these two basic concepts in place, I can now easily answer questions (1) and (2). SU stands for startup, SB stands for small business, and LC stands for large company. So what's the difference between a startup and a small business? Well, when they are founded, a startup doesn't have any production footprint at all and a small business does. To a first order, pf(SU)=0 and pf(SB)>0. A startup doesn't know with any precision how it's going to make stuff. A small business does. Whether it's a dry cleaner, law office, or liquor distributor, the founders know pretty precisely what they're going to do and how they're going to do it. However, startups have a much larger search radius than small businesses: r(SU)>r(SB). Assuming that we can define a search area on the set of production functions a startup could currently implement (but hasn't yet), I contend that sa(SU)>sa(SB) as well.

This realization was an epiphany for me. Even though the average person thinks of startups and small businesses as similar, they are actually polar opposites. They may both have a few employees working in a small office, but one is broadly focused on exploring a huge region of PFS while the other is narrowly focused on implementing production functions within a tiny region of PFS. I also realized that you need to evaluate two things in a startup: (a) its ability to search PFS and (b) its ability to implement a production function once it locates a promising region of PFS. And the magnitude of impact of (a) is at least as big as that of (b) in the very early stages.

Now on to the issue of large companies. The problem here is search costs. Remember that, in three-dimensional space, volume increases as the cube of distance. In PFS, volume increases as distance raised to an exponent equal to the dimensionality of PFS. I posit that PFS is high-dimensional, so this volume grows very quickly indeed. Now try to visualize the production footprint and search area of a large company. A large company has a lot of production functions in play, so pf(LC) is large. But sa(r,LC) expands from this already large volume according to that large exponent.

In three dimensions, imagine that pf(LC) is like a hot air balloon. Extending just 10 feet out from the hot air balloon's surface encompasses a huge volume of additional air. But in high-dimensional PFS, the effect is… well… exponentially greater. So a large company has a problem. On the one hand, increasing its search radius is enormously costly. On the other hand, we know that Black Swan shifts in the fabric of PFS will occasionally render a huge volume dramatically less profitable, probably killing companies limited to that volume. So it's only a matter of time before sa(r,LC) is hit by one of these shifts, for any value of r.
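To put rough numbers on the balloon intuition, assume (purely for illustration) that a footprint is a ball and that its volume scales as radius raised to the dimensionality. Then extending the search radius by just 10% multiplies the searched volume dramatically as the dimension grows:

```python
# For a ball of radius R in n dimensions, volume scales as R**n.
# Extending the radius by delta therefore multiplies the volume by
# ((R + delta) / R) ** n. The dimensions below are illustrative only.

def volume_multiplier(R, delta, n):
    return ((R + delta) / R) ** n

# A 10% extension of the search radius:
for n in (3, 10, 100):
    print(n, round(volume_multiplier(1.0, 0.1, n), 1))
# In 3 dimensions the volume grows ~1.3x; in 10 dimensions ~2.6x;
# in 100 dimensions roughly 13,800x.
```

So if PFS really is high-dimensional, even a modest increase in search radius forces a company to cover a vastly larger volume, which is exactly why the search cost blows up.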

Obviously, there will be some equilibrium value of r where the cost balances the risk, but that also implies there’s an equilibrium value for the expected time-to-live of large companies.  Yikes!  This explains why the average life expectancy of a Fortune 500 company is measured in decades. Another epiphany.

Internal venture groups and incubators represent a hack that attempts to circumvent this cold, hard calculation. The problem is that it’s difficult to explore a region of PFS without actually trying to implement a production function in that region.  Sure, paper analysis and small experiments buy you some visibility, but not very much.  Also, in most cases, you don’t get very good information from other firms on their explorations of PFS, unless you observe a massive success or failure, at which point it’s too late to do much about your position.  That’s why search costs are so darn high.  Enter corporate venture groups and startup incubators.

These initiatives require some capital investment by the large company.  But this investment is then multiplied by the monetary capital of other investors as well as the human capital of the entrepreneurs.  With careful management, a large company can get almost as much insight into explorations of PFS by these startups as it would from its own direct efforts.  Moreover, because startups are willing to explore PFS farther from existing production footprints, the large company actually gets better search coverage of PFS.

This framework answers a key question about such initiatives. To what extent should corporate venture groups and startup incubators restrict the startups they back to those with a "strategic fit"? If you believe in my PFS hypothesis, the answer is close to zero or perhaps even less than zero (look for startups in areas outside your company's areas of expertise). Otherwise, they're biasing their searches toward the region of PFS that's close to the region they're already searching, which doesn't increase their ability to survive that much. As far as I know, Google is the only company that adopts this approach. I think they're right, and now I think I understand why. Epiphany number three.

* There are some mathematical details here that need to be fleshed out to define this surface.  But I don’t think they add to the discussion and my topology is really rusty.

** More mathematical details omitted.  The only important one is that the extension outward doesn’t have to be by a constant radius.  It can be a function of the point on the pf. In that case, r is a global scaling parameter for the function.

Written by Kevin

January 13, 2011 at 11:41 am

Posted in Economics