There exists a humongous book edited by Hall and Rosenberg that shares its title with this post, running to 1256 pages. I've recently read it, and here I'll provide a summary of it, including mostly things that I considered interesting, shocking or illustrative. There is no other way to cover a work that large in a blog post. I'm also making two more concessions to ease my work: concentrating on empirical results (no discussion of modeling here), and avoiding results that are mixed, unless it seemed a priori clear to me that the evidence pointed one way or another. Take into account that what seems interesting to me will differ from what seems interesting to you.

Note that the Handbook summarises the state of the art in the field of the economics of innovation as of 2010, so the last five years are not covered: for example, Boldrin and Levine's work against intellectual property. Also, the Handbook is full of 'we don't know', 'there is contradictory evidence', 'we need more research', etc. In general, we don't know as much about innovation as we'd like, and this derives from the time lags involved, the definitional problems of what counts as innovation and what doesn't, and so on.

The Handbook comes in two volumes divided into chapters; here I will summarise it chapter by chapter.

Chapter 1: Introduction to the Handbook (Hall and Rosenberg)

An explanation of what will come in the chapters that follow.

Chapter 2: The contribution of economic history to the study of innovation and technical change: 1750-1914 (Mokyr)

The progress of technology has been explained by both internalist and externalist theories. Internalists see an autonomous logic, an evolutionary process in which one advance leads to another, in which contingency plays a major role, in which the past largely determines the future. Externalists think of technological change as determined by economic needs, by necessity stimulating invention, by induced innovation being guided by factor prices and resource endowments. In the same camp, but with a different emphasis, are social constructionists who regard technology as the result of political processes and cultural transformations, in which certain ideas triumph in the marketplace because they serve certain special class or group interests and powerful lobbies. The history of technology since the Industrial Revolution provides support as well as problems for all of those approaches. A more inclusive approach would separate the process into interactive components. For instance, there is no question that economic needs serve as a “focusing device” in Rosenberg’s (1976) famous simile, but the popular notion that “necessity is the mother of invention” manages to be simultaneously a platitude and a falsehood. Societies tend to be innovative and creative for reasons that have little to do with pressing economic need; our own society is a case in point. Modern Western society is by and large wealthy enough to not feel any pressing “need,” yet it is innovative and creative beyond the wildest dreams of the innovators of the eighteenth century. There was no “necessity” involved in the invention of iPods or Botox. The social agenda of technology is often set by market forces or national needs, but there is nothing ever to guarantee that this agenda will be successful and to make sure what it will lead to. [...] This essay will not be an exercise in technological determinism. Technology does not “drive” History.
Improvements in technological capabilities will only improve economic performance if and when they are accompanied by complementary changes in institutions, governance, and ideology. It is never enough to have clever ideas to liberate an economy from an equilibrium of poverty. But it is equally true that unless technology is changing, alternative sources of growth such as capital accumulation or improved allocations of resources (due, for instance, to improved institutions such as law and order and a more commerce-friendly environment) will ineluctably run into diminishing returns. Only a sustained increase in useful knowledge will in the end allow the economy to grow, and to keep growing without limit as far as the eye can see. I have explored the relation between useful knowledge and technology in Mokyr (2002). The basic proposition of this essay will be that the technological component of economic modernity was created in the century before the Industrial Revolution, not through the growth of foreign trade, the emergence of an urban bourgeoisie, or the growing use of coal (as has often been argued) but by a set of intellectual and ideological changes that profoundly altered the way Europeans interacted with their physical environment. By that I mean both how they related to and studied the physical world in which they lived and the ways they manipulated that knowledge to improve the production of goods and services.

Chapter 3: Technical change and industrial dynamics as evolutionary processes (Dosi and Nelson)

An important part of paradigmatic knowledge takes the form of design concepts which characterize in general the configuration of the particular artifacts or processes that are operative at any time. Shared general design concepts are an important reason why there often is strong similarity among the range of particular products manufactured at any time—as the large passenger aircraft produced by different aircraft companies, the different television sets available at the electronics stores, etc. Indeed, the establishment of a given technological paradigm is quite often linked with the emergence of some dominant design. Together, the foregoing features of technological paradigms both provide a focus for efforts to advance a technology and channel them along distinct technological trajectories, with advances (made by many different agents) proceeding over significant periods of time in certain relatively invariant directions, in the space of techno-economic characteristics of artifacts and production processes. As paradigms embody the identification of the needs and technical requirements of the users, trajectories may be understood in terms of the progressive refinement and improvement in the supply responses to such potential demand requirements. A growing number of examples of technological trajectories include aircraft, helicopters, various kinds of agricultural equipment, automobiles, semiconductors, and a few other technologies (Dosi, 1984; Gordon and Munson, 1981; Grupp, 1992; Sahal, 1981, 1985; Saviotti, 1996; Saviotti and Trickett, 1992).
So, for example, technological advances in aircraft technologies have followed two quite distinct trajectories (one civilian and one military) characterized by log-linear improvements in the tradeoffs between horsepower, gross takeoff weight, cruise speed, wing load, and cruise range (Frenken and Leydesdorff, 2000; Frenken et al., 1999; Giuri et al., 2007; Sahal, 1985; and more specifically on aircraft engines Bonaccorsi et al., 2005). Analogously, in microelectronics, technical advances are accurately represented by an exponential trajectory of improvement in the relationship between density of electronic chips, speed of computation, and cost per bit of information (see Dosi, 1984, but the trajectory has persisted since then). First, as Schumpeter, and Marx before him, argued long ago, competition in industries where innovation is central has little to do with the idea that such process generates results that are economically “efficient” in the standard static sense of that concept in economics. What is driving the process is the striving by some firms to get an economic advantage over their competitors. As discussed in Section 3, both the cross section and the time profiles of modern industrial sectors inevitably show considerable variation across firms in measures of economic efficiency and in profitability: in short, industries are characterized by considerable and persistent “inefficiency” in the standard allocative sense of that term. Second, in industries marked by continuing innovation, competitive conditions may be fragile. This applies particularly to the cases whereby firms who have been successful innovators are able to hold off imitation or other effective competitive responses, and their profitability enables them to stretch their advantage further. Third, this notwithstanding, while the evolutionary notion of “competition” differs from competition of the economic textbooks in fundamental respects, it does serve a related function.
To the extent competition is preserved, a significant share of the benefits of technological progress go to the customers/users of the technology. And on the supply side, over industrial evolution, competition tends to roughly keep prices moving in line with costs (including R&D costs). This is the bird's-eye interpretation of innovation-driven competition and the ensuing industrial evolution. How well does it hold against the evidence? (i) Innovative capabilities appear to be highly asymmetric, with a rather small number of firms in each sector responsible for a good deal of innovations even among highly developed countries. (ii) Somewhat similar considerations apply to the adoption of innovations, in the form of new production inputs, machinery, etc. (see Section 3.9 on “diffusion”) revealing asymmetric capabilities of learning and “creative adaptation.” (iii) Differential degrees of innovativeness are generally persistent over time and often reveal a small “core” of systematic innovators (cf. Bottazzi et al., 2001a; Cefis, 2003b; Cefis and Orsenigo, 2001; Malerba and Orsenigo, 1996a among others). (iv) Relatedly, while the arrivals of major innovations are rare events, they are not independently distributed across firms. Rather, recent evidence suggests that they tend to arrive in firm-specific “packets” of different sizes. [...] in several studies, firms that are identified as innovators tend to be more profitable than other firms: see Geroski et al. (1993), Cefis (2003a), Cefis and Ciccarelli (2005), Roberts (1999), and Dosi (2007) among others. Production efficiency also shows a systematic positive influence upon profitability (cf. Bottazzi et al., 2009; Dosi, 2007). [...]
Finally, the same evidence appears to run against the conjecture, put forward in the 1960s and 1970s by the “managerial” theories of the firm, of a tradeoff between profitability and growth, with “managerialized” firms trying to maximize growth subject to a minimum profit constraint [...] significant industry-specific differences emerge from the data. The finding that variables like capital intensity, advertising intensity, R&D intensity—along with structural measures like concentration and performance measures like profitability—differ widely across sectors is at the very origin of the birth of industrial economics as a discipline. Longitudinal microdata add further evidence.

Chapter 4: Fifty years of empirical studies of innovative activity and performance (Cohen)

A literal reading of Schumpeter’s (1942) classic discussion suggests that he was primarily impressed by the qualitative differences between the innovative activities of small, entrepreneurial enterprises and those of large, modern corporations with formal R&D laboratories. Nonetheless, the empirical literature has interpreted Schumpeter’s claim for a large firm advantage in innovation as a proposition that innovative activity increases more than proportionately with firm size. With some exceptions (e.g., Gellman Research Associates, 1976; Nelson et al., 1967; Pavitt et al., 1987; Scherer, 1965a) the Schumpeterian hypothesis about firm size has been tested by regressing some measure of innovative activity (input or output) on a measure of size. [...] Notwithstanding the various challenges in evaluating the R&D–firm size relationship, the consensus is that either in the majority of industries, or when controlling for industry effects in more aggregate samples, R&D rises proportionately with firm size among R&D performers (e.g., Baldwin and Scott, 1987; Scherer and Ross, 1990). Although the source of this relationship had not been determined, the finding was widely interpreted through the mid-1990s as indicating that, contrary to Schumpeter, large size offered no advantage in the conduct of R&D. The intuition behind this interpretation is that, if the relationship is proportional, then, holding industry sales constant, the same amount of R&D will be conducted whether an industry is composed of large firms or a greater number of smaller firms. Fisher and Temin (1973) argued, however, that to the extent that Schumpeter’s hypothesis can be given a clear formulation, it must refer to a relationship between innovative output and firm size, not to a relationship between R&D (an innovative input) and firm size, which is the one most commonly tested in the literature.
They demonstrated, among other things, that an elasticity of R&D with respect to size in excess of one does not necessarily imply an elasticity of innovative output with respect to size greater than one. Both before and subsequent to Fisher and Temin’s critique, however, several studies exploiting measures of innovative output reinforced the earlier consensus of no advantage to size. Scherer (1965a), Gellman Research Associates (1976, 1982), The Futures Group (1984), Pavitt et al. (1987) and Acs and Audretsch (1988, 1990, 1991b) have shown that, in either panel or cross-sectional data spanning a broad range of firm sizes, smaller firms tend to account for a disproportionately large share of innovations relative to their size, and that R&D productivity (e.g., innovations per unit of R&D) tends to decline with firm size. Bound et al.’s (1984) analysis of patenting activity similarly found that patents produced per R&D dollar for smaller firms (i.e., less than 1 million dollars in sales) is considerably higher than that for larger firms. Acs and Audretsch (1990, 1991b) provide evidence that this pattern varies, however, across industries. Also, Pavitt et al.’s (1987) findings, based on the SPRU data set that counts the successful introduction of “significant” new products or processes, suggest that the relationship may be somewhat U-shaped, with the very largest firms displaying relatively high R&D productivity, defined as simply the number of innovations per R&D dollar. Also drawing upon the SPRU data set, Geroski (1994, Chapter 2) highlights the clear negative correlation between firm size and R&D productivity. Using information on financial service innovations drawn from the Wall Street Journal over the period, 1990–2002, Lerner (2006) also observes that smaller firms account for a disproportionate share. Thus, the predominant pattern is that R&D productivity appears to decline with size. [...]
Thus, the robust empirical patterns relating R&D and innovation to firm size are that R&D increases monotonically—and typically proportionately—with firm size among R&D performers within industries, the number of innovations tends to increase less than proportionately with firm size, and the share of R&D effort dedicated to more incremental and process innovation tends to increase with firm size. These patterns, however, raise a number of questions. Why should there be such a close positive, monotonic—no less proportional—relationship between R&D and firm size to begin with? Also, how can this relationship be reconciled with the apparent decline in R&D productivity with firm size, no less with the apparent association between incremental and process innovation with firm size? The apparent decline in R&D productivity with firm size has been explained in a number of ways. For example, some have argued that smaller firms, especially new ventures, are more capable of innovating than larger firms (e.g., Acs and Audretsch, 1990, 1991b; Cooper, 1964), or, similarly, are more capable of spawning more significant or distinctive innovations than larger incumbents (e.g., Baumol, 2002; Henderson, 1993). Bound et al. (1984) and Griliches (1990) suggest two other explanations. One is selection bias; only the most successful small firm innovators tend to be included in the samples that have been examined, perhaps because greater firm size increases the likelihood of survival, and thus surviving smaller firms likely manifest some compensating advantage such as greater innovative capability. Griliches (1990) also suggests the possibility that measurement error may account for the seemingly greater R&D productivity of small firms due to the systematic underestimation of formal R&D for small firms (cf. Kleinknecht, 1987; Schmookler, 1959; Sirilli, 1987).
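The "R&D rises proportionately with firm size" finding above is, in regression terms, a claim that the elasticity of R&D with respect to size is about one. As a minimal sketch (with wholly made-up numbers, not data from the Handbook), the standard log-log specification looks like this:

```python
import numpy as np

# Hypothetical illustration of the size regression discussed above:
# log(R&D) = a + b * log(size). An estimated elasticity b near 1 means
# R&D rises proportionately with firm size.
rng = np.random.default_rng(0)
size = rng.uniform(10, 1000, 200)   # firm sales, arbitrary units (synthetic)
rd = 0.05 * size                    # constructed so that the true elasticity is exactly 1

# OLS in logs; np.polyfit returns (slope, intercept) for degree 1
b, a = np.polyfit(np.log(size), np.log(rd), 1)
print(round(b, 3))  # recovers an elasticity of ~1.0
```

An elasticity above one would support the usual (input-based) reading of the Schumpeterian hypothesis; Fisher and Temin's point is that even then nothing follows about the elasticity of innovative *output* with respect to size.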
These explanations for a decline in R&D productivity with size leave open, however, the question of why we should also observe a strong positive relationship between firm size and R&D. In Schumpeter’s discussion of the effects of market power on innovation, there are two distinct themes. First, Schumpeter recognized that firms require the expectation of some form of transient market power to have the incentive to invest in R&D. This is, of course, the principle underlying patent law; it associates the incentive to invent with the expectation of ex post (i.e., post-innovation) market power tied to the innovations originating from R&D. Second, Schumpeter argued that the possession of ex ante market power, linked to an ex ante oligopolistic (or monopolistic) market structure, also favored innovation. An oligopolistic market structure, for example, made rival behavior more stable and predictable, he claimed, and thereby reduced the uncertainty associated with excessive rivalry that tended to undermine the incentive to invent. Implicitly assuming that capital markets are imperfect, he also suggested that the profits derived from the possession of ex ante market power provided firms with the internal financial resources necessary to invest in innovative activity. Finally, he also appeared to argue that ex ante market power would tend to confer ex post market power [...] The majority of studies that examine the relationship between market concentration and R&D have found a positive relationship. First among many were Horowitz (1962), Hamberg (1964), Scherer (1967a), and Mansfield (1968). A few have found evidence that concentration has a negative effect on R&D (e.g., Bozeman and Link, 1983; Mukhopadhyay, 1985; Williamson, 1965). 
Rather than examine the relationship between market structure and R&D—an input, Geroski and Pomroy (1990) and Geroski (1990) consider the relationship between market structure and innovation, the output of innovative activity, which they measure with counts of commercially significant innovations drawn from the SPRU database (cf. Geroski, 1994, Chapter 2; Robson and Townsend, 1984). Geroski (1990) also departs from the prior literature by employing a number of measures of market structure, including market concentration, but also measures of entry, exit, import penetration, and the number of small firms. Geroski (1990) finds a positive relationship between competition and innovation, a qualitative reversal of the majority of prior findings that he attributes to his inclusion of a control for technological opportunity. [...] A finding that long ago captured the imagination of numerous theorists was that of Scherer (1967a), who found evidence of a nonlinear, “inverted-U” relationship between R&D intensity and concentration. Using data from the Census of Population, Scherer found that R&D employment as a share of total employment increased with industry concentration up to a four-firm concentration ratio between 50% and 55%, and declined with concentration thereafter. This inverted-U result, in the context of a simple regression of R&D intensity against market concentration and a quadratic term, has been replicated by Scott (1984) and Levin et al. (1985) using the FTC Line of Business data. Using a 21-year panel, spanning 1973–1994, for 17 two-digit industries, Aghion et al. (2005) observe a similar “inverted-U” between industry-level market power, measured with an averaged Lerner index—an arguably better measure of the intensity of competition than a concentration ratio—and industry innovation, measured with the average number of citation-weighted patents.
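Scherer's specification is just R&D intensity regressed on concentration and its square; the "inverted U" is the negative quadratic coefficient, and the peak is the turning point of the fitted parabola. A minimal sketch with synthetic numbers (chosen, by assumption, so the peak lands inside Scherer's 50–55% range):

```python
import numpy as np

# Hypothetical illustration of the inverted-U regression described above:
# rd_intensity = c0 + c1 * CR4 + c2 * CR4^2, with c2 < 0.
cr4 = np.linspace(10, 90, 81)                     # four-firm concentration ratio, in %
rd_intensity = 5.0 + 0.21 * cr4 - 0.002 * cr4**2  # synthetic, peak at 0.21/0.004 = 52.5

# np.polyfit(deg=2) returns coefficients highest degree first
c2, c1, c0 = np.polyfit(cr4, rd_intensity, 2)
peak = -c1 / (2 * c2)      # turning point of the fitted parabola
print(round(peak, 1))      # ~52.5, inside the 50-55% range Scherer reported
```

Aghion et al.'s version swaps the concentration ratio for an averaged Lerner index on the x-axis and citation-weighted patents on the y-axis, but the quadratic logic is the same.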
There have been two central challenges facing empirical studies on the relationship between competition and innovation, as suggested by our discussion of entry and innovation. First, it is likely that competition and innovation are simultaneously determined, either with causality running in both directions, or with both innovation and competition codetermined by other exogenous factors. Second, there is a question of the sensitivity of the relationship to industry-level factors, and what that sensitivity might imply about the nature and importance of the influence of competition on innovation. Phillips (1966) was among the first to propose that causality might run from innovation to market structure, rather than the reverse. Although Schumpeter envisioned that the market power accruing from successful innovation would be transitory, eroding as competitors entered the field, Phillips argued that, to the extent that “success breeds success,” concentrated industrial structure would tend to emerge as a consequence of past innovation. Phillips’ (1971) monograph on the manufacture of civilian aircraft illustrates how market structure can evolve as a consequence of innovation, as well as how it can affect the conditions for subsequent innovation. [...]

Chapter 5: The Economics of Science (Stephan)

As economists, we owe a substantial debt to Robert Merton for establishing the importance of priority in scientific discovery. In a series of articles and essays begun in the late 1950s, Merton (1957, 1961, 1968, 1969) argues convincingly that the goal of scientists is to establish priority of discovery by being first to communicate an advance in knowledge and that the rewards to priority are the recognition awarded by the scientific community for being first. Merton further argues that the interest in priority and the intellectual property rights awarded to the scientist who is first are not a new phenomenon but have been an overriding characteristic of science for at least three centuries. Science is sometimes described as a “winner-take-all” contest, meaning that there are no rewards for being second or third. One characteristic of science that contributes to such a reward structure is the difficulty that occurs in monitoring scientific effort (Dasgupta, 1989; Dasgupta and David, 1987). This class of problem is not unique to science. Lazear and Rosen (1981) have investigated incentive-compatible compensation schemes where monitoring is costly. Another factor that contributes to such a reward structure is the low social value of the contributions made by the runner-up. “There is no value added when the same discovery is made a second, third, or fourth time.” (Dasgupta and Maskin, 1987, p. 583). But it is somewhat extreme to view science as a winner-take-all contest. Even those who describe scientific contests in such ways note that it is a somewhat inaccurate description, given that replication and verification have social value and are common in science. It is also inaccurate to the extent that it suggests that only a handful of contests exist. True, some contests are world class, such as identification of the Higgs particle or the development of high-temperature superconductors.
But there are many other contests that have multiple components, and the number of such contests appears to be on the increase. By way of example, while for many years it was thought that there would be “one” cure for cancer, it is now realized that cancer takes multiple forms and that multiple approaches are needed to find a cure. There will not be one winner; there will be multiple winners. A more realistic metaphor is to see science as following a tournament arrangement, much like tournaments in golf or tennis, where the losers, too, get some rewards. This keeps individuals in the game, raises their skills, and enhances their chances of winning a future tournament. A similar type of competition exists in science. Dr X is passed over for the Lasker Prize, but her work is sufficiently distinguished that she is invited to give an important lecture, consistently receives support for her research and is awarded an honorary degree from her undergraduate institution. Financial remuneration is another component of the reward structure of science. While scientists place great importance on priority and are highly motivated by an interest in puzzle-solving, money clearly plays a role in the reward structure. Rosovsky (1990) recounts how, upon becoming dean of the Faculty of Arts and Sciences at Harvard, he asked one of Harvard’s most eminent scientists the source of his scientific inspiration. The reply (which “came without the slightest hesitation”) was “money and flattery.” (p. 242). The other reward often attributed to science is the satisfaction derived from solving the puzzle. Hagstrom (1965, p. 16), an early sociologist of science, noted this when he said “Research is in many ways a kind of game, a puzzle-solving operation in which the solution of the puzzle is its own reward.” The philosopher of science Hull (1988, p.
305) describes scientists as being innately curious and suggests that science is “play behavior carried to adulthood.” Feynman (1999), explaining why he did not have anything to do with the Nobel Prize (which he won in 1965), said: “I don’t see that it makes any point that someone in the Swedish Academy decides that this work is noble enough to receive a prize— I’ve already got the prize. The prize is the pleasure of finding the thing out, the kick in the discovery . . .” This suggests that time spent in discovery is an argument in the utility function of scientists. Pollak and Wachter (1975) demonstrate that maximization problems of this type are generally intractable, because implicit prices depend upon the preferences of the producer. While this provides a rationale for excluding the process of discovery from models of scientific behavior, the failure of economists to acknowledge the puzzle as a motivating force makes economic models of scientific behavior lack credibility. Recent work by Sauermann and Cohen (2007) seeks to address this in part for scientists and engineers working in industry. Although it is popular to characterize scientists as having instant insight, studies suggest that science takes time. Investigators often portray productive scientists—and eminent scientists especially—as strongly motivated, with the “‘stamina’ or the capacity to work hard and persist in the pursuit of long-range goals.” (Fox, 1983). 14 Several dimensions of cognitive resources are associated with discovery. One aspect is ability. It is generally believed that a high level of intelligence is required to do science, and several studies have documented that, as a group, scientists have above average IQs. 15 There is also a general consensus that certain people are particularly good at doing science and that a handful are superb. 16 Another dimension of cognitive inputs is the knowledge base the scientist(s) working on a project possesses. 
This knowledge is used not only to solve a problem but to choose the problem and the sequence in which the problem is addressed. The importance knowledge plays in discovery leads to several observations. First, it intensifies the race, because the public nature of knowledge means that multiple investigators have access to the knowledge needed to solve a problem. Second, knowledge can either be embodied in the scientist(s) working on the research or disembodied, but available in the literature (or from others). Different types of research rely more heavily on one than the other. The nuclear physicist Leo Szilard, who left physics to work in biology, once told the biologist Sydney Brenner that he could never have a comfortable bath after he left physics. “When he was a physicist he could lie in the bath and think for hours, but in biology he was always having to get up to look up another fact” (Wolpert and Richards, 1988, p. 107). Fourth, there is anecdotal evidence that “too much” knowledge can be a bad thing in discovery in the sense that it “encumbers” the researcher. There is the suggestion, for example, that exceptional research may at times be done by the young because the young “know” less than their elders and hence are less encumbered in their choice of problems and the way they approach a question. 17 Finally, the cognitive resources brought to bear on a problem can be enhanced by assembling a research team or, at a minimum, engaging in a collaborative arrangement with investigators in other labs and countries. Research is rarely done in isolation, especially research of an experimental rather than theoretical bent (Fox, 1991). Scientists work in labs. How these labs are staffed varies across countries. For example, in Europe research labs are often staffed by permanent staff scientists, although increasingly these positions are held by temporary employees (Stephan, 2008).
In the United States, while positions such as staff scientists and research associates exist, the majority of scientists working in the lab are doctoral students and postdocs. Stephan et al.’s study (2007b) of 415 labs affiliated with a nanotechnology center finds that the average lab has 12 technical staff, excluding the principal investigator (PI). Fifty percent of these are graduate students; 16% are postdocs, and 10% are undergrads. 18 Such patterns mean that labs in the United States are disproportionately staffed by young, temporary workers. The reliance on such a system, with its underlying pyramid scheme, at a time when there has been minimal expansion in faculty positions, has resulted in an increasing supply of scientists trained in the United States (as well as those trained abroad, who come to the United States to take a postdoctoral position) who are less and less likely to find permanent PI positions in the university. The importance of equipment is one reason to stress the nonlinearity of scientific discovery. Scientific research can lead to technological advance, but technology very much affects advances in science. The history of science is the history of how important resources and equipment are to discovery—a theme in the research of Rosenberg and Mokyr, among others. Equipment for research is costly. 27 At the extreme are costs associated with building and running an accelerator. The 27-km-long LHC, which is scheduled to come online early in 2008 at CERN, will cost $8 billion; the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory in the United States cost $1.41 billion (Science, vol. 312, 5 May 2006; p. 675). A microscope used for nanotechnology research can cost $750,000 (http://www.unm.edu/market/cgi-bin/archives/000132.html). A sequencer, such as Applied Biosystems’ 3730 model, costs approximately $300,000. Next-generation sequencers cost between $400,000 and $500,000. Mice are not free.
An inbred off-the-shelf mouse costs between $17 and $60; mutant strains begin around $40 and can go to $500 plus. Prices are for mice supplied from live breeding colonies. Many strains, however, are only available from cryopreserved material. Such mice cost considerably more: in 2009 the cost to recover any strain from cryopreservation (either from cryopreserved sperm or embryos) was $1900. For this, investigators receive at least two breeding pairs of animals in order to establish their own breeding colony. 28 Custom-made mice can cost much more. Johns Hopkins University, for example, estimates that it costs $3500 to engineer a mouse to order. With the large number of mice in use (over 13,000 are already published), the cost of mouse upkeep becomes a significant factor in doing research. US universities, for example, charged from $0.05 to $0.10 per day per mouse (mouse per diem) in 2000 (Malakoff, 2000). This can rapidly add up. Irving Weissman of Stanford University reports that before Stanford changed its cage rates he was paying between $800,000 and $1 million a year to keep the 10,000 to 15,000 mice in his lab. 29 Costs for keeping immune deficient mice are far greater (on the order of $0.65 per day), given their susceptibility to disease. 30 The importance of equipment and research materials in scientific research means that exchange, which has a long tradition in science (Hagstrom, 1965), plays a considerable role in fostering research and in creating incentives for scientists to behave in certain ways. For example, scientists routinely share information and access to research materials and expertise in exchange for citations and coauthorship. 31 But, as research materials have become increasingly important, exchange has arguably taken on more importance. Walsh et al.
(2005, 2007) examine the practice of sharing materials (such as cell lines, reagents, and antigens) among academic biomedical researchers and find that 75% of the academic respondents in their sample made at least one request for material in a 2-year period, with an average of seven requests for materials to other academics and two requests for materials from an industrial lab (Walsh et al., 2005). Serendipity also plays a role in scientific discovery; it is not that uncommon for researchers to find different, sometimes greater, riches than the ones they are seeking. Although serendipity is sometimes referred to as the “happy accident,” this is a bit of a misnomer. True, Pasteur “discovered” bacteria while trying to solve problems that were confronting the French wine industry. But his discovery, although unexpected, was hardly “an accident.” Distinguishing between the unexpected and the “accidental” is especially difficult when research involves exploration of the unknown. The analogy to discovery makes the point: Columbus did not find what he was looking for—but the discovery of the New World was hardly an accident. Einstein once said that “a person who has not made his great contribution to science before the age of thirty will never do so” (Brodetsky, 1942, p. 299). There is a great deal of anecdotal evidence (Stephan and Levin, 1992) that he was right, that science is the domain of the young. However, investigating the veracity of the statement statistically is fraught with problems: measurement issues abound, as does the confounding of aging effects with cohort effects and the limited availability of appropriate databases. We examine these issues, prefacing them with a discussion of theoretical reasons that one might expect age to be related to productivity. The presence of a gender differential in publishing outcomes is well established.
Fox (2005), for example, finds that women published or had accepted for publication 8.9 papers in the 3-year period beginning in the early 1990s, compared to 11.4 for men. The difference is due to disparities at both extremes of the productivity distribution. Women are almost twice as likely as men to publish zero or one paper during the period (18.8% compared to 10.5%); men are almost twice as likely as women to publish 20 or more papers during the period (15.8% for men compared to 8.4% for women). Gender differentials have also declined over time. Xie and Shauman (1998) find the female-to-male ratio to have been about 0.60 in the late 1960s, and to have increased to 0.82 by 1993. The question as to why research output is related to gender has long interested those studying scientific productivity. In economic terms, the question is often examined in terms of supply versus demand characteristics. Stated in these terms, the question is whether women publish less than men because of specific attributes, such as family characteristics, amount of time spent doing research, etc., or whether women publish less than men because they have fewer opportunities to be productive, due to hiring and funding decisions as well as possible network outcomes. This dichotomy is misleading, of course, to the degree that interactions exist between the two. Differential placement opportunities, for example, may lead women to allocate their time to activities that are rewarded (such as teaching) but diminish publishing activity. One of the most in-depth studies on the subject in recent years is that by the sociologists Xie and Shauman (1998, 2003).
After carefully analyzing four datasets that span a 24-year period, they conclude that “women scientists publish fewer papers than men because women are less likely than men to have personal characteristics, structural positions, and facilitating resources that are conducive to publication.” In other words, both demand and supply play a role. From an economist’s point of view, an exceedingly appealing attribute of a reward system that is rooted in priority is that it offers nonmarket-based incentives for the production of the public good “knowledge” (Stephan, 2004). Merton noted the functionality of the reward system in the inaugural lecture of the George Sarton Leerstoel that he delivered on October 28, 1986 at the University of Ghent. In the lecture, published 2 years later in Isis, Merton spoke of the public nature of science, writing that “. . . a fund of knowledge is not diminished through exceedingly intensive use by members of the scientific collectivity—indeed, it is presumably augmented. . .” (Merton, 1988, p. 620). Merton not only recognized this but stood the public–private distinction on its head, proposing that the reward structure of priority in science functioned to make a public good private. “I propose the seeming paradox that in science, private property is established by having its substance freely given to others who might want to make use of it.” He continues (1988, p. 620) by saying that “only when scientists have published their work and made it generally accessible, preferably in the public print of articles, monographs, and books that enter the archives, does it become legitimately established as more or less securely theirs” or, as he says elsewhere, “one’s private property is established by giving its substance away” (1988, p. 620). Dasgupta and David (1987, p.
531) express the private–public paradox exceedingly well: “Priority creates a privately owned asset—a form of intellectual property—from the very act of relinquishing exclusive possession of the new knowledge.” Arrow (1987, p. 687), commenting on their work, articulates the cleverness of such a system. The recognition that priority is a form of property rights leads to the question of whether there are “too many” contestants in certain scientific contests. Would the social good be served by having fewer? In a speech delivered at the conference commemorating the 400th anniversary of the birth of Francis Bacon, Merton detailed the prevalence of what he called “multiples” in scientific discovery. And Merton was not the first to note their presence. In what Merton calls a “play within a play,” he gives 20 “lists” of multiples that were compiled between 1828 and 1922. Moreover, Merton is quick to point out that the absence of a multiple does not mean that a multiple was not in the making at the time the discovery was made public. This is a classic case of censored data, where scooped scientists abandon their research after a winner is recognized. Indeed, Merton argues that “far from being odd or curious or remarkable, the pattern of independent multiple discoveries in science is in principle the dominant pattern rather than a subsidiary one” (Merton, 1961, p. 356). The presence of multiple discoveries is due in part to the free access scientists have to knowledge and in part to the fact that uncertainty associated with who will make a discovery leads scientists to choose research portfolios that are correlated (Dasgupta and Maskin, 1987). The knowledge that multiples exist keeps scientists from shirking and moves the enterprise of science at a rapid pace. Such observations invite the question of whether science moves at too rapid a pace and whether certain contests attract too many entrants. Dasgupta and David (1987, p.
540) argue that the priority system can create excesses, just as the patent system does, provided the “reward to the discoverer . . . is tempting enough.” They make no effort to define the boundary of temptation, but one wonders whether the general knowledge that certain contests deserve the Nobel Prize does not attract an excessive number of scientists. In the mid-1950s, approximately one-third of basic research performed in the United States was done by industry; in 2004, the last year for which data are available, the proportion had declined to approximately 16% (National Science Board, 2006, Table 4-8, vol. 2). Other factors contributing to the decline, in addition to the closure or refocusing of certain large industrial labs, include an increased propensity to “outsource” research to the university sector, as well as possible changes in definition and classification. At the same time that industry’s share of basic research declined, its share of applied research rose from 56.3% to 61.8% (National Science Board, 2006, Table 4-12, vol. 2); the combined share of basic and applied research went from 50.1% to 40.3%. Our knowledge of scientists working in industry comes largely from a number of excellent case studies. These include Gambardella’s (1995) study of the pharmaceutical industry, Hounshell and Smith’s (1988) study of Du Pont, Willard Mueller’s (1962) discussion of Du Pont, Nelson’s (1962) study of the development of the transistor, and Sobel’s (1986) study of RCA. For a discussion of specific industries, see Mowery and Rosenberg (1998). Science, perhaps more than any other enterprise, is international in scope. We see this in terms of location of training, location of work and, as we have noted earlier, in coauthorship patterns. In terms of training, a very large percentage of degrees, especially in Europe and the United States, are awarded to foreign students.
While the percentage has fluctuated over time in response to such things as changes in available funding and visa policies, overall the percentage of PhDs awarded to international students in the United States has grown considerably during the past 30 years. By 2006, 36.0% of PhDs awarded in science and 58.6% of those awarded in engineering went to candidates on a temporary visa, while 6.0% of science PhDs and 4.5% of engineering PhDs were awarded to noncitizens on permanent visas (National Science Foundation, 2006, Table 3). A somewhat similar situation exists in Europe, especially in the United Kingdom, where in 2003 over 50% of engineering PhDs and approximately 45% of math and computer science degrees were awarded to foreign students (National Science Board, 2004, Figure 2-40).

Chapter 6: University Research and Public-Private Interaction (Foray and Lissoni)

Universities also contribute directly to innovation, by providing industry and services with technical solutions or devices, or by getting involved in applied research activities. Such a role is in accordance with a view of the university as a “permeable institution” (Lécuyer, 1998), which allocates effort and attention to problem-solving activities that have immediate relevance for business firms (most often national or local ones). Such a view is not at all new, as it dates back at least to the nineteenth century, sometimes coexisting and sometimes competing with the emphasis on basic research and teaching (Rothblatt and Wittrock, 1993). More recently, however, governments and large sections of public opinion have placed more emphasis on demands that universities fulfil this type of task by commercializing their own academic inventions. This requires them to get involved in the creation and management of intellectual property rights (IPRs), and even in entrepreneurial activities such as the foundation of new firms (Martin, 2003; Slaughter and Leslie, 1997; Yusuf and Nabeshima, 2007). A major sign of this change is the wave of legislation aimed at encouraging universities to take out patents and license them under profitable conditions, started in the United States with the Bayh-Dole Act of 1980 and continued elsewhere with many imitations of this Act and, in several European countries, with the abolition of the “professor’s privilege” typical of the German academic model. This change of perspective has gone hand in hand with the increasing attention paid by industry to universities’ research, as part of a general strategy to move away from a “vertical” model of R&D toward a “network strategy” of innovation, based upon the exploitation of external knowledge resources. Since the 1980s, industrial funding of academic science in OECD countries has grown considerably both in real terms and as a percentage of GDP.
Public funding has also grown in real terms, but it has not kept up with the growth of either GDP or industrial funding, so that in 2003 the share of government-funded academic research was down to 72%, from over 80% in 1981. In the meantime, the share of industrial funding had doubled, from 3% to 6%, and universities’ self-financing share had gone up from 13% to 16%, thanks largely to the expansion of new entrepreneurial activities both in the field of education and in technology commercialization (Vincent-Lancrin, 2006). At the present time, the most research-oriented of modern universities look quite like the “multiversity” envisaged by Clark Kerr, the prescient president of the University of California in the 1960s: a “knowledge factory . . . to which policy wonks turn for expertise, industrialists turn for research, government agencies turn for funding proposals, and donors turn for leveraging their philanthropy into the greatest impact” (Wagner, 2007); and, one may wish to add, to which university administrators turn for self-financing. Starting in the 1990s, most econometric attempts to measure the extent of knowledge spillovers from academic research have been coupled with exercises aimed at measuring the geographical scope of those spillovers. Jaffe (1989) is generally acknowledged as the pioneering paper in this field. Aiming to assess the “real effects” of academic research, Jaffe estimated a “modified knowledge production function” in which the dependent variable is the number of private corporate patents produced in a given technology by each state of the United States, and the explanatory variables include, among others, the research expenditures of universities and a measure of within-state geographic coincidence of corporate R&D labs and university research.
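To make the logic of this kind of specification concrete, here is a stylized, single-regressor sketch of a log-log “knowledge production function,” in which the elasticity of patenting with respect to university R&D is the OLS slope. The data are simulated with a true elasticity of 0.5; nothing here reproduces Jaffe’s actual variables or estimates.

```python
# Stylized knowledge production function: log(patents) on log(university R&D).
# Simulated data; the true elasticity (0.5) is an assumption for illustration.
import random

random.seed(42)
TRUE_ELASTICITY = 0.5

# Simulated state-level observations of log university R&D and log patent counts.
log_univ_rd = [random.uniform(2.0, 6.0) for _ in range(50)]
log_patents = [1.0 + TRUE_ELASTICITY * x + random.gauss(0, 0.1) for x in log_univ_rd]

def ols_slope(xs, ys):
    """Slope of the least-squares line of ys on xs (= cov(x, y) / var(x))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# In a log-log regression the slope is directly the elasticity.
elasticity = ols_slope(log_univ_rd, log_patents)
print(f"estimated elasticity of patents w.r.t. university R&D: {elasticity:.2f}")
```

The actual studies add controls (private R&D, state size) and use patent counts rather than logs, but the elasticity interpretation of the coefficient is the same idea.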
Jaffe’s results show that the number of corporate patents is positively affected by the R&D performed by local universities, after controlling for both private R&D inputs and state size, as measured by population. Many authors have replicated Jaffe’s exercise. Using innovation counts from the Small Business Innovation Data Base (SBIDB), Audretsch and Feldman (1996) and Feldman and Audretsch (1999) show that, even after controlling for the geographic concentration of production, innovative activities have a greater propensity to cluster spatially in those industries in which industry R&D, university research, and skilled labor are important inputs. Acs et al. (1994) also find that the elasticity of innovation output with respect to university R&D is greater for small firms than for large ones. This is interpreted as evidence that small firms, while lacking internal knowledge inputs, have a comparative advantage in exploiting spillovers from university laboratories. Along similar lines, Anselin et al. (1997) refine Jaffe’s original methodology to take into account cross-border effects, and show that university research has a positive impact on regional rates of innovation. The literature on science parks is of limited help when it comes to getting a better understanding of the university–industry relationship. By and large, in fact, it is a chronicle of repeated failures, and of a sequence of evaluation attempts aimed at elusive targets. As pointed out by Link and Scott (2003, 2007), there is no generally accepted definition of science park, a term which is prevalent in Europe but not as popular in the United States (where “research park” or “university research park” is more common) or in Asia (where “technology park” is more widespread).
In general, science parks (and their synonyms) are intended to be real estate developments aimed at hosting high-tech or science-based firms, which provide for some technology transfer activities and involve a local university, some level of government, and possibly the private sector. Link and Scott (2003) trace their proliferation in the United States back to the 1980s, a decade in which UK local governments also set up many of them, soon to be followed by many other European and Asian countries (Bakouros et al., 2002; Lee and Yang, 2001; Phillimore, 1999; Vedovello, 1997). Founders of new science parks inevitably invoked the Stanford Science Park as the model to imitate, but proved to have little knowledge of the unique circumstances that surrounded its creation and made its replication very hard to achieve. Early criticism of the UK experience, especially of the idea that science parks could be useful tools both to revitalize deindustrialized areas and support local universities, did not deter subsequent imitation (MacDonald, 1987; Massey et al., 1992).

Chapter 7: Property Rights and Invention (Rockett)

Article 1, Section 8 of the US Constitution is quite explicit that the objective of the intellectual property rights system is the progress of “Science and the Useful Arts.” If one were to take this at its word, one would not necessarily want to use social welfare—or even economic growth—as the standard of optimality in a model of the intellectual property rights system. Instead, one might wish to use the rate of innovation or, less directly, the rate of research and development spending: the more the better. The interpretation one takes is important to the conclusion one reaches about the optimality of any intellectual property protection system. For example, Horowitz and Lai (1996) compare the optimal design of patents when the objective is to maximize the rate of innovation to the optimal design when the objective is to maximize discounted consumers’ surplus. A system that aims to maximize consumers’ surplus places more value on frequent innovation than a system that maximizes the rate of innovation, since intermediate steps generate surplus gains as each quality step enters consumption. Despite the ambiguity in how one should interpret the goal of establishing a system of intellectual property rights in the first place, however, the bulk of the economics literature has taken social welfare to be the appropriate objective maximized by policymakers. While some argue that the patent system is not necessary, others argue that even if it is necessary it is not very effective. Survey evidence of Cohen et al. (2000) indicates that managers do not view patents as very effective at generating direct rewards to innovation. While certain sectors, such as biotechnology and pharmaceuticals, appear to derive great benefit from patents, various first-mover advantages (such as learning by doing) are credited with generating greater rewards to innovation than intellectual property rights.
If firms rely on other “frictions” such as barriers to entry to generate profits from invention, patents may at best be redundant. On the other hand, Farrell (1995) argues that the “honeymoon” period of patent protection may allow these other potentially long-lasting first-mover advantages to get going. Hence, patents may contribute more to profits than is acknowledged in the survey results. Still, if frictions and not patents are generating the rewards, then perhaps we should consider weakening or eliminating patents, since the patent system is costly to maintain and may generate few benefits.

Chapter 8: Stylized Facts in the Geography of Innovation (Feldman and Kogler)

An extensive literature addresses the topic of geography of innovation and describes the importance of proximity and location to innovative activity. This has been termed the “new economic geography,” an area of research that is less than 20 years old (Clark et al., 2000). This field is now developed sufficiently that the discussion can be organized around certain stylized and commonly accepted facts:

- Innovation is spatially concentrated.
- Geography provides a platform to organize economic activity.
- All places are not equal: urbanization, localization, and diversity.
- Knowledge spillovers are geographically localized.
- Knowledge spillovers are nuanced, subtle, pervasive, and not easily amenable to measurement.
- Local universities are necessary but not sufficient for innovation.
- Innovation benefits from local buzz and global pipelines.
- Places are defined over time by an evolutionary process.

Chapter 9: Open User Innovation (von Hippel)

Skipped. Here is the conclusion:

I summarize this overview chapter by again saying that users’ ability to innovate is improving radically and rapidly as a result of the steadily improving quality of computer software and hardware, improved access to easy-to-use tools and components for innovation, and access to a steadily richer innovation commons. Today, user firms and even individual hobbyists have access to sophisticated programming tools for software and sophisticated CAD design tools for hardware and electronics. These information-based tools can be run on a personal computer, and they are rapidly coming down in price. As a consequence, innovation by users will continue to grow even if the degree of heterogeneity of need and willingness to invest in obtaining a precisely right product remains constant (Baldwin and von Hippel, 2009). Equivalents of the innovation resources described above have long been available within corporations to a few. Senior designers at firms have long been supplied with engineers and designers under their direct control and with the resources needed to quickly construct and test prototype designs. The same is true in other fields, including automotive design and clothing design: just think of the staffs of engineers and model makers supplied so that top auto designers can quickly realize and test their designs. But if, as we have seen, the information needed to innovate in important ways is widely distributed, the traditional pattern of concentrating innovation-support resources on a few individuals is hugely inefficient. High-cost resources for innovation support cannot efficiently be allocated to “the right people with the right information”: it is very difficult to know who these people may be before they develop an innovation that turns out to have general value.
When the cost of high-quality resources for design and prototyping becomes very low (the trend we have described), these resources can be diffused very widely, and the allocation problem diminishes in significance. The net result is a pattern in which development of product and service innovations is increasingly shifting to users—a pattern that will involve significant changes for both users and producers.

Chapter 10: Learning by doing (Thompson)

This chapter has reviewed the theoretical and empirical literature on learning by doing (LBD). Many of the distinctive theoretical implications of LBD have been derived under the assumption that the cost–quantity relationships observed in numerous empirical studies are largely the result of passive learning, and some further require that passive learning is unbounded. The empirical literature raises doubts about both assumptions. When observed cost–quantity relationships indicate sustained productivity growth, factors other than passive learning are generally at work. When passive learning is the dominant factor, productivity growth is invariably bounded. Thus, empirically relevant theories incorporating LBD are hybrid models in which passive learning coexists with other sources of growth. But in such models, many of the distinctive implications of passive learning become unimportant. Moreover, passive learning is often an inessential component of long-run growth; indeed, too much learning can lead to stagnation.
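The bounded-versus-unbounded distinction can be made concrete with the classic power-law learning curve. The parameterization below is mine, purely for illustration (b = 0.32 corresponds to a roughly 80% learning curve, i.e., unit cost falls about 20% with each doubling of cumulative output); it is not a model from the chapter.

```python
# Illustrative cost-quantity curves: an unbounded power-law learning curve
# versus a bounded variant in which passive learning exhausts itself at a
# cost floor. Parameters (c1, b, floor) are assumptions for illustration.
def unbounded_cost(n: int, c1: float = 100.0, b: float = 0.32) -> float:
    """Classic power law: unit cost of the n-th unit, falling without bound."""
    return c1 * n ** (-b)

def bounded_cost(n: int, c1: float = 100.0, floor: float = 40.0, b: float = 0.32) -> float:
    """Same learning rate, but gains decay toward a minimum achievable cost."""
    return floor + (c1 - floor) * n ** (-b)

# The two curves look similar early on but diverge at large cumulative output:
# only the unbounded one sustains productivity growth indefinitely.
for n in (1, 10, 100, 10_000):
    print(n, round(unbounded_cost(n), 1), round(bounded_cost(n), 1))
```

The chapter’s point is that observed data resembling the first curve usually reflect factors beyond passive learning; passive learning alone behaves like the second.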

Chapter 11: Innovative Conduct in Computing and Internet Markets (Greenstein)

Similarly, the household experience with computing has also undergone significant change: it began from virtually nothing in the 1970s. Later, a 1995 survey found that less than 20% of households had a personal computer (PC) (NTIA, 1995). In sharp contrast, an October 2003 survey found that 62% of respondents had a computer at home and an even larger percentage used a computer at work (Mankiw et al., 2005). These events motivate a wide variety of microeconomic questions about innovative conduct in US commercial computing. In the first section of this review, I discuss these questions in light of six propositions commonly found in the themes of many studies:

- While technical frontiers in computing may stretch due to events reasonably described as “technology push,” a more substantial amount of valuable innovation arises endogenously in response to market incentives and market-oriented events;
- The diffusion and development of computing resembles the diffusion and development of a general-purpose technology (GPT), and as with such a technology, substantial costs arise from creating value by customizing the technology to the unique needs of users;
- The presence of computing platforms shapes incentives to innovate, and the unification or division of technical leadership shapes the distribution of value within and between platforms;
- Leading incumbent firms and new entrants face differential incentives to innovate when innovation reinforces or alters market structure;
- Market-based learning activity plays an essential role in innovative conduct, especially in enabling exploration of multiple approaches for translating the frontier into innovative and valuable goods and services;
- The localization of economic activity leads to a concentration of some types of innovative conduct in a small set of locations.
The constant improvement in performance supports the view that many changes in computing arise from “technology push.” That is, the invention pushed out the technical or scientific frontier, leading other commercial actors to search for valuable uses. There is a grain of truth to this view, but it also requires proper qualification. The supporting evidence is well known. Numerous prototypical technologies in computing found their way into products and services long after their invention. These inventions arose in university or commercial laboratories, sometimes as a by-product of basic scientific research goals and sometimes with no direct vision of their application to a valuable commercial activity. Then these inventions spread through academic papers, the licensing of patents, or the movement of computer scientists and engineers into companies. Of these inventions, many arose from prototypes built with large subsidies from government funding. For example, the original investment by DARPA (the Defense Advanced Research Projects Agency) in the fundamental science of packet switching did not lead to any immediate practical commercial products. Years of sustained funding, however, led to a set of events that broadly subsidized the invention and operation of the basic building blocks of the Internet, such as the experiments that led to the definition of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack and its practical implementation in a working communications and computing network. This funding occurred long before the commercial Internet was operational. What became the Internet began in the late 1960s as a research project of the Advanced Research Projects Agency of the United States Department of Defense, the ARPANET. From these origins sprang the building blocks of a new technology for a communications network, one based on sending data where some amount of delay was tolerated.
By the mid-1980s, the entire Internet used TCP/IP packet-switching technology to connect most universities and defense contractors. Management of large parts of the Internet was transferred to the NSF in the mid-1980s. Through NSFNET, the NSF was able to provide connections to its supercomputer centers and a high-speed backbone. Since use of NSFNET was limited to academic and research locations, carriers handling commercial traffic, such as UUNET, PSINET, and Sprint, developed their own private backbones for corporations looking to connect their systems with TCP/IP (Kahn, 1995). By the early 1990s, the NSF had developed a plan to transfer ownership of the Internet out of government hands and into the private sector. The plan for privatization was motivated by several factors. For example, it was forecast (correctly) that a privatized Internet would be more efficient than a government-operated one, leading to lower costs for all users. There was also a concern that the NSF could not fund the operations of the network indefinitely, and it was thought that privatization would put the network on more stable financial footing. During the transition another issue arose: several of the private providers of data services were chafing under the NSF’s “acceptable use” policy forbidding them to use government-owned assets for commercial purposes. Complete privatization would also remove this issue. The lack of government involvement could also be seen in other aspects of the Internet in the United States. For example, although the Federal Communications Commission (FCC) had mandated a standard for digital television (as it had for color television), it refrained from mandating most Internet equipment design decisions. Just as it had not mandated Ethernet design standards, the FCC let spectrum become available for experiments by multiple groups who competed over wireless Ethernet standards, which eventually became Wi-Fi.
Similarly, the FCC did not mandate a standard for modems, other than imposing requirements that limited interference. It also did not mandate an interconnection regulatory regime for Internet carriers in the 1990s, explicitly letting the firms innovate in the structure of their business dealings with one another and evolve those dealings as they saw fit.

Chapter 12: Pharmaceutical Innovation (Scherer)

The discovery and development of new pharmaceutical substances are among the most interesting of innovation processes. Unusually large privately financed research and development (R&D) outlays are required to achieve a successful new product, and the pharmaceutical industry’s R&D/sales ratios are extraordinarily high. The links to academic science and basic research performed in government laboratories are rich. The expectation of patent protection plays a more important role than in most other high-technology industries. New products must not only meet the test of market acceptance, but also survive rigorous scrutiny from government regulatory agencies. And the medical services market into which pharmaceuticals are sold is itself unusually complex, with a significant fraction of consumers’ purchases, at least in the wealthier nations, covered by insurance and hence subject to diverse moral hazard and adverse selection imperfections. Despite these problems, there is compelling evidence that the introduction of many new pharmaceutical products has yielded substantial net benefits in extending human lives and reducing the burden of disease (Lichtenberg, 2001, 2004, 2007; Long et al., 2006; Murphy and Topel, 2006). Among these various characteristics, we focus preliminarily on one: the high ratio of pharmaceutical companies’ R&D spending relative to their sales. The clearest indicator of this trait comes from data systematically collected over the years by the US industry trade association, previously called the Pharmaceutical Manufacturers Association (PMA) and more recently the Pharmaceutical Research and Manufacturers of America (PhRMA).
The evolution of pharmaceutical discovery away from unguided or at best intuitive random screening toward rational drug design and biological methods has led to increasingly rich linkages between the work of pharmaceutical companies on the one hand and academic science carried out in universities and governmentally supported research institutes on the other, both in the nations where the companies operate and across national boundaries. This has always been the case to some extent. The early work on sulfa drugs was conducted in German industrial laboratories by scientists trained at prominent German universities, which were at the time world leaders in chemical research and teaching. Penicillin moved quickly from the laboratories of Oxford University to numerous companies producing in quantity for the war effort. The first oral contraceptive was introduced by the G.D. Searle Company in 1960, a decade before the earliest date at which the trend toward rational drug design is said to have begun. But a study by the IIT Research Institute (1968, pp. 58–72) for the US NSF revealed an intricate “tree” of scientific discoveries extending back to 1849 that laid a foundation for the Searle contraceptive and later improvements. Cockburn and Henderson (2000) studied the histories of the 21 new drugs introduced between 1965 and 1992 with the highest overall therapeutic impact, as judged by industry experts. Among the 21, only five, or 24%, were developed with essentially no input from public-sector research. They contrast their results with an earlier analysis by Maxwell and Eckhardt (1990) concluding that 38% of an older sample of drugs was developed without public support. Academic science is transformed into pharmaceutical innovations through richly interconnected networks. Open science, to be sure, is available to pharmaceutical companies through journal articles and presentations at professional meetings. But in addition, there are tighter links.
Pharmaceutical companies provide financial support for academic researchers, and their staffs sometimes perform joint research with academic researchers and coauthor articles with them. They also enter into cooperative research and development agreements (CRADAs) with government laboratories such as the US National Institutes of Health, permitting joint research, joint publication, and (under the Stevenson-Wydler Act of 1980, extended through a 1986 amendment), assignment of resulting patents to the companies. In recent years, many pharma companies have opened new laboratories in the vicinity of top academic research institutions in order to facilitate cooperation. Quick absorption of the newest scientific discoveries is facilitated when traditional pharmaceutical companies support their own active programs of basic research. In 1993, for example, drug companies reported that 13.6% of their total company-financed R&D budgets was devoted to basic research, as defined by the NSF—18% of the basic research spending of all industries covered by the NSF survey for that year. For the average research-performing company across all industries except pharmaceuticals, basic research was 6.3% of total company-financed R&D spending. Even closer links between academia and industry are seen in the emergence of hundreds of small new biotech firms, which tend to locate near academic centers, have academic scientists as their founding entrepreneurs, and count distinguished academic researchers as members of their boards of directors and/or scientific advisory councils. Traditional “Big Pharma” companies in turn license molecules discovered in biotech startups for later-stage commercial development or, with increasing frequency, acquire the biotech companies outright, securing full ownership rights in their development “pipeline” molecules and adding staff associated with them to their own R&D staffs (Kettler, 2000).
In this way, they augment their inventories of interesting drug development candidates, among other things filling voids created when more traditional drug discovery approaches have yielded disappointing results. An indication of the extent to which firms introducing new drugs to the US market depended upon others for early-stage discoveries is provided through a study undertaken by the author. For the five years 2001–2005, the Food and Drug Administration’s Web site listing new medical entities approved for marketing during those years was searched. From information provided in the approval lists, the patents claimed by the drug developers as impediments to generic competition could be traced by searching the FDA’s so-called “Orange Book.” For the 85 new medical entities for which patent information was disclosed, 251 applicable patents were found, or an average of 2.95 patents per molecule. Altogether, 47% of the patents were assigned at the time of their issue to companies with names (abstracting from obvious name changes due to large-company mergers) different from the company authorized by the FDA to begin commercial marketing of the sample drugs. Patents issued in the earlier stages of development, that is, prior to January 2000, were more likely (54%) to be assigned originally to firms other than FDA approval recipients than patents issued in later years (38.4%). The difference is statistically significant. Evidently, the companies carrying out final-stage development and testing relied disproportionately upon outsiders for early-stage discovery. Among the 251 patents, 10.4% went to essentially academic institutions, that is, universities, hospitals, and independent research institutes. Seven percent went to universities, although a handful of the university assignments were joint with other institutions, including US government laboratories.
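As a quick sanity check on the "statistically significant" claim, a two-proportion z-test can be run on counts consistent with the reported shares. The per-period patent counts are not given in the text, so the figures below (139 pre-2000 patents, 112 later ones) are assumed values chosen to reproduce the reported 54%, 38.4%, and 47% shares:

```python
import math

# Hypothetical counts chosen to match the reported shares: of 251 patents,
# assume 139 were issued before January 2000 (54% to outside assignees)
# and 112 were issued later (38.4% to outside assignees).
n1, x1 = 139, 75   # pre-2000: 75/139 ~ 54% assigned to other firms
n2, x2 = 112, 43   # 2000 and later: 43/112 ~ 38.4% assigned to other firms

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                      # two-proportion z statistic

# Two-sided p-value via the standard normal CDF
phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
p_value = 2 * (1 - phi)

print(f"z = {z:.2f}, p = {p_value:.3f}")  # roughly z = 2.46, p = 0.014
```

Under these assumed counts the difference is indeed significant at the 5% level, in line with the text's claim.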
Seven of the 251 patents had multiple organizational assignees and 10 had only individual inventors as assignees. Many of the nonacademic patent assignees were biotech companies, although an exact breakdown was not possible because information on companies that have not yet “gone public” is scarce. It cannot be ruled out that at least some of the assignees with names different from that of the company receiving FDA approval had common stock partially or wholly controlled by larger corporate parents, notably, the companies receiving FDA approvals. Since 1938, when the Pure Food and Drug Act of 1906 was amended after approximately one hundred persons were killed by sulfanilamide adulterated with poisonous diethylene glycol (used for antifreeze), the interstate sale of new drugs has been prohibited in the United States unless the would-be drug provider obtains a safety certification (a New Drug Application, or NDA) from the Food and Drug Administration. A recurrent theme in this essay has been the presence of uncertainty. As the research and testing process progresses, uncertainties are gradually mitigated. Several sources put the number of alternative molecules subjected to early screening at between 4000 and 10,000 in order to have a single approved drug at the end of the process. According to PhRMA (2006, p. 4), the US industry association, a single approved drug emerges on average from five compounds entering clinical testing, 250 molecules subjected to animal and other laboratory tests, and 5000–10,000 molecules initially screened. As the number of drug candidates is winnowed, the costs of continued testing and hence the stakes in the game escalate. To be sure, pharmaceutical companies have some bases for predicting before marketing begins whether their new drug will enter a market with blockbuster potential or occupy a niche in which quasi-rents will at best be modest. Among other things, first movers typically enjoy larger market shares than latecomers.
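The PhRMA funnel quoted above implies sharply different survival rates at each stage; a small sketch makes the arithmetic explicit (taking the upper-bound figure of 10,000 screened molecules):

```python
# PhRMA (2006) funnel: number of candidates surviving each stage,
# using the upper-bound figure of 10,000 initially screened molecules.
stages = [
    ("initial screening",          10_000),
    ("animal/laboratory testing",     250),
    ("clinical testing",                5),
    ("FDA approval",                    1),
]

# Implied per-stage survival rates
for (name, n), (_, n_next) in zip(stages, stages[1:]):
    print(f"{name}: {n_next / n:.1%} of candidates advance")

overall = stages[-1][1] / stages[0][1]
print(f"overall: {overall:.2%} of screened molecules are approved")
```

Only the final clinical stage has a survival rate above a few percent (one in five), while the overall odds are one in ten thousand, which is why the stakes escalate as the field of candidates narrows.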
The expectation of patent protection on new products plays a particularly important role in pharmaceutical R&D decision making. Levin et al. (1987) surveyed 650 corporate R&D managers, asking them inter alia to evaluate on a scale of 1 (not at all effective) to 7 (very effective) the effectiveness of patents as a means of protecting the competitive advantages from new products. From 17 pharmaceutical industry respondents, the average score was 6.53, compared to a response-weighted average of 4.33 for all 130 surveyed lines of business. Among the industries with more than one respondent, pharmaceuticals ranked second in its patent protection effectiveness score. This result is consistent with the findings of Mansfield (1986), who asked the top R&D executives of 100 US corporations what fraction of the inventions they commercialized between 1981 and 1983 would not have been developed in the absence of patent protection. For pharmaceuticals, the average was 60%; for all industries, 14%. The importance of patents to pharmaceutical R&D decision makers stems not only from the large average investments in a typical new product and the many uncertainties lining the path to a new product approval. The differentiating factor is seen among other things through a comparison with another industry—aircraft—that taps a range of highly sophisticated technologies and spends billions of dollars developing the typical new product. For aircraft (both civilian and military), the average “effectiveness of product patents” score in the Levin et al. survey was 3.79—in the lowest third among 130 industry categories. The key difference lies in the relative ease of imitation, that is, how difficult it would be with versus without patent protection, for new product imitators to launch their own competing products.
Even without patents, the firm that would seek to imitate the Boeing 787 would have to build its own scale models, perform wind tunnel tests, compile detailed engineering drawings and specifications for all structural parts, work out electronic system interfaces, construct full-scale test models, test them both on the ground and in flight for structural soundness and aerodynamic performance, and much else, spending very nearly as much as Boeing did to develop its 787. Presumably, it would have observed Boeing’s design before undertaking the project, and by the time the imitator completed its developmental work, Boeing would be a decade ahead in sales and have progressed far down its learning curve, enjoying a substantial production cost advantage. But in pharmaceutical discovery and testing, much of the R&D is aimed at securing knowledge: knowledge of which molecules are therapeutically interesting, knowledge of which molecules work in animals, and most costly, knowledge as to whether a target drug is safe and efficacious in human beings. Once that knowledge is accumulated, absent patent protection, it is essentially there as a public good available to any interested party. Achieving it requires by recent US standards an investment measured in the hundreds of millions of dollars. But for most new drugs, and especially small-molecule drugs, a would-be generic imitator could spend a few million dollars on process engineering and enter the market with an exact knock-off copy. Generic entry in turn could quickly erode the quasi-rents anticipated by a pharmaceutical innovator to repay its R&D investment. Hence the importance attributed to patents by drug companies.
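The asymmetry between pharmaceuticals and aircraft can be made concrete with illustrative figures. All amounts below are assumed for illustration (in millions of dollars); the text gives only orders of magnitude:

```python
# Illustrative (assumed) figures, in millions of dollars, for the
# ease-of-imitation asymmetry described in the text.
def imitation_ratio(innovator_cost, imitator_cost):
    """Imitator's cost as a fraction of the innovator's R&D outlay."""
    return imitator_cost / innovator_cost

# Pharmaceuticals: hundreds of millions to discover and test a drug,
# a few million for a generic maker to copy it (assumed: 800 vs 5).
pharma = imitation_ratio(800, 5)

# Aircraft: the imitator must redo nearly all of the development work
# (assumed: 10,000 vs 9,000).
aircraft = imitation_ratio(10_000, 9_000)

print(f"pharma imitator pays {pharma:.1%} of the innovator's cost")
print(f"aircraft imitator pays {aircraft:.0%} of the innovator's cost")
```

Under these assumptions the generic imitator pays well under 1% of the innovator's cost, while the aircraft imitator pays about 90%, which is the mechanism behind the very different patent-effectiveness scores in the Levin et al. survey.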

Chapter 13: Collective Invention and Inventor Networks (Powell and Giannella)

From his studies of the disclosure of improvements in manufacturing processes within the iron industry, Allen suggested that the distinctive feature of collective invention is the exchange and circulation of ideas and practices among distributed networks of individuals located in diverse settings, rather than the housing of such efforts within the confines of particular firms. Building upon Allen’s (1983) work, Nuvolari (2004: p. 348), in his study of Cornish steam pumping engines, defines collective invention as a setting in which: “competing firms release information freely to one another on the design and the performance of the technologies they have just introduced.” Historical examples bear out the importance of collective invention in improving a number of notable technologies (Lamoreaux and Sokoloff, 2000; McGaw, 1987; Meyer, 2003; Scranton, 1997). A general lesson from numerous historical studies is that collective invention was an attempt to overcome the limitations of information access that accompanied extant economic and organizational structures. For some organizations, the inability to appropriate many types of technical improvements resulted in a lack of motivation to pursue internal research programs. Why invest in expensive exploratory efforts when the odds of capturing the fruits of research were low? Participation in collective efforts offered one solution. Many instances of collective invention today represent joint efforts at solving problems whose value cannot be appropriated by a single party, but which represent a bottleneck for the interdependent economic activities of participants. On the other side of the fence, some companies that are actively engaged in R&D may want their researchers to be involved in a larger technical community. Collective invention affords the chance for access to more diverse sources of knowledge, even if gaining control over these divergent ideas proves difficult. 
With time, many knowledge-sharing practices associated with collective invention can become institutionalized as a set of norms or agreements (David, 2008; Merton, 1979; Sabel and Zeitlin, 1985). In the case of the diffusion of the Bessemer steel process, a patent license that nearly all manufacturers signed had a clause that required any subsequent operational improvements to be disclosed. This mandated sharing of knowledge led to the establishment of a small community of practice among engineers from different firms and launched a productivity race between participants from different firms (Allen, 1983: p. 11). A variety of practices—such as mutually respected prices, collective training programs, and technological standards—that spread risks and dampened competition were commonplace across industrial districts. Nuvolari’s (2004) analysis of Cornish steam engines in the nineteenth century finds that the publication of advances in several trade outlets led to dramatic gains in the efficiency of the engines, due to the accumulation of myriad incremental improvements. Despite the variety of vibrant nineteenth century examples of collective invention, these efforts were largely displaced by the rise of the large corporate research and development (R&D) lab in the early twentieth century. For a time it seemed that these community efforts would be relegated to the annals of history. Over the past 30 years, however, the large corporate R&D lab has fallen in prominence. Many of the most notable corporate labs have been shuttered and dismantled. A second wave of collective invention is now shaping the rate and direction of technological change in numerous technologically advanced industries (Freeman and Soete, 2009). These processes of distributed innovation characterize a wide array of contemporary industries, from the early origins of the computer to the development of software to the genesis and evolution of biotechnology.
This transformation has been sparked by strategic, technical, and economic factors that influence the organization of innovative labor. Inventors with multiple contacts across organizations are more likely to be exposed to diverse ideas and benefit from them. Consequently, organizations attempt to position themselves in partnerships and alliances that foster connections across organizational boundaries, in hopes that novel ideas in one setting spark fresh approaches in another (Burt, 2004; Granovetter, 1973; Powell et al., 1996). Shared awareness of a technological frontier creates the circumstances for inventors to act in concert, regardless of the perceived tangible benefits for their organizations. The central technical drivers are shifts in technological opportunity, dictating the potential rate and direction of technological change (Malerba, 2007). The economic factors are demand (on economic demand vs. need, see Mowery and Rosenberg, 1979) and appropriability (Teece, 1986; Winter, 2006), which together represent necessary conditions for firms to invest in R&D. Yet history and social structure also loom large, as many authors have noted (David, 2008; Scranton, 1993). The particularities of industry evolution and the historical organization of technical communities are deeply intertwined with economic and technical calculations. Whether nineteenth century glass making or blast furnaces, or the contemporary life sciences and open-source software, relationships within a community of inventors and researchers are influenced by a confluence of social, political, and economic forces. We summarize these disparate factors as follows:

1. The need to spread the costs of invention across multiple organizations. By implication, few participants possess a sufficient theoretical understanding to pursue new ideas without incurring the high costs of unguided trial and error.
2. The inability to appropriate innovations, which creates a discrepancy between the private value and social value of invention. The private value of invention is too low for some firms to pursue a technology individually, but individuals within these firms are able to recognize its potential benefits; despite a lack of knowledge about demand and of strong intellectual property rights, collective invention allows for continued improvement of technical performance.
3. The emergence of norms and the identification of governance structures that encourage knowledge sharing among legally distinct parties.
4. Uncertainty about the direction a technology will evolve and the kinds of applications that may unfold, which encourages greater discussion within and across communities and provides an impetus for organizing.

Collective invention thus involves the combination of both open innovation and private interests. Participants move in and out of technical communities, and can use their connections for public or private gain. The important point, as Lakhani and Panetta (2007: pp. 104–105) observe in their work on open source, is that: “these systems are not ‘managed’ in the traditional sense of the word, that is, ‘smart’ managers are not recruiting staff, offering incentives for hard work, dividing tasks, integrating activities, and developing career paths. Rather, the locus of control and management lies with the individual participants who decide themselves the terms of interaction with each other.” (See the chapter by von Hippel for further discussion.) Hughes (1989) describes how the aerospace, computing, and communication industries acquired technological momentum with the injection of cash and the alignment of political and industrial interests behind the systems they produced.
For example, in the case of communications, common goals were eventually institutionalized via the ITU’s (International Telecommunication Union) implementation of standards that enabled regional telephone monopolies to interoperate. Systems engineers played the critical role in coordinating the development of various technological systems among dispersed organizations.

Chapter 14: The Financing of R&D and Innovation (Hall and Lerner)

It is a widely held view that research and development (R&D) and innovative activities are difficult to finance in a freely competitive marketplace. Support for this view in the form of economic-theoretic modeling is not difficult to find and probably begins with the classic articles of Nelson (1959) and Arrow (1962), although the idea itself was alluded to by Schumpeter (1942). The main argument goes as follows: the primary output of resources devoted to invention is the knowledge of how to make new goods and services, and this knowledge is nonrival: use by one firm does not preclude its use by another. To the extent that knowledge cannot be kept secret, the returns to the investment in knowledge cannot be appropriated by the firm undertaking the investment, and therefore such firms will be reluctant to invest, leading to the underprovision of R&D investment in the economy. Since the time when this argument was fully articulated by Arrow, it has of course been developed, tested, modified, and extended in many ways. For example, Levin et al. (1987) and Mansfield et al. (1981), using survey evidence, found that imitating a new invention in a manufacturing firm was not free, but could cost as much as 50–75% of the cost of the original invention. This fact will mitigate but not eliminate the underinvestment problem. However, Arrow’s influential paper also contains another reason for underinvestment in R&D, again one which was foreshadowed by Schumpeter and which has been addressed by subsequent researchers in economics and finance: the argument that an additional gap exists between the private rate of return and the cost of capital when the innovation investor and financier are different entities.
This chapter concerns itself with this second aspect of the market failure for R&D and other investments in innovation: even if problems associated with incomplete appropriability of the returns to R&D are solved using intellectual property protection, subsidies, or tax incentives, it may still be difficult or costly to finance such investments using capital from sources external to the firm or entrepreneur. That is, there is often a wedge, sometimes large, between the rate of return required by an entrepreneur investing his own funds and that required by external investors. By this argument, unless an inventor is already wealthy, or firms already profitable, some innovations will fail to be provided purely because the cost of external capital is too high, even when they would pass the private returns hurdle if funds were available at a “normal” interest rate. Venture capital can be defined as independently managed, dedicated capital focusing on equity or equity-linked investments in privately held, high-growth companies. Typically, these funds are raised from institutional and wealthy individual investors, through partnerships with a decade-long duration. These funds are invested in young firms, usually in exchange for preferred stock with various special privileges. Ultimately, the venture capitalists sell these firms to corporate acquirers or else liquidate their holdings after taking the firms public. The first venture firm, American Research and Development, was formed in 1946 and invested in companies commercializing technology developed during the Second World War. Because institutions were reluctant to invest, it was structured as a publicly traded closed-end fund and marketed mostly to individuals, a structure emulated by its successors. The subsequent years saw both very good and trying times for venture capitalists. Venture capitalists backed many successful companies, including Apple Computer, Cisco, Genentech, Google, Netscape, Starbucks, and Yahoo! 
But commitments to the venture capital industry were very uneven, creating a great deal of instability. The annual flow of money into venture funds increased by a factor of 10 during the early 1980s. From 1987 through 1991, however, fund raising steadily declined as returns fell. Between 1996 and 2003, this pattern was repeated. Later in this chapter, we discuss the reasons behind this cyclicality. Venture capital investing can be viewed as a cycle. In this section, we follow the cycle of venture capital activity. We begin with the formation of venture funds. We then consider the process by which such capital is invested in portfolio firms, and the exiting of such investments. We end with a discussion of open research questions, including those relating to internationalization and the real effects of venture activity. Venture capitalists usually make investments with peers. The lead venture firm involves other venture firms. One critical rationale for syndication in the venture industry is that peers provide a second opinion on the investment opportunity and limit the danger of funding bad deals. Lerner (1994a) finds that in the early investment rounds experienced venture capitalists tend to syndicate only with venture firms that have similar experience. He argues that, if a venture capitalist were looking for a second opinion, then he would want to get one from someone of similar or greater ability, certainly not from someone of lesser ability. The advice and support provided by venture capitalists is often embodied in their role on the firm’s board of directors. Lerner (1995) examines whether venture capitalists’ representation on the boards of the private firms in their portfolios is greater when the need for oversight is larger, looking at changes in board membership around the replacement of CEOs. 
He finds that an average of 1.75 venture capitalists are added to the board between financing rounds when a firm’s CEO is replaced in the interval; between other rounds 0.24 venture directors are added. No differences are found in the addition of other outside directors. Initial research into the exiting of venture investments focused on IPOs, reflecting the fact that the most profitable exit opportunity is usually an IPO. Barry et al. (1990) and Megginson and Weiss (1991) document that venture capitalists hold significant equity stakes and board positions in the firms they take public, which they continue to hold a year after the IPO. They argue that this pattern reflects the certification they provide to investors that the firms they bring to market are not overvalued. Moreover, they show that venture-backed IPOs have less of a positive return on their first trading day, a finding that has been subsequently challenged (Kraus, 2002; Lee and Wahal, 2004). The authors suggest that investors need a smaller discount because the venture capitalist has certified the offering’s quality. Subsequent research has examined the timing of the exit decision. Several potential factors affect when venture capitalists choose to bring firms public. Lerner (1994b) examines how the valuation of public securities affects whether and when venture capitalists choose to finance companies in another private round in preference to taking the firm public. He shows that investors tend to take the firm public when the market value is high, relying on private financings when valuations are lower. Seasoned venture capitalists appear more proficient at timing IPOs. This finding is consistent with the work by Brown, Fazzari, and Petersen on the importance of public equity financing of R&D during the 1990s stock market boom. 
While the overall level of venture capital returns does not exhibit abnormal returns relative to the market (Brav and Gompers, 1997), there is a distinct rise and fall around the time of the stock distribution. The results are consistent with venture capitalists possessing inside information and with the (partial) adjustment of the market to that information. A related research area is venture-fund performance. Kaplan and Schoar (2005) show substantial performance persistence across consecutive venture funds with the same general partners. General partners that outperform the industry in one fund are likely to outperform in the next fund, while those who underperform in one fund are likely to underperform with the next fund. These results contrast with those of mutual funds, where persistence is difficult to identify. A first open research area concerns the cyclicality of venture fund raising. Poterba (1987, 1989) notes that the fluctuations could arise from changes in either the supply of or the demand for venture capital. It is very likely, he argues, that decreases in capital gains tax rates increase commitments to venture funds, even though the bulk of the funds are from tax-exempt investors. The drop in the tax rate may spur corporate employees to become entrepreneurs, thereby increasing the need for venture capital. The increase in demand due to greater entrepreneurial activity leads to more venture fund raising. Gompers and Lerner (1998b) find empirical support for Poterba’s claim: lower capital gains taxes have particularly strong effects on venture capital supplied by tax-exempt investors. This suggests that the primary mechanism by which capital gains tax cuts affect venture fund raising is the higher demand of entrepreneurs for capital. The authors also find that a number of other factors influence venture fund raising, such as regulatory changes and the returns of venture funds. A second area is even thornier: the impact of venture capital on the economy.
While theorists have suggested a variety of mechanisms by which venture capital may affect innovation, the empirical record is more mixed. It might be thought that establishing a relationship between venture capital and innovation would be straightforward. For instance, one could test in regressions across industries and over time whether, controlling for R&D spending, venture capital funding has an impact on various measures of innovation. But even a simple model of the relationship between venture capital, R&D, and innovation suggests that this approach is likely to give misleading estimates. One of the first papers to address the question at the firm level, Hellmann and Puri (2000), examines a sample of 170 recently formed firms in Silicon Valley, including both venture-backed and nonventure firms. Using questionnaire responses, they find empirical evidence that venture capital financing is related to product market strategies and outcomes of startups. They find that firms that are pursuing what they term an innovator strategy (a classification based on the content analysis of survey responses) are significantly more likely to obtain venture capital and also obtain it more quickly. The presence of a venture capitalist is also associated with a significant reduction in the time taken to bring a product to market, especially for innovators. Furthermore, firms are more likely to list obtaining venture capital as a significant milestone in the lifecycle of the company as compared to other financing events. The results suggest significant interrelations between investor type and product market dimensions, and a role of venture capital in encouraging innovative companies. Given the small size of the sample and the limited data, they can only modestly address concerns about causality. Unfortunately, the possibility remains that more innovative firms select venture capital for financing, rather than venture capital causing firms to be more innovative.
Kortum and Lerner (2000), by way of contrast, examine whether these patterns can be discerned on an aggregate industry level, rather than on the firm level. They address concerns about causality in two ways. First, they exploit the major discontinuity in the recent history of the venture capital industry: as discussed above, in the late 1970s, the US Department of Labor clarified the Employee Retirement Income Security Act, a policy shift that freed pensions to invest in venture capital. This shift led to a sharp increase in the funds committed to venture capital. This type of exogenous change should identify the role of venture capital, because it is unlikely to be related to the arrival of entrepreneurial opportunities. They exploit this shift in instrumental variable regressions. Second, they use R&D expenditures to control for the arrival of technological opportunities that are anticipated by economic actors at the time, but that are unobserved to econometricians. In the framework of a simple model, they show that the causality problem disappears if they estimate the impact of venture capital on the patent–R&D ratio, rather than on patenting itself. Even after addressing these causality concerns, the results suggest that venture funding does have a strong positive impact on innovation. The estimated coefficients vary according to the techniques employed, but on average a dollar of venture capital appears to be three to four times more potent in stimulating patenting than a dollar of traditional corporate R&D. The estimates therefore suggest that venture capital, even though it averaged less than 3% of corporate R&D from 1983 to 1992, is responsible for a much greater share—perhaps 10%—of US industrial innovations in this decade.
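The identification logic can be sketched with toy data: with a single instrument and a single endogenous regressor, two-stage least squares reduces to the Wald estimator cov(z, y)/cov(z, x). The data and coefficients below are invented for illustration and have nothing to do with Kortum and Lerner's actual dataset or specification:

```python
# Toy illustration of the instrumental-variable idea (all data made up).
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# z: policy instrument (e.g., an exogenous shift in pension rules),
# u: unobserved technological opportunity (the confounder),
# x: venture funding, y: patenting.  The true causal effect of x on y is 3.
z = list(range(10))
u = [1, -1, 1, -1, 1, 1, -1, 1, -1, 1]         # orthogonal to z by symmetry
x = [2 * zi + ui for zi, ui in zip(z, u)]      # funding responds to z and u
y = [3 * xi + 4 * ui for xi, ui in zip(x, u)]  # patents respond to x and u

beta_ols = cov(x, y) / cov(x, x)  # biased: u moves both x and y
beta_iv = cov(z, y) / cov(z, x)   # Wald/2SLS estimate with one instrument

print(f"OLS: {beta_ols:.3f}, IV: {beta_iv:.3f}")
```

Because the confounder u moves both funding and patenting, OLS overstates the effect, while the instrument, which shifts funding but is unrelated to u, recovers the true coefficient of 3. This is the same logic by which the ERISA clarification serves as an exogenous shifter of venture capital supply.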
These findings have been supported by a recent working paper by Mollica and Zingales (2007), who also use an instrumental variable approach based on state pension fund resources to look at the relationship of venture capital and innovation and find a strong relationship. Many governments have also sought to stimulate early-stage finance directly. Examples of such programs are the US Small Business Investment Company (SBIC) and Small Business Innovation Research (SBIR) programs. Together, these programs disbursed $2.4 billion in 1995, more than 60% of the amount from venture capital in that year (Lerner, 1998). In Germany, more than 800 federal and state government financing programs have been established for new firms in the recent past (OECD, 1995). In 1980, the Swedish government established the first of a series of investment companies (along with instituting a series of measures such as reduced capital gains taxes to encourage private investments in startups), partly on the US model. By 1987, the government share of venture capital funding was 43% (Karaömerlioğlu and Jacobsson, 1999). Recently, the United Kingdom has instituted a series of government programs under the Enterprise Fund umbrella which allocate funds to small- and medium-sized firms in high technology and certain regions, as well as guaranteeing some loans to small businesses (Bank of England, 2001). There are also programs at the European level. A limited amount of evidence, most of it US based, exists as to the effectiveness and “additionality” of these programs (see Lerner, 2009 for a review of the key programs and their evaluations). In most cases, evaluating the success of the programs is difficult due to the lack of a “control” group of similar firms that do not receive funding. Therefore, most of the available studies are based on retrospective survey data provided by the recipients; few attempt to address the question of performance under the counterfactual seriously.
A notable exception is the study by Lerner (1999), who looks at 1435 SBIR awardees and a matched sample of firms that did not receive awards, over a 10-year postaward period. Because most of the firms are privately held, he is unable to analyze the resulting valuation or profitability of the firms, but he does find that firms receiving SBIR grants grow significantly faster than the others after receipt of the grant. He attributes some of this effect to “quality certification” by the government that enables the firm to raise funds from private sources as well. See Jaffe (2002) for a review of methodologies for evaluating such government programs. For a complete review of the SBIR program, including some case studies, see the National Research Council (2002). Also see Spivack (2001) for further studies of such programs, including European studies, and David et al. (2000) and Klette et al. (2000) for surveys of the evaluation of government R&D programs in general. A series of papers by Czarnitzki and coauthors (Aerts and Czarnitzki, 2006; Almus and Czarnitzki, 2003; Czarnitzki and Hussinger, 2004) have looked at the performance of firms that receive public R&D subsidies in several European countries such as Belgium and Germany, using treatment effect analysis. They generally find that such subsidies do not completely displace private expenditure on R&D (i.e., they are additional) and that they are productive in the sense that they result in patenting by the firm. Hall and Maffioli (2008) survey a similar set of results for large Latin American economies and reach a more nuanced conclusion. Summing up the evidence on financing innovation: first, there is fairly clear evidence, based on theory, surveys, and empirical estimation, that small and startup firms in R&D-intensive industries face a higher cost of capital than their larger competitors and firms in other industries.
In addition to compelling theoretical arguments and empirical evidence, the mere existence of the VC industry and the fact that it is concentrated precisely where these startups are most active suggests that this is so. The fact that ex post venture returns may lag the market, however, remains a puzzle and makes a clear-cut conclusion more complex. Second, the evidence for a financing gap for large and established R&D firms is harder to establish. It is certainly the case that these firms prefer to use internally generated funds for financing investment, but less clear that there is an argument for intervention, beyond the favorable tax treatment that currently exists in many countries. Third, the VC solution to the problem of financing innovation has its limits: first, it tends to focus only on a few sectors at a time, and to make investments with a minimum size that is too large for startups in some fields; second, good performance of the VC sector requires a thick market in small and new firm stocks (such as NASDAQ) in order to provide an exit strategy for early stage investors; and third, introducing a VC sector into an economy where it is not already present is nontrivial, as it requires the presence of at least three interacting institutions: investors, experienced venture fund managers, and a market for IPOs. Fourth, the effectiveness of government incubators, seed funding, loan guarantees, and other such policies for funding R&D deserves further study, ideally in an experimental or quasi-experimental setting. In particular, studying the cross-country variation in the performance of such programs would be desirable, because the outcomes may depend to a great extent on institutional factors that are difficult to control for using data from within a single country.

Chapter 15: The Market for Technology

One consequence of the existence of well-functioning markets for technology is that they create incentives for vertical specialization. This is just a straightforward application of the classical theory of the division of labor. Indeed, as Table 2 shows, in the United States, the revenues of establishments that supply scientific R&D services (NAICS 5417) are sizable: around $75 billion in 2004 and $85 billion in 2005. These establishments are highly R&D intensive, and perform about 5% of the total industrial R&D. This is consistent with other data reported by the NSF which indicate that contract R&D (the bulk of which was contracted to other companies) grew from 3.7% of total company-funded R&D in 1993 to 5.6% in 2003, the latest year for which data are available. The pharmaceutical sector stands out in the extent to which R&D was outsourced, with 13.2% of R&D outsourced in 2005. These data clearly point to the substantial specialization in R&D, which is a rough indicator of the extent of what we call the division of innovative labor. It is also likely that the United States is in the vanguard of this trend. Comparable data, if available, would likely show a less extensive division of labor in Europe and Japan. Consistent with the rise of technology specialists, large firms account for a steadily smaller fraction of R&D performed in the United States. Figure 4 shows that the share of nonfederal R&D accounted for by large firms, defined as those with more than 25,000 employees, has fallen steadily from around two-thirds in 1980 to slightly more than one-third in 2005. Over the same period, small firms, defined as those with fewer than 500 employees, have increased their share from 6% to around 18%. Firms in the next size category (500–999 employees) have seen a similar increase.
Doubtless this reflects changes in the industrial structure of the United States, but it also points to the growing ability of small firms to appropriate rents from their innovations, perhaps through licensing them to others.

Chapter 16: Technological Innovation and the Theory of the Firm: The Role of Enterprise-Level Knowledge, Complementarities, and (Dynamic) Capabilities

The first organized research laboratory in the United States was established by the inventor Thomas Edison in 1876. In 1886, an applied scientist by the name of Arthur D. Little started his firm, which became a major technical services/consulting firm to other enterprises. Corporate laboratories on the German model began to appear in the United States soon after the Sherman Antitrust Act of 1890 steered companies to look for new ways to gain an advantage over rivals. Significant R&D labs were founded in the years before World War I at Eastman Kodak (1893), B.F. Goodrich (1895), General Electric (1900), Dow (1900), DuPont (1902), Goodyear (1909), and American Telephone and Telegraph (AT&T; 1907). Independent research organizations like Arthur D. Little and the Mellon Institute continued to grow during the early twentieth century, but were surpassed by the rapid expansion of in-house research (Mowery, 1983). However, the many technology contracting problems and the efficiencies achievable from integration with manufacturing meant that external R&D could only serve as a complement, not a substitute, for in-house research (Armour and Teece, 1980). The founding of formal R&D programs and laboratories stemmed in part from competitive threats. For instance, AT&T at first followed the telegraph industry’s practice of relying on the market for—that is, it outsourced—technological innovation. However, the expiration of the major Bell patents and the growth of large numbers of independent telephone companies helped stimulate AT&T to organize Bell Labs to generate inventions and innovations internally. Competition likewise drove George Eastman to establish laboratories at Kodak Park in Rochester, New York, to counteract efforts by German dyestuff and chemical firms to enter into the manufacture of photographic chemicals and film. During the early years of the twentieth century, the number of research labs grew dramatically.
By World War I there were perhaps as many as one hundred industrial research laboratories in the United States. The number tripled during World War I, and industrial R&D even maintained its momentum during the Great Depression. The number of scientists and research engineers employed by these laboratories grew from 2775 in 1921 to almost 30,000 by 1940. As international tensions increased during the Cold War, government funding grew considerably. In 1957, government funding of R&D performed by industry eclipsed the funding provided by the firms themselves. By 1967, it went back the other way, with private funding taking the lead. By 1975, industry funding of industry-conducted R&D was twice the federal level and the ratio was expanding. Government procurement was perhaps even more important to the technological development of certain industries, as it facilitated early investment in production facilities, thus easing the cost of commercialization. The newly emergent electronics industry in particular was able to benefit from the Defense Department’s demand for advanced products. By 1960, the US electronics industry had come to rely on the federal government for 70% of its R&D dollars (which may have cost US firms their leadership in consumer electronics as they became preoccupied with the more performance-oriented requirements of the US military). By the early 1970s, management was beginning to lose faith in the science-driven approach to innovation, primarily because few blockbuster products had emerged from the research funded during the 1950s–1970s. Competition became more global, leaving firms less certain of cash flow from their domestic market for funding R&D. New technology was not converted into new products and processes rapidly enough, confronting many companies with the paradox of being leaders in R&D and laggards in the introduction of innovative products and processes.
The fruit of much R&D was appropriated by domestic and foreign competitors, and much technology languished in research laboratories. In telecommunications, Bell Labs’ contribution to the economy at large far outstripped its contribution to AT&T. In the semiconductor industry, Fairchild’s large research organization contributed more to the economy through the spin-off companies it spawned than to its parent. Xerox Corporation’s Palo Alto Research Center made stunning contributions to the economy in the area of the personal computer, local area networks, and the graphical user interface that became the basis of Apple’s Macintosh computer (and, later, of Microsoft’s Windows). Xerox shareholders, however, were not well served, as most of the benefits ended up in the hands of Xerox’s competitors or of companies in adjacent industries. Different modes of organization and different funding priorities were needed. Knowledge throughout the firm had to be embedded in new products promptly placed into the marketplace. A new way of conducting R&D and commercializing new products was needed. By the 1980s and 1990s, a new model for organizing research became apparent. First, inside large corporations, R&D activity came to be decentralized, with the aim of bringing it closer to users and customers. By the mid-1990s, Intel, the world leader in microprocessors, was spending over $1 billion per year on R&D, yet did not have a separate R&D laboratory. Rather, development was conducted in the manufacturing facilities. Intel did not invest in fundamental research at all apart from its funding of university research and some research activities located on or near university campuses. Second, many companies were looking to the universities for much of their basic or fundamental research, maintaining close associations with the science and engineering departments at the major research universities.
Indeed, the percentage of academic research funded by industry, which had declined to 2.5% by 1966, rose steadily to 7.4% in 1999, declining since then to about 5%, its level in the early 1980s (National Science Board, 2008, Appendix Table 4-3). Strong links between university research and industrial research are present in electronics (especially semiconductors), chemical products, medicine, and agriculture. For the most part, university researchers are insufficiently versed in the particulars of specific product markets and customer needs to help configure products to the needs of the market. Third, corporations have embraced horizontal, vertical, and lateral alliances involving R&D, manufacturing, and marketing in order to get products to market quicker and leverage off complementary assets and capabilities already in place elsewhere. A variant on this strategy is the new product-oriented corporate acquisition, employed as a vital complement to in-house R&D, perhaps most notably by Cisco, which has spent billions to acquire dozens of companies with products that had been recently placed into the market (Mayer and Kenney, 2004). It is important to note, however, that outsourced R&D is a complement, not a substitute, to in-house activities. Outsourcing and codevelopment arrangements had become common by the 1980s and 1990s (e.g., Pratt & Whitney’s codevelopment programs for jet engines, or the IBM-Sony-Toshiba alliance for the development of the Cell processor) as the costs of product development increased, especially after the antitrust laws were modified to recognize the benefits of cooperation in R&D and related activities. Cooperation was also facilitated by the emergence of capable potential partners in Europe and Japan. These developments meant that at the end of the twentieth century, R&D was being conducted in quite a different manner from how it was organized at the beginning.
Many corporations had closed, or dramatically scaled back, their central research laboratories, including Westinghouse, RCA, AT&T, US Steel, and Unocal to name just a few. Alliances and cooperative efforts of all kinds were of much greater importance. Many firms are now sourcing much of their innovation externally, following an “open” innovation model (Chesbrough, 2006). Moreover, much of the momentum for commercializing innovations had shifted to venture capital-funded “start-ups.” By the 1980s, private venture funds began to have a transformative effect on the US industrial landscape, particularly in biotech and information technology. They dramatically increased the funds that were available to, as well as the professionalism of, entrepreneurs. In many ways these new, agile venture-funded enterprises still depended on the organized R&D labs for their birthright. Some start-ups were exploiting technological opportunities that incumbents had considered and rejected. The long lead time needed to commercialize early stage research (and the potential for leakage to domestic and foreign rivals) was difficult for management to justify. Venture funds were also generally uninterested in funding exploratory research. This has left basic and applied research in some sectors (like communications) with a diminished funding base. Some observers fear that society is “eating its seed corn.” Another prominent example of the ecosystem impacting innovation is the Internet. The basic technology and structure of the Internet has its origins in university research applied in the late 1960s by Bolt, Beranek, and Newman, a contractor to the US Department of Defense, to build a network connecting researchers with government contracts to government-sponsored computers in order to maximize resource utilization. ARPANET gradually extended its reach around the world and was merged in 1983 with similar networks to form the Internet.
There has been considerable debate and scholarly attention to the role of market structure in determining firm-level innovation. Schumpeter was among the first to declare that perfect competition was incompatible with innovation. The hypothesis often attributed to him (see Schumpeter, 1942, especially Chapter VIII) posits that profits accumulated through the exercise of monopoly power (assumed to be correlated with large firms) are a key source of funds to support risky and costly innovative activity. These predictions, even as a matter of theory, are not well grounded in the financial realities of the firm (Kamien and Schwartz, 1978). Any theory of market power as a funding mechanism for innovation in specific markets is further weakened if the multiproduct (multi-industry) firm is admitted onto the economic landscape. The multiproduct structure allows cash generated anywhere to be directed to high-yield purposes everywhere inside the firm. The fungibility of cash inside the multiproduct firm thus severs any tight causal relationship between market power (which is a market-specific concept) and innovation. The Schumpeterian notion that small entrepreneurial firms lack adequate financial resources for innovation seems at odds with his earlier views (1934) on entrepreneur-led innovation and seems archaic in today’s circumstances, where venture capital-funded enterprises play such a large role in innovation (Gompers and Lerner, 2001). From time to time, public equity markets have also funded relatively early stage biotech and Internet companies with minimal revenues and negative earnings. Another setback for the various Schumpeterian market structure-innovation hypotheses is that the logic can run the other way: namely, that innovation shapes market structure. Success garnered from innovation can lead to market concentration, as it has with Intel and Microsoft, and as it once did with the Ford Motor Company and Xerox.
Various reviews of the extensive literature on innovation and market structure generally find that the relationship is weak or holds only when controlling for particular circumstances (Cohen and Levin, 1989; Gilbert, 2006; Sutton, 2001). The emerging consensus (Dasgupta and Stiglitz, 1980; Futia, 1980; Levin and Reiss, 1984, 1988; Levin et al., 1985; Nelson and Winter, 1978) is that market concentration and innovation activity most probably either coevolve (Metcalfe and Gibbons, 1988) or are simultaneously determined. Context (stage in the industry life cycle; technological environment) is likely to matter. One prominent feature of the environment is the abundance (or scarcity) of technological opportunities. In an industry with lots of technological opportunities, innovation is expected to be relatively easy due to a lower expected development cost and/or a plentiful supply of relevant and available knowledge. For example, university and government (funded) research in science and technology help create vibrant technological environments with multiple sources of new technology, fueling venture-funded new businesses. Biotech is a case where US government funds distributed through the National Institutes of Health have helped to create technological opportunities which are then seized upon and developed further by new venture-funded startups. While most of these companies fail, enough survive to impact the structure of the pharmaceutical industry.

Chapter 17: The Diffusion of New Technology (Stoneman and Battisti)

Another indicator, this time of the periods taken by international diffusion processes, is that the level of phones per capita in the United States in 1910 was not matched by India until 90 years later. Nor are such differences only between advanced and developing economies. Comin et al. (2006) list a number of facts that they claim are typical of international diffusion processes. (i) Cross-country dispersion in technology adoption for individual technologies is three to five times larger than cross-country dispersion in income per capita. (ii) The relative position of countries according to the degree of technology adoption is very highly correlated across technologies. This correlation declines significantly within the OECD. (iii) There is convergence within technologies at an average speed between 4% and 7% per year. (iv) The cross-country speed of convergence within technologies after 1925 is about three times larger than for technologies developed before 1925.
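Fact (iii) implies concrete time horizons. A quick sketch, assuming the standard exponential interpretation of a convergence speed (my reading of the figure, not the authors' exact specification):

```python
import math

# If the cross-country adoption gap shrinks at rate beta per year, then
# gap(t) = gap(0) * exp(-beta * t), so the gap halves every ln(2)/beta years.
def half_life_years(beta):
    return math.log(2) / beta

for beta in (0.04, 0.07):  # the 4-7% range reported by Comin et al.
    print(f"beta = {beta:.0%}: gap halves in ~{half_life_years(beta):.0f} years")
```

At those speeds the adoption gap for a given technology halves roughly every decade to decade and a half, which is consistent with the 90-year telephone example above once one accounts for the very large initial gap.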

There is still some controversy regarding the impact of market concentration upon the diffusion of innovations. Fudenberg and Tirole (1985) and also Riordan (1992) suggest that, in contrast to the Schumpeterian view, competition leads to faster technology adoption. However, in highly competitive markets, where the gains to adoption may be short-lived because rival firms mimic innovations, firms might not be motivated to adopt productivity-improving innovation practices of various kinds. The lack of unanimous consensus on the direction and the extent of the impact of market concentration upon technology adoption is reflected in a number of studies (see among others Battisti, 2000; Battisti and Pietrobelli, 2000; Götz, 1999; Mansfield, 1968; Quirmbach, 1986; Reinganum, 1981; Romeo, 1975).

Chapter 18: General Purpose Technologies (Bresnahan)

A related but distinct question is how and why GPTs emerge. Do important general principles, perhaps from science, create technological opportunity that can be widely exploited? Does work on critical demand needs induce technical progress of general importance? Or is the process of market invention that leads to GPTs more complex than either of those views? Together with the historical observation that problems of coordination and of slow diffusion (perhaps because of the need for complementary investments) loom large, this led us to GPTs. These dual motivations led to a basic definition of GPTs with three parts: a GPT (1) is widely used, (2) is capable of ongoing technical improvement, and (3) enables innovation in application sectors (AS). The combination of assumptions (2) and (3) is called “innovational complementarities” (IC).

Finally, just as it would be a mistake to say that steam power came out of nowhere to create the age of steam, so too it would be a mistake not to note the many ideas closely related to GPTs, some much older. Clearly there is a relationship between the idea of a GPT and the idea of a technoeconomic paradigm (Dosi, 1982). Similarly, there is a relationship to the idea of a macro invention (Mokyr, 2002) and to a strategic invention (Usher, 1954). Finally, many industries have the concept of an enabling technology, by which they mean a GPT.

Long swings in productivity growth can arise simply because of those long and variable lags. Do we need a theory in which there was something going wrong to explain the productivity slowdowns of the late twentieth century or of the late nineteenth/early twentieth century? An alternative explanation, arising from my second story, is that in each case relaxation of earlier constraints had begun to slow down in its growth impact and relaxation of new constraints had not yet begun to occur at a high pace. In the late twentieth century slowdown, for example, the gains to automating blue-collar work were slowing and the gains to automating white-collar work associated with computerization had yet to cut in. Indeed, this appears to be the most attractive theory of the late twentieth century “productivity slowdown” and its later reversal. Many people (foolishly) concluded that there must have been something going wrong with firms’ investments in ICT technology during the early phase of the diffusion of computers. We now know that the problem with “we see computers everywhere around us except in the productivity statistics” was not with productivity, but with looking at computers in economics departments rather than in firms. At the time of Solow’s remark, the ICT capital stock was far too small to have (yet) created a growth boom, even though the private returns to use of computers were very substantial.
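The "too small to matter (yet)" point is simple growth accounting. A minimal sketch (the income share and growth rate below are illustrative assumptions of mine, not figures from the chapter):

```python
# Growth accounting: a capital type's contribution to output growth is
# approximately (its income share) x (its own growth rate).
def growth_contribution(income_share, growth_rate):
    return income_share * growth_rate

# Hypothetical early-diffusion numbers: ICT earns ~1% of income, grows 20%/yr.
early = growth_contribution(0.01, 0.20)   # ~0.2 percentage points of growth
# With a matured ICT income share of ~5%, the same growth rate shows up
# five times as strongly in the aggregate statistics.
later = growth_contribution(0.05, 0.20)
print(f"early: {early:.2%}, later: {later:.2%}")
```

However fast the ICT stock grows in percentage terms, its aggregate footprint is capped by its income share, which is why Solow's computers were invisible in the productivity statistics of the 1980s.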

Chapter 19: International Trade, Foreign Direct Investment, and Technology Spillovers (Keller)

Why does the international diffusion of technology matter? Productivity differences explain a large part of the variation in incomes across countries, and technology plays the key role in determining productivity. For most countries, foreign sources of technology are estimated to account for 90% or more of domestic productivity growth. Although the contribution of India, China, and a number of other countries is rising, most of the world’s technology creation occurs in only a handful of rich countries. (The largest seven industrialized countries accounted for about 84% of the world’s research and development (R&D) spending in 1995, for example, while their share in world GDP was only 64%.) The pattern of worldwide technical change is thus determined in large part by international technology diffusion.

How can the theory laid out in Section 2 be used to think about the findings that were just discussed? First, there is the finding of geographic localization of international technology diffusion. This seemingly puzzling result (after all, is technological knowledge not weightless?) is easily explained if one considers the transactions costs of international commerce more broadly. Yes, there are trade costs for shipping technology in embodied form, but it is also costly to communicate disembodied technological knowledge, especially if it cannot be done face-to-face. As firms equate trade and technology transfer costs at the margin, technology transfer falls with the distance between technology sender and recipient, even though technological knowledge is weightless. Technology diffusion declines with distance because in equilibrium technology transfer to remote locations is relatively costly, so there is less of it.
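The marginal argument can be made concrete with a toy model (entirely my own illustration, not the chapter's; the functional forms and parameters are arbitrary): let the marginal benefit of transferring technology decline in the amount transferred, and let the marginal cost of transfer rise with distance. Equating the two at the margin gives less transfer to more distant recipients:

```python
def optimal_transfer(distance, a=10.0, b=1.0, c0=1.0, c1=0.5):
    """Amount transferred where marginal benefit a - b*q equals marginal
    cost c0 + c1*distance; zero once cost exceeds any possible benefit."""
    return max(0.0, (a - (c0 + c1 * distance)) / b)

for d in (0, 5, 10, 20):
    print(f"distance {d}: transfer {optimal_transfer(d):.1f}")
```

The knowledge itself never gains weight; only the equilibrium cost of moving it does, which is exactly the resolution of the "weightless knowledge" puzzle in the text.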

Chapter 20: Innovation and Economic Development (Fagerberg, Srholec, Verspagen)

Is innovation important for development? And if so, how? The answers to these questions depend, we will argue, on what is meant by the term innovation. One popular perception of innovation, the one met in the media every day, is that it has to do with developing brand new, advanced solutions for sophisticated, well-off customers, through exploitation of the most recent advances in knowledge. Such innovation is normally seen as carried out by highly educated labor in research and development (R&D) intensive companies, whether large or small, with strong ties to leading centers of excellence in the scientific world. Hence innovation in this sense is a typical “first world” activity. It is fair to say that the question of how technology and innovation influence economic development is a controversial issue, and has been so for a long time (Fagerberg and Godinho, 2004). In Section 2 of this chapter we trace the discussions back to Thorstein Veblen’s writings about Germany’s industrialization nearly a century ago. Here Veblen pointed to some of the issues, such as the nature of technology and the conditions for technological catch-up, that have been central to the discussion to the present day. In fact, he was very optimistic about the possibilities for technological and economic catch-up by poorer economies. This optimistic mood came to be shared by neoclassical economists when they, nearly half a century later, turned their attention to the same issues. In this conception of reality, technology was assumed to be a so-called “public good,” freely available for everyone everywhere.
Hence, a common interpretation of neoclassical growth theory (Solow, 1956) has been that catch-up and convergence in the global economy will occur automatically (and quickly) as long as market forces are allowed to “do their job.” However, writers from several other strands, such as economic historians, with Alexander Gerschenkron (1962) as the prime example, or economists inspired by the revival of interest in Joseph Schumpeter’s works that took place from the 1960s onwards, have been much less optimistic in this regard. According to these writers, there is nothing automatic about technological catch-up. It requires considerable effort and organizational and institutional change to succeed (Ames and Rosenberg, 1963). A central theme in the literature on the subject concerns the various “capabilities” that firms, industries, and countries need to generate in order to escape the low development trap. However, to allow for long-run growth in GDP per capita, Solow (1956) added an exogenous term, labeled “technological progress.” In this interpretation, technology—or knowledge—is a “public” good, that is, something that is accessible for everybody free of charge. Solow did not discuss the implications of this for a multicountry world but subsequent research based on the neoclassical perspective took it for granted that if technology—or knowledge—is freely available in, say, the United States, it will be so at the global level as well. The following remark by one of the leading empirical researchers in the field is typical in this respect: “Because knowledge is an international commodity, I should expect the contribution of advances of knowledge (. . .) to be of about the same size in all the countries. . . ” (Denison, 1967, p. 282). On this assumption the neoclassical model of economic growth predicts that, in the long run, GDP per capita in all countries will grow at the same, exogenously determined rate of global technological progress. 
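The convergence prediction can be read off the model directly. A standard textbook sketch of the Solow setup (my notation, not the chapter's):

```latex
% Output with labor-augmenting technology A growing at exogenous rate g:
Y = K^{\alpha}\,(A L)^{1-\alpha}, \qquad \frac{\dot{A}}{A} = g
% In steady state, capital per effective worker K/(AL) is constant, so
% output per capita y = Y/L grows at the rate of technological progress:
\frac{\dot{y}}{y} = g
% If A is a public good freely available to every country, g (and hence
% long-run per capita growth) is the same everywhere.
```

On the public-good reading of A, every country shares the same long-run growth rate g, which is exactly the convergence claim that the capability literature discussed below disputes.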
Moreover, what came to be seen as the central prediction of the theory—that convergence between rich and poor countries should be expected—was shown not to be consistent with the facts either (Islam, 2003). In fact, the long-run trend since the Industrial Revolution has been towards divergence, not convergence, in productivity and income. For example, according to the economic historian David Landes, the difference in income or productivity per head between the richest and poorest country in the world has substantially increased over the last 250 years (Landes, 1998). Although different sources may give different estimates for this increase, the qualitative interpretation remains the same. As noted, this perspective on technology was later wholeheartedly adopted by standard neoclassical economics. Following that approach, knowledge should be seen as a body of information, freely available to all interested, that could be used over and over again (without being depleted). Obviously, if this is what knowledge is about, it should be expected to benefit everybody all over the globe to the same extent, and cannot be used to explain differences in growth and development. It is understandable, therefore, that the first systematic attempts to use knowledge to explain differences in economic development did not come from economics proper but from economic historians (many of whom came to look at knowledge or technology in a rather different way from the prevailing view in economics). Rather than something that exists in the public domain and can be exploited by anybody everywhere free of charge, technological knowledge, whether created through learning or organized R&D, is in this tradition seen as deeply rooted in the specific capabilities of private firms and their networks/environments, and hence not easily transferable. Compared with the traditional neoclassical growth theory discussed earlier these writers painted a much bleaker picture of the prospects for catch-up.
According to this latter view there is nothing automatic about catch-up: it requires a lot of effort and capability building on the part of the backward country. The concept of “social capability” soon became very popular in applied work. Nevertheless it is, as Abramovitz himself admitted, quite “vaguely” defined (Abramovitz, 1994a, p. 25), and this has left a wide scope for different interpretations. But although Abramovitz found it hard to measure, it is not true that he lacked clear ideas about what the concept was intended to cover. These are some of the aspects that he considered to be particularly relevant (Abramovitz, 1986, 1994a,b): technical competence (level of education); experience in the organization and management of large-scale enterprises; financial institutions and markets capable of mobilizing capital on a large scale; honesty and trust; and the stability of government and its effectiveness in defining and enforcing rules and supporting economic growth. Moreover, there is currently no agreement in the literature on how innovation systems should be defined and studied empirically. Some researchers in this area emphasize a need for developing a common methodology, based on the functions and activities of the system, to guide empirical work (Edquist, 2004; Johnson and Jacobsson, 2003; Liu and White, 2001), while others advocate the advantage of keeping the approach open and flexible (Lundvall, 2007). As discussed earlier, the concept of technological capability refers to the ability to develop, search for, absorb, and exploit knowledge commercially. An important element of this is what Kim (1997) termed “innovation capability.” There are several data sources that capture different aspects of this. For example, the quality of a country’s science base, on which invention and innovation activities to some extent depend, may be reflected in articles published in scientific and technical journals.
R&D expenditures measure some (but not all) resources that are used for developing new products or processes, while patents count (patentable) inventions coming out of that process. However, the impact of government’s actions on innovation activities and development outcomes may, as Abramovitz pointed out, also depend on the prevailing social values in society such as, for example, tolerance, honesty, trust, and civic engagement. Such values, facilitating socially beneficial, cooperative activities, are often seen as expressions of so-called “social capital” (Putnam, 1993; for an overview see Woolcock and Narayan, 2000). The fact that the type of factors taken up by the literature on social capital may matter for economic development is widely accepted. For instance, Kenneth Arrow pointed out more than three decades ago that “It can plausibly be argued that much of the economic backwardness in the world can be explained by lack of mutual confidence” (Arrow, 1972, p. 357). The problem is rather how to measure it. One possible source of information that has been exploited to throw some light on the issue is the “World Value Survey.” Knack and Keefer (1997) used such data to analyze the relationship between trust, norms of civic behavior, and membership in groups on the one hand and economic growth on the other for a sample of 29 (mostly developed) countries. However, the limited time and country coverage of these data has, until recently at least, precluded their extension to a sizeable part of the developing world. It also needs to be emphasized that technological capability in developing country firms is much more than R&D. As Bell and Pavitt (1993) have pointed out, most firms in developing countries innovate on the basis of a broad range of capabilities. These are, they argue, typically concentrated in the departments of maintenance, engineering, or quality control (rather than in, say, an R&D department). This does not mean, however, that R&D is unimportant.
For example, Kim (1980) emphasized the role of R&D efforts in firms’ ability to assimilate foreign technology, especially at more advanced stages of development. Also of great importance, according to Kim (1980), is dense interaction with other firms or organizations in the local environment—so-called “linkage” capabilities in the terms of Lall (1992)—which may help to unlock the internal constraints on innovation that often prevent developing-country firms with insufficient internal technological capabilities from succeeding in their endeavors. The “capabilities” literature summarized above has mostly focused on the catch-up experience of individual countries (e.g., Lall and Urata, 2003). From these individual country histories, it appears that there is no single answer to the question of which channels are most important for sourcing knowledge from abroad. In Asia, Japan is the earliest example of a successful catching-up country. Industrialization in Japan started in the latter half of the nineteenth century, but a significant break in the process occurred with World War II. Goto and Odagiri (2003) describe how, in the postwar phase, the Japanese sourced technology mainly by importing capital goods, licensing technology (and other forms of alliances) from Western firms, reverse engineering, and the use of trade missions and other forms of intelligence targeted at learning about foreign technology. In summary, Japan acquired advanced foreign technology through all channels except for inward FDI (Goto and Odagiri, 2003, p. 89). The most famous examples of countries that managed to escape the low development trap and raise their standards of living towards developed country levels relatively quickly were far from being passive adopters of new developed-country technologies.
On the contrary, countries such as Korea, Taiwan, and Singapore, which were among the prime success stories, placed great emphasis on generating what later became known as “technological capabilities” through a concerted effort by public and private sector actors, and apparently it paid off handsomely. Why were such activist development strategies, which contradicted much conventional wisdom, seemingly so much more successful than the “hands-off” approach advocated by leading authorities and institutions such as the IMF and the World Bank (what is often called the “Washington consensus”)? These were some of the questions that gradually became more central to the agendas of politicians, development experts, and economists through the closing decades of the millennium and the beginning of the next, and, as we have shown, this led to the emergence of new theories, approaches, and evidence. Arguably, the process started back in the 1950s, when economic historians began to analyze actual catching-up processes and came up with generalizations that were far from the liberal “hands-off” approach in favor among economists. As a consequence, a stream of research emerged, mainly among economic historians and economists with a more heterodox leaning, that focused on “capability building” of various sorts as essential for development processes. This way of looking at things gained momentum during the 1980s and 1990s as the success of the Asian tigers (and Japan before that) became more widely recognized and studied. The term “technological capability,” originally developed as a tool for analyzing the Korean case, gradually became more widely used among students of development processes, and a large amount of research emerged using this approach to understand the performance of firms, industries, and countries in the developing part of the world.
It is fair to say, however, that in spite of these developments, many economists continue to be unconvinced by the “capability” approach, maybe because it is seen as a meso or macro approach lacking proper micro foundations, theoretically as well as empirically. However, it is precisely on this point that research is growing most strongly today, in the form of a massive data-gathering effort on innovation activities in developing countries, and analyses based on these new sources of information. These new developments, which follow similar efforts in the developed part of the world (particularly Europe) from the 1990s onwards, have vividly demonstrated that the “high-tech” approach to innovation which has framed much thinking and policy advice on the subject is strongly misleading when it comes to understanding the relationship between innovation and development. In fact, the evidence shows that innovation is quite widespread among developing country firms, is associated with higher productivity (e.g., development) and, as in the developed part of the world, is dependent on a web of interactions with other private and public actors. This is not to say that innovation in developed and developing countries is identical in every respect, but in qualitative terms innovation is found to be a powerful force of growth in both, and therefore an issue that it is imperative to understand better, theoretically as well as empirically.

Chapter 21: Energy, the Environment, and Technological Change (Popp, Newell, and Jaffe)

Popp (2006a) considers the long-run welfare gains from both an optimally designed carbon tax (one equating the marginal benefits of carbon reductions with the marginal costs of such reductions) and optimally designed R&D subsidies. Popp finds that combining both policies yields the largest welfare gain. However, a policy using only the carbon tax achieves 95% of the welfare gains of the combined policy, while a policy using only the optimal R&D subsidy attains just 11% of the welfare gains of the combined policy in his model. In contrast to Schneider and Goulder, R&D policy has less effect in this study, as the subsidies only apply to the energy sector. In a similar exercise, Gerlagh and van der Zwaan (2006) find an emissions performance standard to be the cheapest policy for achieving various carbon stabilization goals. They note that, like a carbon tax, the emissions performance standard directly addresses the environmental externality. In addition, like a renewable subsidy, the emissions performance standard stimulates innovation in a sector with high spillovers. In comparing the results of these two papers, Gerlagh and van der Zwaan note that the ordering of policies depends on the assumed returns to scale of renewable energy technologies. Fischer and Newell assume greater decreasing returns to renewable energy, due to the scarcity of appropriate sites for new renewable sources. Thus, an important question raised by Gerlagh and van der Zwaan is whether the cost savings from innovation will be sufficient to overcome decreasing returns to scale for renewable energy resulting from limited space for new solar and wind installations. Environmental policies can be characterized as either uniform “command-and-control” standards or market-based approaches. Market-based instruments are mechanisms that encourage behavior through market signals rather than through explicit directives regarding pollution-control levels or methods.
Such regulations allow firms flexibility to choose the least-cost solutions to improved environmental performance. In contrast, conventional approaches to regulating the environment are often referred to as “command-and-control” regulations, since they allow relatively little flexibility in the means of achieving goals. These regulations tend to force firms to take on similar magnitudes of the pollution-control burden, regardless of the cost. Command-and-control regulations do this by setting uniform standards for firms. The most commonly used types of command-and-control regulation are performance- and technology-based standards. A performance standard sets a uniform control target for firms (e.g., emissions per unit of output), while allowing some latitude in how this target is met. Technology-based standards specify the method, and sometimes the actual equipment, that firms must use to comply with a particular regulation. [...] ultimately unachievable, leading to political and economic disruption (Freeman and Haveman, 1972). Technology standards are particularly problematic, since they can freeze the development of technologies that might otherwise result in greater levels of control. Under regulations that are targeted at technologies, as opposed to emissions levels, no financial incentive exists for businesses to exceed control targets, and the adoption of new technologies is discouraged. However, there is still an incentive for equipment cost reduction. Under a “Best Available Control Technology” (BACT) standard, a business that adopts a new method of pollution abatement may be “rewarded” by being held to a higher standard of performance and thereby not benefit financially from its investment, except to the extent that its competitors have even more difficulty reaching the new standard (Hahn and Stavins, 1991). On the other hand, if third parties can invent and patent better equipment, they can—in theory—have a ready market.
Under such conditions, a BACT type of standard can provide a positive incentive for technology innovation. In contrast with such command-and-control regulations, market-based instruments can provide powerful incentives for companies to adopt cheaper and better pollution-control technologies. This is because with market-based instruments, it always pays firms to clean up a bit more if a sufficiently low-cost method (technology or process) of doing so can be identified and adopted. The advantages of market-based policies are particularly true for flexible policies that allow the innovator to identify the best way to meet the policy goal. For instance, a carbon tax allows innovators to choose whatever technologies best reduce carbon emissions, whereas a tax credit for wind power focuses innovative efforts on wind power at the expense of other clean energy technologies. One significant caveat with estimated learning rates is that they typically focus on correlations between energy technology usage and costs, rather than causation. Recent papers by Klaassen et al. (2005), Söderholm and Sundqvist (2007), and Söderholm and Klaassen (2007) attempt to disentangle the separate contributions of R&D and experience by estimating “two-factor” learning curves for environmental technologies. These two-factor curves model cost reductions as a function of both cumulative capacity (LBD) and R&D (learning-by-searching, or LBS). To be comparable with the notion of cumulative capacity, in these models R&D is typically aggregated into a stock of R&D capital. Thus, endogeneity is a concern, as we would expect both investments in capacity to be a function of past R&D expenditures and R&D expenditures to be influenced by capacity, which helps determine demand for R&D. Söderholm and Sundqvist address this endogeneity in their paper and find LBD rates around 5%, and LBS rates around 15%, suggesting that R&D, rather than LBD, contributes more to cost reductions.
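The mechanics of a two-factor learning curve are simple to sketch: regress log unit cost on log cumulative capacity and log R&D stock, then convert each elasticity into a "learning rate" (cost reduction per doubling). The data, elasticities, and rates below are entirely invented for illustration, and, as the text notes, a real study like Söderholm and Sundqvist's must also confront the endogeneity of both regressors, which plain OLS ignores.

```python
import numpy as np

# Illustrative two-factor learning curve (all numbers invented):
#   cost_t = A * capacity_t^(-a) * rd_stock_t^(-b)
# estimated by OLS in logs. 'a' captures learning-by-doing (LBD),
# 'b' learning-by-searching (LBS).
rng = np.random.default_rng(0)
t = np.arange(30)
cum_capacity = 100.0 * 1.2**t                       # cumulative capacity, +20%/yr
rd_stock = np.cumsum(rng.uniform(5.0, 20.0, 30))    # cumulative R&D "stock"
true_a, true_b = 0.08, 0.22
cost = 50.0 * cum_capacity**-true_a * rd_stock**-true_b \
       * np.exp(rng.normal(0.0, 0.005, 30))         # unit cost with small noise

# OLS: log cost = c0 - a*log(capacity) - b*log(R&D stock)
X = np.column_stack([np.ones(30), np.log(cum_capacity), np.log(rd_stock)])
coef, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
a_hat, b_hat = -coef[1], -coef[2]

# "Learning rate" = fractional cost reduction per doubling of the factor
lbd_rate = 1.0 - 2.0**-a_hat
lbs_rate = 1.0 - 2.0**-b_hat
print(f"LBD elasticity {a_hat:.3f}, rate per doubling {lbd_rate:.1%}")
print(f"LBS elasticity {b_hat:.3f}, rate per doubling {lbs_rate:.1%}")
```

Because cumulative capacity and the R&D stock both trend upward, the two log regressors are highly collinear in practice, which is one reason the estimates in this literature are so sensitive to specification.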
However, these results are very sensitive to the model specification, illustrating the difficulty of sorting through the various channels through which costs may fall over time. To further address the problems associated with estimating and interpreting learning curves, Nemet (2006) uses simulation techniques to decompose cost reductions for PV cells into seven categories. Plant size (e.g., returns to scale), efficiency improvements, and lower silicon costs explain the majority of cost reductions. Notably, most of the major improvements in efficiency come from universities, where traditional learning by doing through production experience would not be a factor. Learning from experience (e.g., through increased yield of PV cells) plays a much smaller role, accounting for just 10% of the cost decreases in Nemet’s sample. While research on the various sources [...]

Until now, we have focused primarily on the incentives faced, and activities conducted, by private firms. However, governments also play an important role in energy R&D. The US Department of Energy (DOE) spent about $4 billion on energy R&D in 2007 (Newell, 2008a). This government investment plays several roles, each of which offers challenges to economists focusing on environmental innovation. First, note that government R&D can help to compensate for underinvestment by private firms. Unlike firms, the government is in a position to consider social returns when making investment decisions. In addition, government R&D tends to have different objectives than private R&D. Government support is particularly important for basic R&D, as long-term payoffs, greater uncertainty, and the lack of a finished product at the end all make it difficult for private firms to appropriate the returns of basic R&D. Thus, the nature of government R&D is important.
For example, Popp (2002) finds that government energy R&D served as a substitute for private energy R&D during the 1970s, but as a complement to private energy R&D afterwards. One explanation given for the change in impact is the changing nature of energy R&D. During the 1970s, much government R&D funding went to applied projects such as the effort to produce synfuels. Beginning with the Reagan administration, government R&D shifted toward a focus on more basic applications. The analyses that have been conducted of US federal research relating to energy and the environment have come to mixed conclusions. Cohen and Noll (1991) documented the waste associated with the breeder reactor and synthetic fuel programs in the 1970s, but in the same volume Pegram (1991) concluded that the photovoltaics research program undertaken in the same time frame had significant benefits. More recently, the US National Research Council attempted a fairly comprehensive overview of energy efficiency and fossil energy research at DOE over the last two decades (National Research Council, 2001). Using both estimates of overall return and case studies, they concluded, as one might expect, that there were only a handful of programs that proved highly valuable. Their estimates of returns suggest, however, that the benefits of these successes justified the overall portfolio investment. In general, one would expect government R&D to take longer to have an observable effect on outcomes than private R&D, as it is further upstream from the final commercialized product. At the same time, both private and public R&D are driven by the same demand-side influences, such as energy prices and environmental policy. This makes disentangling the effect of each difficult. However, measuring the impact of government R&D is important for modeling environmental policy. Economic theory suggests that a wedge should exist between social and private returns to R&D. 
Government R&D aims, at least in part, to close this gap. However, there is little empirical evidence specifically on the returns to government R&D, or on the extent to which government R&D effectively closes this gap. This is due, in part, to the nature of government projects, which are often more basic and long term in nature, making estimating returns difficult. Given this, estimating the gap between private and social rates of return that exists after accounting for both private and public energy R&D spending is an important area for future research. Using a hazard model, Snyder et al. look at both the adoption and exit decisions of chlorine plants. They find that increases in the percentage of plants using membrane technology come partially from adoption, but primarily from shutdowns of older plants. Environmental regulation does not have a statistically significant effect on adoption of membrane technology. However, the passage of more stringent regulations over time does appear to hasten the shutdown of older facilities, thus increasing the share of plants using membrane technology. In general, firms can choose one of two strategies to comply with environmental regulations. End-of-the-pipe abatement reduces emissions by using add-on technologies to clean the waste stream coming from a plant. In contrast, cleaner production methods reduce emissions by generating less pollution in the production process. Frondel et al. (2007) look at the factors influencing the choice of one strategy over the other. They find that many plants in OECD nations make use of cleaner production methods. However, environmental regulations are more likely to lead to the adoption of end-of-the-pipe techniques. In contrast, market forces such as cost savings or environmental audits lead to the adoption of cleaner production processes.
In addition to economic incentives, direct regulation, and information provision, some research has emphasized the role that “informal regulation” or community pressure can play in encouraging the adoption of environmentally clean technologies. For example, in an analysis of fuel adoption decisions for traditional brick kilns in Mexico, Blackman and Bannister (1998) suggest that community pressure applied by competing firms and local nongovernmental organizations was associated with increased adoption of cleaner fuels, even when those fuels had relatively high variable costs. Popp et al. (2008) find that consumer concerns over dioxin found in the wastewater of pulp manufacturers helped spur the adoption of low-chlorine and chlorine-free bleaching techniques at pulp plants, even before regulations requiring such techniques took effect. An important difference in the technological choice here is that chlorine use not only has negative environmental impacts near the production site, but also affects the quality of the final product. Consumer concerns are more likely to be an issue when environmental choices affect product quality, such as chlorine in paper products or lead paint in children’s toys. Nonetheless, diffusion of environmental technologies, particularly to developing countries, is currently one of the most pressing environmental concerns. Much of this concern stems from the need to address climate change while allowing for economic development. Rapid economic growth in countries such as China and India not only increases current carbon emissions from these countries, but results in high emission growth rates from these countries as well. In 1990, China and India accounted for 13% of world CO2 emissions. By 2004, that figure had risen to 22%, and it is projected to rise to 31% by 2030 (Energy Information Administration, 2007). Similarly, Popp et al.
(2008) show that pulp and paper manufacturers respond to the demands of consumers in key export markets when adopting cleaner paper bleaching techniques. Finally, Medhi (2008) finds that Korean automotive manufacturers first incorporated advanced emission controls into their vehicles to satisfy regulatory requirements in US and Japanese markets. It was only after fitting these technologies into their vehicles that the Korean government passed its own regulations requiring advanced emission controls. Finally, in developing country settings, the factors inducing adoption of environmentally friendly technology may differ from those that are important in developed countries. Blackman and Kildegaard (2003) study the adoption of three clean leather tanning technologies in Mexico. They use original survey data on a cluster of small- and medium-scale leather tanneries in León, Guanajuato, noting that small- and medium-scale enterprises often dominate pollution-intensive industries in developing countries. To explain the adoption of each tanning technique, they estimate a system of multivariate probit models. They find that a firm’s human capital and stock of technical information influence adoption. They also find that private-sector trade associations and input suppliers are important sources of technical information about clean technologies. In contrast to results typically found in developed countries, neither firm size nor regulatory pressure is correlated with adoption. Finally, several papers have looked at the intersection of politics and technology transfer. Fredriksson and Wollscheid (2008) study the adoption of cleaner steel production technologies across countries, measured by the percentage of steel produced using electric arc furnaces. While stricter environmental policy does encourage greater adoption of cleaner techniques, they surprisingly find that adoption of cleaner technologies is greater in countries with more corruption.
They argue that firms in honest countries underinvest in technology in order to convince regulators to keep environmental standards weak. In corrupt countries, firms can invest in better technologies, and instead use bribes to weaken environmental regulations. Other examples in energy-economic modeling include Dowlatabadi (1998) and the US Energy Information Administration’s NEMS model (Energy Information Administration, 2003). The empirical evidence presented in Section 3.1 suggests that the price-inducement form of technological change has merit as a partial explanation; higher energy prices clearly are associated with faster improvements in energy efficiency. However, the reduced-form approach largely has been passed over for the R&D- or learning-induced technological change methodologies. In many models, the degree to which spillovers and crowding out arise is a complex interaction among underlying assumptions about model structure and distortions in the R&D market. Yet, these assumptions have important ramifications for the total cost of a climate policy as well as the conclusions drawn about the degree to which estimates based on exogenous technology assumptions are biased. There is only a small empirical and conceptual literature to guide assumptions about the degree of crowding out, primarily on the elasticity of the science and engineering workforce in relation to greater R&D incentives (David and Hall, 2000; Goolsbee, 1998; Wolff, 2008). A third challenge for estimating the effects of environmental technological change is the role of government R&D, particularly with respect to environmentally friendly energy R&D. Government R&D is particularly important for energy, where many technologies are still years from being commercially viable. The combination of long-term payoffs and high uncertainty makes government R&D a popular policy choice.
However, there is little research evaluating the effectiveness of these programs, making this a fruitful topic for technological change scholars interested in doing research on environmental topics.

Chapter 22: The Economics of Innovation and Technical Change in Agriculture (Pardey, Alston, and Ruttan)

Skipped. Here is the abstract:

Innovation in agriculture differs from innovation elsewhere in the economy in several important ways. In this chapter we highlight differences arising from (a) the atomistic nature of agricultural production, (b) the spatial specificity of agricultural technologies and the implications for spatial spillovers and the demand for adaptive research, and (c) the role of coevolving pests and diseases and changing weather and climate giving rise to demands for maintenance research, and other innovations that reduce the susceptibility of agricultural production to these uncontrolled factors. These features of agriculture mean that the nature and extent of market failures in the provision of agricultural research and innovation differ from their counterparts in other parts of the economy. Consequently, different government policies are implied, including different types of intellectual property protection and different roles of the government in funding and performing research. Informal innovation and technical discovery processes characterized agriculture from its beginnings some 10,000 years ago, providing a foundation for the organized science and innovation activities that have become increasingly important over the past century or two. This chapter reviews innovation and technical change in agriculture in this more-recent period, paying attention to research institutions, investments, and intellectual property. Special attention is given to issues of R&D attribution, the nature and length of the lags between research spending and its impacts on productivity, and various dimensions of innovation outcomes, including rates of return to agricultural research and the distribution of benefits.

Chapter 23: Growth Accounting (Hulten)

It is well established that R&D expenditures have a positive rate of return and that they are the source of much product and process innovation (see Griliches, 2000, for a survey) and company valuation (Hall, 1993a,b). What qualitative truth does growth accounting reveal? This, of course, depends on the country, the sector, and the time period of the analysis. For the United States, BLS estimates for the US private business sector show that output per unit of labor grew at an average annual rate of 2.5% per year over the period 1948–2007. At this rate, the level of output per worker more than quadrupled, a stellar performance considering the length of the period involved and the fact that output per worker is one of the key factors that determine the standard of living. What accounts for this success? BLS estimates indicate that somewhat more than half (58%) of the increase was due to the growth in MFP and the balance to input growth. Within the latter, there was a shift in the composition of capital toward information and communications technology (ICT) equipment. Growth accounting also reveals that the growth rate of Europe over recent years was only half that of the United States. This result comes from the analysis of the EU-KLEMS data set for the period 1995–2005 by van Ark et al. (2008), which reveals that output per hour worked in the market economies of the 15 countries in the European Union grew at an average annual rate of 1.5%, while the corresponding rate in the United States was 3.0%. Moreover, the drivers of growth were quite different: MFP explained about one-half of the US growth rate, but only one-fifth of the EU rate. EU growth relied more heavily on the growth of capital per hour worked, and within capital, more heavily on non-ICT capital. These two comparisons, BLS and EU/US, are based on a concept of capital that excludes intangible assets like R&D, brand equity, and organizational capital.
As noted in Section 3, adding these intangibles to the growth account for the US changes the picture substantially. Corrado et al. (2009) report that the inclusion of intangibles increases the growth rate of output per hour in the US nonfarm business sector by 10% for the 1995–2003 period. This is a small overall effect, but the role of MFP as a driver of growth changes significantly, moving from 50% without intangibles to 35% when they are included. The role of ICT capital is also diminished, and intangible capital is found to account for more than a quarter of growth. A similar pattern is found in the United Kingdom during roughly the same period, though the contribution of MFP is smaller both with and without intangibles (Marrano et al., 2009). Fukao et al. (2009) find that the introduction of intangibles also matters in Japan’s growth accounts, though tangible capital is by far the most important source of growth, and the contribution of MFP growth is quite low. As with the EU versus US comparison, different countries exhibit different patterns of growth.
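The decomposition behind all of these "share of growth due to MFP" figures is the standard growth-accounting identity: MFP growth is the residual left after subtracting share-weighted input growth from output growth. A minimal sketch, using invented numbers rather than the BLS or EU-KLEMS figures cited above:

```python
# Growth-accounting identity (Solow residual):
#   g_MFP = g_Y - s_K * g_K - s_L * g_L
# where s_K and s_L are factor income shares. All numbers below are
# illustrative, not the estimates discussed in the chapter.

def mfp_growth(g_output, g_capital, g_labor, capital_share):
    """Output growth not explained by share-weighted input growth."""
    labor_share = 1.0 - capital_share  # constant returns to scale assumed
    return g_output - capital_share * g_capital - labor_share * g_labor

# Hypothetical economy: 3.0% output growth, 4.0% capital growth,
# 1.0% labor growth, capital income share of one-third.
g_y = 0.030
g_mfp = mfp_growth(g_y, 0.040, 0.010, 1.0 / 3.0)
print(f"MFP growth: {g_mfp:.2%}")
print(f"Share of output growth due to MFP: {g_mfp / g_y:.0%}")
```

Adding intangible capital, as Corrado et al. do, amounts to putting more of the growth inside the share-weighted input terms, which mechanically shrinks the residual attributed to MFP.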

Chapter 24: Measuring the Returns to R&D (Hall, Mairesse, Mohnen)

R&D expenditures may differ in type, but their object is always to increase the stock of knowledge in order to find new applications and innovations. A distinction is usually made between basic research, applied research, and development, according to how close the research is to commercial applications. In general, the closer it is, the larger the expenditure share devoted to it. Similarly, a distinction is made between R&D directed toward invention of new methods of production (process R&D) and R&D directed toward the creation of new and improved goods (product R&D). Before continuing, we would like to caution the reader that the “return” to R&D is not an invariant parameter, but the outcome of a complex interaction between firm strategy, competitor strategy, and a stochastic macroeconomic environment, much of which is unpredictable at the time a firm chooses its R&D program. Therefore, there is no reason to expect estimates of the ex post returns to be particularly stable over time or across sectors or countries. And in the case of social returns, they are not even tied to the cost of capital. However, these estimates can still be useful for making comparisons between various financing systems, sectors, or countries, and can also be a guide to policy-making toward R&D. Nevertheless, keep in mind that the measurement process is not a search for a “scientific constant.” On the whole, although the studies are not fully comparable, it may be concluded that R&D rates of return in developed economies during the past half century have been strongly positive and may be as high as 75% or so, although they are more likely to be in the 20–30% range. Looking at these studies, we also confirm two findings made earlier about the R&D elasticity: the estimated returns tend to decrease and become less significant when sector indicators are introduced and when the returns to scale are not constrained to be constant.
We find that estimates based on industry data are generally quite close to those obtained from firm data. Finally, studies based on plant or establishment data produce results similar to those obtained with firm data, not surprisingly, since they are invariably forced to use firm-level R&D data due to the lack of disaggregated data on R&D. Given the presence of “within firm” spillovers, it is not even clear that disaggregation would be useful. The only exception is Clark and Griliches (1984), who have line of business data on R&D, and even they report rates of return similar to the lower ones obtained at the firm level. A prime example of the case study approach is the pathbreaking paper by Griliches (1958) on the calculation of the social rate of return to research in hybrid corn. He adds up all private and public R&D expenditures on hybrid corn between 1910 and 1955, cumulated to 1955 using an external interest rate of 10%, and compares them to the net social returns over that period, cumulated to 1955, plus the projected future returns, where the net returns are assumed to be equal to the value of the increase in corn production with a price change adjustment. He arrives at a perpetual annuity of returns of $7 per dollar spent on R&D, or, equivalently, an internal rate of return equalizing R&D expenditures and net social returns of 35–40%. Mansfield et al. (1977) compute the private and social internal rates of return of 17 industrial innovations. Private benefits are measured by the profits to the innovator, net of the costs of producing, marketing, and carrying out the innovation, and net of the profits the innovator would have earned on products displaced by the innovation, with an adjustment for the unsuccessful R&D.
Social benefits are obtained by adding to the private benefits the change in consumer surplus arising from the possible price reduction and the profits made by imitators, and by subtracting the R&D costs toward the same innovation incurred by other firms as well as possible environmental costs. However, such case studies tend to focus on “winners,” innovations that have been successful, and may therefore undercount the full cost of drilling the dry holes that was also necessary before these innovations took place. That is, given the uncertainty of outcomes, not all research projects will lead to success, and those that do will need to earn a high rate of return to cover the ones that fail. So there is a role for aggregate analysis, even though it can be difficult to tease out the effects of R&D from other factors. As we alluded to earlier, one difficulty is that unlike the private returns case there is no “cost of capital” that provides a focal point for these returns. In addition, many of the dual estimates are obtained without time effects, and to some extent this may bias the external R&D coefficient upwards. In general, the rates of return obtained using the dual approach are somewhat higher than the others. In spite of the revealed complexity of the problem, we have learned something about the rates of return to R&D. They are positive in many countries, and usually higher than those to ordinary capital. The adjustment costs are also greater than those for ordinary capital. The depreciation rates appear to vary across industrial sectors, probably reflecting the nature of competition and the ease of appropriability. When the production function is estimated in first-differenced form, there is a very substantial downward bias to the R&D coefficient that can be mitigated by imposing constant returns or performing GMM-SYS estimation.
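The Griliches-style internal rate of return calculation can be sketched numerically: find the discount rate at which the present value of an R&D cost stream equals the present value of the later social benefit stream. This is a minimal illustration with made-up cash flows, not the actual hybrid corn data or Griliches's own code.

```python
# Sketch of an internal-rate-of-return calculation of the kind used in
# Griliches (1958): the IRR is the discount rate that equates the present
# value of R&D outlays with the present value of later social benefits.
# All numbers below are hypothetical, for illustration only.

def npv(rate, flows):
    """Net present value of (year, amount) cash flows, discounted to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in flows)

def internal_rate_of_return(flows, lo=0.0, hi=2.0, tol=1e-6):
    """Bisection for the rate where NPV crosses zero.

    Assumes NPV is positive at `lo` and negative at `hi`, which holds for a
    conventional profile of early costs followed by later benefits."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: R&D outlays of 100 per year for 10 years,
# then net social benefits of 400 per year for the following 20 years.
flows = [(t, -100.0) for t in range(10)] + [(t, 400.0) for t in range(10, 30)]
irr = internal_rate_of_return(flows)  # about 0.17 for these numbers
```

Note that this single-number summary hides the point made above: ex post IRRs computed this way will wander across sectors and periods with the benefit stream, which is exactly why the chapter warns against treating them as a "scientific constant."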
As to social returns, these are almost always estimated to be substantially greater than the private returns, and often to be quite asymmetric among trading partners and industries. In addition, most estimates for public (government-funded) R&D suggest that it is less privately productive than private R&D, as it should be, given that it targets goals that either do not show up in conventional GDP or have substantial positive externalities.

Chapter 25: Patent Statistics as an Innovation Indicator (Nagaoka, Motohashi, Goto)

However, caveats are in order. Not all patents represent innovation, nor are all innovations patented. First, the value of patents is highly skewed: there are a small number of highly valuable patents and a large number of patents with little value. Scherer and Harhoff (2000) showed that the most valuable 10% of patents account for more than 80% of the value of all patents, based on their survey of German patents. According to the Japan Patent Office (JPO) survey, more than 60% of patents are neither used internally nor licensed out. Firms often use patents strategically, for instance taking out patents on inventions simply to block other firms’ patents or to deter entry. In the United States, only around 2.2% of inventions involved international coinventions in the 1980s, but this increased to around 8.3% in the 2000s. It also varies significantly among these five countries. In the case of Japan, only 1.5% of inventions involve international coinventions. On the other hand, in the case of the United Kingdom, more than 12% of them involve international coinventions. It is found that around half of patents are not used, either internally or by licensing to other firms. In drugs, this figure is as high as 63%. In this industry, R&D takes as long as 10–15 years for a new drug to be introduced into the market. Therefore, there are a substantial number of patents still in the process of R&D and not yet used for a drug on the market. In the framework of Figure 10, the numbers in Table 5 mean only half of the patents are directly used for in-house production and sales activities. Of course there are other reasons that a firm holds unused patents. Some of them are held in the hope that they may be used in the future. More than half of unused patents are kept as blocking patents, in the sense of preventing other firms from using the technology.
Others may be kept because a firm needs them for future licensing negotiations, particularly in the electronics industry where cross-licensing is relatively common (Hall and Ziedonis, 2001). The value of a patent for successful pharmaceutical products can be over one billion dollars. However, this kind of patent is only a very small fraction of the millions of patents. Therefore, just counting the number of patents of a firm or country without paying attention to their value can be misleading. The value of a patent consists of two parts: (1) the value of the invention per se and (2) the value of the patent right, in the sense of the incremental value of patenting the invention (Hall, 2009). However, it is difficult to separate these two parts empirically. Arora et al. (2008) is a rare attempt to estimate the latter value (the “patent premium”). They find that, averaged over all inventions, the value of patenting amounts to a 40% discount on the value of the invention, since the drawbacks of patenting, such as information disclosure, outweigh the benefits of protection; this is why a firm does not patent all its inventions. However, if only patented inventions are included, the patent premium is estimated to be 47% on average. It is also found that the patent premium increases with firm size and is particularly large for medical instruments, biotechnology, and drugs.
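The skewness finding can be made concrete with a small sketch. Assuming patent values are lognormally distributed (a common modeling choice, not the chapter's own method), the share of total value held by the top decile has a closed form via the lognormal Lorenz curve, 1 − Φ(Φ⁻¹(p) − σ); a dispersion of roughly σ = 2.2 reproduces the "top 10% hold about 80%" pattern that Scherer and Harhoff report.

```python
import math

# Illustrative model of patent-value skewness: if log-values are normal
# with standard deviation sigma, the share of total value held by patents
# above the p-quantile is 1 - Phi(Phi^-1(p) - sigma). The lognormal
# assumption and the sigma value are modeling choices for illustration.

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (phi is increasing)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def top_share(p, sigma):
    """Share of total value held by patents above the p-quantile."""
    return 1.0 - phi(phi_inv(p) - sigma)

# With dispersion sigma = 2.2, the top decile holds roughly 82% of total
# value, close to the Scherer-Harhoff survey evidence.
share = top_share(0.9, sigma=2.2)
```

The practical implication is the one the chapter draws: with a tail this heavy, raw patent counts are a poor proxy for the value of a firm's or a country's patent portfolio.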

Chapter 26: Using Innovation Surveys for Econometric Analysis (Mairesse, Mohnen)

Read, but no paragraphs highlighted. Here is the abstract:

After presenting the history, the evolution and the content of innovation surveys, we discuss the characteristics of the data they contain and the challenge they pose to the analyst and the econometrician. We document the two uses that have been made of these data: the construction of scoreboards for monitoring innovation and the scholarly analysis of various issues related to innovation. In particular, we review the questions examined and the results obtained regarding the determinants, the effects, the complementarities, and the dynamics of innovation. We conclude by suggesting ways to improve the data collection and their econometric analysis.

Chapter 27: Systems of Innovation (Soete, Verspagen, ter Weel)

List’s recognition of the interdependence of tangible and intangible investments has a decidedly modern ring to it. He was probably the first economist to argue consistently that industry should be linked to the formal institutions of science and education: “There scarcely exists a manufacturing business which has no relation to physics, mechanics, chemistry, mathematics, or to the art of design, etc. No progress, no new discoveries and inventions can be made in these sciences by which a hundred industries and processes could not be improved or altered” (p. 162). His book entitled The National System of Political Economy might just as well have been called The National System of Innovation. List’s main concern was with the problem of how Germany could overtake England. For underdeveloped countries (as Germany then appeared relative to England), he advocated not only protection of infant industries but a broad range of policies designed to accelerate or to make possible industrialization and economic growth. Most of these policies were concerned with learning about new technology and applying it. In this sense List anticipated and argued in accordance with contemporary theories of “national systems of innovation.” In fact, the role of the Prussian state in technology catch-up in the mid-nineteenth century resembled very much that played by the Japanese state a couple of decades later, the Korean state a century later, or China today. At each time the coordinating role of the state was crucial, as was the emphasis on many features of the NSI which are at the heart of contemporary studies (e.g., education and training institutions, science, universities and technical institutes, user–producer interactive learning, and knowledge accumulation).
In short, the systems of innovation approach spells out quite explicitly the importance of the “systemic” interactions between the various components of inventions, research, technical change, learning, and innovation; the national systems of innovation approach brings to the forefront the central role of the state as coordinating agent. Its particular attractiveness to policymakers lies in the explicit recognition of the need for complementary policies, drawing attention to weaknesses in the system, while highlighting the national setting of most of those institutions. There have been many different definitions of NSIs. Freeman (1987) states that an NSI is “the network of institutions in the public and private sectors whose activities and interactions initiate, import, modify, and diffuse new technologies” (p. 1). Lundvall’s broad conceptualization of NSI includes “all parts and aspects of the economic structure and the institutional setup affecting learning as well as searching and exploring” (Lundvall, 1992, p. 12). Nelson (1993, p. 4) notes that the innovation system is “a set of institutions whose interactions determine the innovative performance of national firms” and the most important institutions are those supporting R&D efforts. Metcalfe (1995) states that the NSI is “that set of institutions which jointly and individually contribute to the development and diffusion of new technologies and which provides the framework within which governments form and implement policies to influence the innovation process. As such it is a system of interconnected institutions to create, store, and transfer the knowledge, skills, and artifacts, which define new technologies.
The element of nationality follows not only from the domain of technology policy but from elements of shared language and culture which bind the system together, and form the national focus of other policies, laws, and regulations which condition the innovative environment.” Edquist (1997) takes an even broader view of innovation systems as “all important economic, social, political, organizational, institutional, and other factors that influence the development, diffusion, and use of innovations” (p. 14). Freeman saw the novel and innovative forms of work organization in Japan and the associated work relations of the large companies as crucial elements in the growth process. Finally, Freeman puts strong emphasis on the conglomerate structure of Japanese industry, arguing that because of a lack of competition, large firms were able to internalize externalities that were associated with innovations in supply chains. Internalizing vertically is beneficial to provide workers with the right incentives and to prevent hold-up and shirking. It also yields an overview of the entire process of production, which makes implementation of new work modes and innovative production of intermediates easier. This fits the systems approach to production and innovation in which the efficiency of the feedback loops is important. Freeman’s contribution was followed a year later by a book edited by Dosi (1988) which included three chapters on the NSI concept by Freeman, Lundvall, and Nelson. The second theoretical building block is concerned with the nature of innovation, in particular with the distinction between incremental and radical innovations. Lundvall mainly stresses the incremental and cumulative nature of innovation: it mainly consists of small steps that result from the constant learning and searching by firms. The resulting process of incremental innovations is much more of a continuum than suggested by the distinction between invention, innovation, and diffusion.
An important dimension of this process is also the feedback between different actors, since each incremental innovation is at least partly a reaction to previous innovation by others who are active in the “system.” The third and final theoretical building block of Lundvall’s NSI concept is the factor of nonmarket institutions in the system. These take two major forms. The first is user–producer interaction. This is based in Lundvall’s earlier work (e.g., Lundvall, 1988), and is concerned with the exchange of information between users and producers. Although there is clearly a market relationship between those actors, the idea here is that the exchange of information on the use and production of the good or service goes beyond the pecuniary market exchange. Detailed user feedback leads producers to adapt their products (innovation). The second major form of nonmarket factors is formed by institutions. Institutions are understood as “regularities of behavior” that are largely historically determined and also have close linkages to culture (e.g., Johnson, 1992). Such institutions reduce uncertainty and volatility and provide stability to the actors in the system. This is an instance where the emphasis of the NSI literature on nonmarket relations is crucial. Nelson and Rosenberg (1993) sketch how “technology” (i.e., firms as opposed to universities) has often played a leading role in terms of setting the research agenda, also for university researchers and other scientists not working in commercial R&D labs. It follows that the particular ways in which the university system is set up (i.e., the relative contribution of private funds, incentives for promotion, the system of quality control, and so on) play a large role in determining how efficiently this system works.
Nelson’s narrower view, which focuses mainly on organizations that support R&D, contrasts with the broader view of Lundvall where those R&D-focused organizations are one part of the larger system (Edquist, 1997). A recent critique of the NSI, based on the United States, is provided by Hart (2009). Institutions are central to the NSI concept as they provide structure to, as well as insight into, the way in which actors (including organizations) behave within the system. Institutions in the broad sense are the habits and practices, or routines (as noted by Nelson and Winter, 1982), that shape the way things are done, how agents act and interact, and how innovation comes about and is perceived. For Edquist, organizations (which should not be confused with institutions) are the tangible and legally identifiable parts of the system that facilitate the innovation process through bringing actors together. Edquist and Johnson (2000, p. 50) present a taxonomy of the different types of institutions that matter for innovation systems. Their taxonomy distinguishes institutions on characteristics such as formal versus informal (where informal institutions extend to customs, traditions, and norms), basic (e.g., laying down basic arrangements on property rights, conflict management rules, etc.) versus supportive (the specific implementation of basic institutions), hard (binding, and policed) versus soft (more suggestive), and consciously or unconsciously designed. A common feature of all innovation systems is the fact that firms rarely if ever innovate alone. As “innovation scholars” had been at pains to point out for many years, there is a need for constant interaction and cooperation between the innovating firm and its external environment, which in the “optimal” case leads to a virtuous circle of better exploitation of available knowledge. As Nelson (1993, p.
10) noted: “to orient R&D fruitfully, one needs detailed knowledge of its strengths and weaknesses and areas where improvements would yield big payoffs and this type of knowledge tends to reside with those who use the technology, generally firms and their customers and suppliers. In addition, over time firms in an industry tend to develop capabilities . . . largely based on practice.” It is this interactive nature of innovation, combined with the nonmarket-based nature of the institutions that govern the interactions, that raises the possibility of “systemic failure,” or, in other words, a low innovation performance due to a lack of coordination between the parts of the system. As argued below, this is the main ingredient in the concept of NSI that leads to policy prescriptions that are different from a policy approach based on market failure as reviewed in Steinmuller (2010). When researchers live in areas with more extensive social networks and strong norms, venture capitalists are more likely to invest in risky projects. The empirical application to 102 regions in the EU-14 (a homogeneous set of countries that have operated under similar judicial and financial-economic regulation for some time now) reveals that social capital is an important determinant of innovation, which explains on average approximately 15% of the change in income per capita in the 102 EU regions between 1990 and 2002. The main implication of the national systems of innovation concept from the point of view of policy is that it provides a much broader foundation for policy as compared to the traditional market failure-based policy perspective. In the market failure-based perspective, every policy measure must be justified both by the identification of some form of market failure, and by an argument that explains how the policy can bring the system closer to its optimal state.
Government failure might be more serious than market failure, so not all market failures merit government intervention. In a systems view of innovation, markets do not play the overarching role of generating an optimal state. Instead, nonmarket-based institutions are an important ingredient in the “macro” innovation outcome. Due to the variety in such institutions, and due to the multidimensional nature of innovation, the innovation systems approach rejects the idea of an optimal state of the system as a target for policy to achieve. Innovation policy is, just like innovation, continuously on the run. This broad, almost philosophical outlook on policy has two major consequences for the foundations of actual policy measures. The first is that there is a broader justification of the use of policy instruments as compared to market failure-based policies. For example, R&D subsidies are linked in the market failure-based approach to a lack of incentives at the private level (firms). The subsidy instrument aims to lower private costs, thus bringing investment up to the level where social costs equal social benefits. In the systems approach, subsidies serve a more general purpose that includes influencing the nature of the knowledge base in firms and increasing absorptive capacity (e.g., Bach and Matt, 2005; David and Hall, 2000). Similarly, policies aimed at stimulating cooperation, for example between university and industry, would be motivated in the market failure-based approach by internalizing externalities, while in a systems approach, such policies could be aimed at influencing the distribution of knowledge, at achieving coordination (not provided by markets), or at increasing the cognitive capacity of firms. The second implication is that the government or policymaking body is part of the system itself, with its own aims and goals being endogenous. Therefore, policymakers have to function within the system itself, and this restricts them.
As a (mere) actor in the system, policymakers are unable to design the system in a top-down way. In the market failure-based approach, this would be featured as “policy failure,” that is, the impossibility of achieving a first-best welfare solution by solving market failures. From the systems point of view, policies are necessarily adaptive and incremental. They are, in many cases, specific to the system in which they are set and would be ineffective in other settings. Their potency lies in the indirect effects that they have throughout the system, but such repercussions are hard to predict precisely, and therefore policies must be experimental in nature (Metcalfe, 2005). The set of instruments for innovation systems policy includes all instruments that are traditionally the domain of science and technology policy, but also education policy. In addition, industrial policies and regional policies are important ingredients in innovation systems policies. We discuss this wider economic policy dimension of the NSI concept in the remainder of this section. The catching-up process of Taiwan, Korea, and other East Asian tigers took place in a time frame when the international protection of intellectual property was much weaker than it is today (e.g., Fagerberg et al., 2010 in this volume). Abramovitz, who could be described, next to List, as another precursor of system of innovation thinking, explained the successful catching up of Western Europe vis-à-vis the United States in the postwar period as the result of both increasing technological congruence and improved social capabilities. As an example of the former he mentioned explicitly how European economic integration led to the creation of larger and more homogeneous markets in Europe, hence facilitating the transfer of scale-intensive technologies initially developed for US conditions.
Improved social capabilities on the other hand were reflected in such other factors as the general increase in educational levels, the rise in the share of resources devoted to public and private sector R&D, and the success of the financial system in mobilizing resources for change. In a similar vein, the failure of many developing countries to exploit the same opportunities is commonly accounted for by their lack of technological congruence and missing social capabilities (e.g., the lack of a sound financial system, or a too low level, or unequal distribution, of education). The central point here is that concepts such as “technological congruence” and “social capability” are important policy notions which might be helpful in addressing the systemic “success” or “failure” of science, technology, and innovation policies. From this perspective, four factors appear today essential for the functioning of an NSI. First and foremost, there is the investment of the country in social and human capital: the cement, one may argue, that holds the knowledge and innovation systems together. It will be incorporated in a number of knowledge-generating institutions in the public as well as the private sector, such as universities, polytechnics, and other skills-training schools. It is the factor most explicitly acknowledged by Nelson. In combination with a low degree of labor mobility, it is also the factor which explains why, within a European context of nationally, sometimes regionally, organized education systems, one can still not talk about a European system of innovation (Caracostas and Soete, 1997). With the development of “new growth” models in the economics literature, the role of education and learning in continuously generating, replacing, and feeding new technology and innovation has of course received much more emphasis over the last decades.
An initial stock of human capital in a previous period is likely to generate innovation, growth, and productivity effects, downstream as well as upstream, with lots of “spillovers” and positive “externalities” (e.g., Lucas, 1988 and the overview by Jones and Romer, 2009). Higher education is itself crucial for the continuous feeding of fundamental and applied research. Many new growth models have tried to build in such impacts in a more complex fashion, giving prime importance not just to education itself, but also to its by-products such as research and innovation. The second central node of a system of innovation is hence, not surprisingly, the research capacity of a country (or region) and the way it is closely intertwined with the country’s higher education system. From a typical “national” innovation system perspective, such close interaction appears important; from an international perspective the links are likely to have become much looser, with universities and research institutions being capable of attracting talent worldwide. In most technology growth models, these first two nodes, higher education and research, form the essential “dynamo effects” (e.g., Dosi, 1988; Soete and Turner, 1984) or “yeast” and “mushroom” effects (e.g., Harberger, 1998) implicit in the notion of technological change. Accumulated knowledge and human capital act like “yeast” to increase productivity, while technological breakthroughs or discoveries suddenly “mushroom” to increase productivity more dramatically in some firms/sectors than others. The third “node” holding knowledge together within the framework of an NSI is, perhaps surprisingly, geographical proximity.
The regional clustering of industrial activities based on the close interactions between suppliers and users, involving learning networks of various sorts between firms and between public and private players, represents, as highlighted in Lundvall’s approach to national systems of innovation, a more flexible and dynamic organizational setup than the organization of such learning activities confined within the contours of individual firms. Local learning networks can allow for much more intensive information flows, mutual learning, and economies of scale among firms, private and public knowledge institutions, education establishments, etc. In a well-known study, Putnam (2000) compares the impact of Silicon Valley and Route 128 in the United States. He cites Silicon Valley in California, where a group of entrepreneurs, helped by research effort in the local universities, contributed to the development of a world center of advanced technology. As he puts it: “The success is due largely to the horizontal networks of informal and formal cooperation that developed among fledgling companies in the area” (Putnam, 2000). Today, and despite the advent of the Internet, this is still very much the case. In addition to human capital, research, and the related phenomenon of local networks, and particularly interfirm networking, the fourth and last notion essential to any innovation system approach brings one back to Abramovitz’s “absorptive capacity” notion and covers the demand factors that influence the take-up of innovations and hence the expected profitability on the part of the innovator. Consumers and, more broadly, national citizens might be more or less open to new designs, products, and ideas, enabling rapid diffusion, or very conservative, resistant to change, and suspicious of novelty. The demand factors among countries and regions (and even suburbs) vary dramatically, and they are likely to also influence the ability of companies to learn and take up innovations.
The four key elements described above can be thought of as elements of a virtual innovation system. Ideally, each one will mutually reinforce the others, providing an overall positive impact on a country or region’s competitiveness and sustainable growth path. By contrast, it is in the interactions between the four constituents that the systemic failures may be most easily identified. To illustrate the point, one may think of the Latin American case. In some of the larger countries, there is excellent tertiary education and research, but the graduates have tended in the past to take secure government lab jobs, which means that industry–public research links are weak. Research rarely flows to the private sector, but instead is targeted more toward the world research community. In short, the NSI literature broadens the scope and rationale for innovation policy, from specific policy fields and targets such as higher education, research, or innovation to the interactions between those fields. Targeting increases in R&D investment (a rather popular policy target: one may think of the so-called European 3% Barcelona target) while the supply of researchers is not being addressed, or worse, in the case of Europe, is likely to fall due to an aging population, is unlikely to yield the expected results. One immediate possible solution to this problem could be to encourage the immigration of highly educated people (the “blue card”), as the United States does with green cards. The concept of national systems of innovation is itself, however, under erosion from two sides. First of all, there is of course the emergence of various new sorts of knowledge “service” activities, allowing for innovation without the need for particular leaps in science and technology, something that has been referred to as “innovation without research” (Cowan and Van de Paal, 2000, p. 3). While in many ways not new, and reminiscent of Smith’s reference to inventors as “philosophers . . .
whose trade is not to do anything but to observe everything” as quoted above, innovation is now less linked to the typical manufacturing forward and backward linkages, but “fuelled,” so to say, by the Internet and broadband, by more open flows of information, raising of course many information-search problems as it is now confronted with impediments to accessing the existing stock of information that are created by intellectual property right laws. Second and closely related, the “national” perspective on an innovation system approach appears under pressure given the globalization trends and the inherent limits of national policymaking in an area which is increasingly borderless. With the rise in service activities, the notion of a primarily industrial research-based systems of innovation policy approach has become increasingly questioned (Freeman and Soete, 2009). Many authors already emphasized the changing nature of the innovation process itself in the 1990s. According to David and Foray (1995), innovation capability had to be seen less in terms of the ability to discover new technological principles, and more in terms of the ability to exploit systematically the effects produced by new combinations and uses of components in the existing stock of knowledge. Not surprisingly the new model appeared more closely associated with the emergence of various new sorts of knowledge “service” activities, implying to some extent, and in contrast to the Frascati R&D focus, a more routine use of the technological base, allowing for innovation without the need for particular leaps in science and technology, a feature somehow predating the industrial research lab of the twentieth century and something which had of course already been recognized by economic historians (Rosenberg, 1976, 1982).
This view brings into the debate the particular importance of science and technology service activities as it now puts a stronger emphasis on access to state-of-the-art technologies. This mode of knowledge generation, based in David and Foray’s (1995, p. 32) words “on the recombination and re-use of known practices”, does, however, raise much more extensive information-search problems as it is confronted with impediments to accessing the existing stock of information that are created by intellectual property right laws. Not surprisingly, at the organizational level the shift in the nature of the innovation process also implied a shift in the traditional locus of knowledge production, in particular the professional R&D lab. The old system was based on a relatively simple dichotomy. On the one hand there were the knowledge generation and learning activities taking place in professional R&D laboratories, engineering, and design activities, of which only the first part was measured through the Frascati Manual’s definition of R&D; on the other hand there were the production and distribution activities, where the basic economic principles of minimizing input costs and maximizing sales would prevail. This typical sector-based innovation system perspective is still very much dominant in many industrial sectors ranging from chemicals to motor vehicles, semiconductors, and electronic consumer goods, where technological improvements at the knowledge-generation end still appear today to proceed along clearly agreed-upon criteria and with a continuous ability to evaluate progress. The largest part of engineering research and development consists of the ability to “hold in place”: that is, to replicate at a larger industrial scale and to imitate experiments carried out in the research laboratory environment.
Most of the growth evidence of the last 10–15 years points to the particular importance of the international dimensions of knowledge accumulation in having brought about growth. This may be surprising in view of the particular attention given to European knowledge accumulation in the EU's Lisbon agenda, subsequently made explicit in the European Union's 3% R&D Barcelona target. Undoubtedly, and as emphasized by David and Foray (2002), the emerging digital technologies (in particular, easy and cheap access to broadband and the worldwide spread of the Internet and of mobile communication) have been instrumental in bringing about a more rapid diffusion of best-practice technologies, and in particular more capital- and organization-embedded forms of technology transfer such as licenses, foreign direct investment, and other forms of formal and informal knowledge diffusion. To what extent is the NSI policy framework still useful within this much more globalized world? In many (small) countries, the globalization trends described above might well have undermined much of the relevance of national innovation policies, systemic or not. Worse, it might even be argued that national systemic innovation policies have tended to miss emerging international trends, assuming that national weaknesses could only be addressed within the boundaries of national environments. Thus, it could be argued that in Europe, where the policy impact of the NSI literature was greatest, the NSI literature has barely contributed to the debates surrounding the creation of European research and innovation institutions such as the European Research Area, the European Research Council, or the European Institute of Innovation and Technology. As a result, the European policy debate has been characterized by continuous debates about the "rationale" for European research and innovation policies next to individual member states' national systems of innovation policies.
In this sense, therefore, the globalization of knowledge flows represents a real challenge for systems-of-innovation policies, developed primarily within a national context. Innovation performance of individual actors (firms, but also other organizations) is influenced by a broad set of institutions and patterns of interactions, which are specific to the historical context in which they emerged. Strongly connected to this view is the notion that innovation systems are not usefully assessed using the traditional notion of equilibrium, which implies optimality and welfare maximization. Differences between innovation systems exist and are at the root of differences in aggregate and microeconomic performance, but in order to explain such differences, the innovation systems approach argues that historical analysis (in a broad sense) plays a more important role than economic theory. Fourth, the national innovation systems literature is one that is primarily aimed at analyzing policy, and, correspondingly, it has sought, in many cases successfully, policy influence. As we have argued, the notion of innovation systems opens up possibilities for reinterpreting and reengaging existing policy alternatives, such as industrial policy and trade policy. What it offers policymakers is a framework characterized not so much by a different set of policy instruments as by a wider set of justifications for policy and a wider set of policy goals. Innovation systems offer the policymaker a tool for analyzing innovation processes and influencing them, without the strong restriction of innovation policy to market failures that characterizes the mainstream approach. This not only offers opportunities but also hosts threats. The opportunities are related to the broader set of processes that are embodied in the innovation systems approach, and which enable more channels for influencing innovation performance.
The threats are related to a potential misjudgment by policymakers of how innovation systems actually work, and even to the possibility that political hobby horses are implemented under the umbrella of a broad innovation systems approach. Finally, the innovation systems approach has managed to obtain a strong position in the literature and in policy circles, but its future depends on how well its proponents will be able to develop the approach further. Innovation systems have most often been analyzed in a qualitative way, or using an indicators-scoreboard approach. While this has been useful in reaching the conclusions outlined above, it is also clear that this approach has its limitations in terms of being able to reach concrete conclusions and concrete policy advice. It is one thing to reach the conclusion that institutions matter, but it is quite another to be able to suggest a concrete assessment of how institutional arrangements influence innovation performance, and by how much. In order for the innovation systems approach to remain influential, it needs to address these concrete issues. This has, arguably, happened already to some extent in the Nelson tradition of innovation systems, in particular in the literature on university–industry interaction and the role of university patents (e.g., Cohen et al., 1998; Mowery and Sampat, 2001). Such an empirically oriented approach to concrete issues might also be the way forward for the "European traditions" in innovation systems.

Chapter 28: Economics of Technology Policy (Steinmueller)

This section proposes a dichotomy between the theoretical foundations in economics for technology policy and the practice of technology policy (Mowery, 1995), which is informed by political considerations and economic interests. Policy is rarely dictated purely by economic analysis or theory; it often reflects assumptions that are contrary to those of economics. Underperformance resulting from externalities and the disincentive provided by free riding are likely to arise from several sources. Nelson helps to identify these by establishing a benchmark where underperformance would not be expected to occur: "To the extent that the results of applied research are predictable and related only to a specific invention desired by a firm, and to the extent that the firm can collect through the market the full value of the invention to society, opportunities for private profit through applied research will just match social benefits of applied research, and the optimum quantity of a society's resources will tend to be thus directed." (Nelson, 1959, p. 300). Departure from these conditions is likely to create a divergence between private incentive and the socially desired production of knowledge. The size of this divergence represents an opportunity cost of relying solely on market mechanisms, which should then be weighed against the costs that might arise from intervention. In short, the market incentives created by the intellectual property system address what has become known as the "appropriability" problem (Arrow, 1962; Teece, 1986), the combined effects of unpriced externalities and free-rider problems, which creates additional complexities in the economy and for technology policy. In recent years, the nature and extent of unpriced externalities have been questioned.
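The appropriability logic behind Nelson's benchmark can be made concrete with a little arithmetic: a project whose total social value exceeds its cost can still be privately unprofitable if the firm captures only part of that value. A minimal sketch, with all figures hypothetical:

```python
# Illustrative only: the appropriability gap described by Nelson (1959) and
# Arrow (1962). All numbers are hypothetical.

def private_npv(cost, gross_benefit, appropriable_share):
    """Net value of an R&D project as seen by the investing firm, which
    can capture only a fraction of the total social benefit."""
    return gross_benefit * appropriable_share - cost

def social_npv(cost, gross_benefit):
    """Net value of the same project from society's viewpoint: spillovers
    to imitators and users count as benefits, not losses."""
    return gross_benefit - cost

# A project costing 100 creates 150 of total value, of which the innovator
# can appropriate only half (the rest leaks to free riders):
cost, benefit, share = 100.0, 150.0, 0.5

print(social_npv(cost, benefit))          # 50.0  -> socially worthwhile
print(private_npv(cost, benefit, share))  # -25.0 -> privately rejected
```

When the appropriable share is 1, the two valuations coincide, which is exactly Nelson's condition for market incentives to be adequate; the gap between them is the divergence the text describes.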
New knowledge may not be employable by others without heavy investments in "absorptive" capability (Cohen and Levinthal, 1990), and it may not be reproducible without the direct assistance of the knowledge originator (Callon, 1994; Collins, 1974). Although the contributions in this area are either conjectural or anecdotal, a logical implication of a rigid application of the argument is that the originator's exploitation of the knowledge is the best that can be hoped for from a social welfare viewpoint. Perhaps more credibly, this argument may simply be that the costs of imitation are high. In either case, however, the rationale for intellectual property protection as a solution to the "free rider" problem is diminished, and the rationale for policies supporting the diffusion of new technologies is strengthened. As noted earlier, the second part of the conventional argument for technology policy is that market mechanisms may, from a social welfare viewpoint, misdirect the production and exchange of technological knowledge. Using conventional economic assumptions, the possibilities for misdirection of knowledge production and distribution are limited to the divergence between social and private discount rates or to the absence of markets to translate social preferences into market demand. Both issues are relevant if it is assumed that technological knowledge may be useful for future as well as present generations. For example, future generations might prefer a larger stock of petroleum reserves and a lower carbon dioxide content of the atmosphere, an outcome that could likely be achieved if the present generation were to make larger technological investments in alternative forms of energy usage and conservation. To make these investments, however, the current generation would likely have to divert resources from the growth or even the level of current consumption.
The absence of a market by which future generations can compensate for this investment means that later generations are reliant on the provisions made for them by the current generation, provisions that are only likely to be made through government intervention. The same sort of reasoning applies to other possible interventions, many of which also have income distribution consequences. For example, current investments might be made in improving the protein content of tubers and grains so that future generations in poorer regions would have a lower incidence of protein-deficiency-related diseases. This example involves the absence of capability to translate social preference into market demand, because those most in need of the invention are least able to pay for it. The foregoing constitutes, for the most part, the traditional contribution of economic theory to the rationale for technology policy. It suggests straightforward policies for situations in which a higher rate of technological change is desired—most of which amount to increasing the private returns to research and development (R&D) investment. For example, one may compensate initiators (innovators) directly through subsidy, to offset the market inadequacies noted by Nelson and Arrow, or provide a stronger set of IPR rules to increase returns (and investments in winner-take-all races) for new knowledge, whose exclusive use may create market power and an offsetting deadweight social welfare loss. Beyond this, it is in the interest of the government to assure adequate incentives for research underlying the procurement of technology for public goods and for uses in which the government is a major customer (Dalpe et al., 1992). In these areas, the government has an important role as a "progressive" customer—taking a longer-term partnership perspective with suppliers that provides resources for innovation rather than minimizing short-term prices of goods and services procured.
With minor abridgement, these basic prescriptions are what might be described as "innovation environment policies," a benchmark from which additional arguments and assumptions are needed in order to justify greater or different types of intervention. This benchmark has, however, largely been ignored in the actual practice of advanced industrial countries, suggesting either that technology policy has a political salience which defeats rational economic calculation, or that there is a need to revise the bases for economic calculation. The role of the state in allocating resources for the production and exchange of knowledge expanded enormously following World War II and continued to expand for the remainder of the century (Mowery and Rosenberg, 1989). Instead of a remedy for market ills, technology policy became an expression of the collective will of societies, initially focusing on constructing a semblance of national security in the presence of states armed with nuclear weapons and, perhaps to alleviate the grimness of this task, on promoting the peaceful uses of science and technology. The idea that it is within the power of the state to marshal the forces of technological change is certainly an appealing one in political discourse, and was a centerpiece in optimistic assessments of the role of government by Bush (1945) and later by Wilson (1963), who observed the following: ". . . the key to our plan to redynamise Britain's economy, is our plan to mobilise the talents of our scientists and technicians, redeployed from missiles and warheads, on research and development contracts, civil research and development to produce the new instruments and tools of economic advance both for Britain and for the war on poverty in underdeveloped areas of the Commonwealth and elsewhere." With a larger role for the state, the vision of state involvement in technology policy became ever more expansive.
Among the rationales offered was the responsibility of the state to respond to "social needs" and to construct "mission-based" policies for advancing the scientific and technological frontiers evoked by Vannevar Bush. For the most part, the expansion of state involvement in meeting social needs and launching mission-based policies evolved with only modest reference to economic justification (Mowery, 1995; Nelson, 1987). That is, in the historical context of the United States in the 1960s, the arguments of Arrow and Nelson were largely secondary to issues that today we would call large technical systems (e.g., early-warning systems) or infrastructure (e.g., further extension of rural electrification and telecommunication networks). Many of the areas in which the state became involved were considered to be outside the scope of the market, for example, the initial development of nuclear energy for nuclear submarines or space exploration. In other areas, such as agricultural research and the provision of "agricultural extension" services, where the information needs of dispersed small-sized agents (i.e., farmers) could be seen as subject to market failure and thus as justifying state intervention, policies and programs were premised on arguments concerning rural development and the "upgrading" of farming practice. These policies served many of the OECD countries well during the post-World War II era of expansion and well into the 1970s, weathering the initial disruptions in world energy markets and indeed stimulating renewed and expanded missions related to energy supply. By 1980, however, economic conditions had begun to worsen. The long postwar economic expansion, which had been accompanied by greatly liberalized international trade, was followed by a period of uncertainty, following the oil shocks of the 1970s and the widespread recession of 1980–1983 in the United States, Europe, and Japan (Artis et al., 1997).
In the United States, in particular, this period saw the deepest recession since the Great Depression. These events stimulated a renewed interest in the economic potential of technological change for growth and employment. In particular, discussion highlighted the potential of new "sunrise" industries to replace the "sunset" industries of earlier mass industrialization (Thurow, 1980). Earlier discussion of the issues surrounding which nation or nations were leading in developing such industries was revived and extended. "Technology gaps," by which some nation—typically the United States or Japan—was seen as "forging ahead" to other nations' detriment, were said to require new missions of technological advance or organizational change. Although many economists remained skeptical of the more expansive aims of the state in technology policy, the economic conditions in the early 1980s focused renewed attention on the possibility of state intervention to foster technological change supporting commercial objectives, in the belief that it would improve conditions of growth and employment.

2.3. A more complex story: Endogenous and localized technological change

The contest between theory and policy described in the prior subsection can be summarized briefly. On the one hand, throughout the 1960s and 1970s, there was a strong economic justification for the public support of science and a much weaker justification for public intervention in technology. On the other hand, policymakers were driven, for a variety of reasons, to implement a wide range of technology policies, a trend that gained momentum during the 1980s. Throughout this period, most economists were content to treat science (i.e., knowledge that was not appropriable through intellectual property) as exogenous to the economic system, except to the extent that its support as a public good was an element of fiscal policy.
In addition, during this time most economists also took technological change as exogenous, while recognizing intellectual property institutions, public procurement, and certain public missions as significant influences on its rate and direction. During the 1980s, however, there was considerable pressure to revise these beliefs in the light of the "competitive challenge" that was perceived in virtually every country. By the mid-1970s, the earlier work of scholars such as Denison (1962) and especially Kendrick (1961), revealing that at a sectoral level productivity growth was very unevenly distributed, began to be considered relevant to crafting a "useful" theory of innovation policy (Nelson and Winter, 1977). Complementary arguments by policy analysts such as Thurow (1980) about the possible contributions of the "sunrise industries" noted above were highlighted, as were specific structural challenges in sectors such as steel, consumer electronics, and automobiles. The possibility of targeted (rather than horizontal) industrial promotion policies began to be appreciated, a change in economic analysis that paralleled the sorts of policies that were being developed. These developments were further enhanced by the "new growth theory," which sought theory and evidence that productivity growth was endogenous rather than exogenous in the operation of the economy (Krugman, 1979, 1986; Romer, 1986). Scholars pursuing the new growth theory advanced various conjectures about the interdependence of productivity growth with changes in the level of inputs or experience. Many of these conjectures have yet to produce definitive policy implications. Collectively, however, they again suggest that it may be desirable to target industries where greater increases in productivity may be needed or expected.
The second development which was to stimulate new approaches to technology policy was a reexamination of the traditional assumption of "perfect information," in which all economic actors were assumed to be simultaneously well informed about technology, or "production possibilities." This assumption of widespread knowledge, also known as the proposition that knowledge is a global public good, implies that, while some knowledge may be privatized due to IPR and therefore unavailable for use, all knowledge is in principle accessible to all actors. Fagerberg and Verspagen (2002, p. 1292) offer the following retrospective view on issues of the distribution of technological knowledge: "It emerged mainly because of the failure of formal growth theories to recognize the role of innovation and diffusion of technology in global economic growth (Fagerberg, 1994). These formal theories either ignored innovation–diffusion altogether, or assumed that technology is a global public good created outside the economic sphere, and therefore could (should) be ignored by economists. However, it became obvious for many students of long-run growth that the perspective on which this formal theorizing was based had little to offer in understanding the actual growth processes. Rather than a global public good, available to everyone for free, it became clear to observers that there were large technological differences (or gaps) between rich and poor countries, and that engaging in technological catch-up (narrowing the technology gap) was perhaps the most promising avenue that poor countries could follow for achieving long-run growth.
But the very fact that technology is not a global public good, i.e., that such technological differences are not easily overcome, implies that although the prospect of technological catch-up is promising, it is also challenging, not only technologically, but also institutionally (Gerschenkron, 1962)." Assuming imperfect distribution or diffusion of technological knowledge under market incentives opens up possibilities for governments to play a proactive role in improving the terms of trade relative to other nations. The third development that opened new approaches to formulating technology policy was the resurrection and extension of the field of economic geography, a broad development drawing upon elements of new growth theory and an increased appreciation of the technological sources of agglomeration or colocation. In economics, the renewed attention to economic geography is often linked to the contributions of Paul Krugman, but it also reflects a resurgence of empirical work within the geography and economics disciplines. Research in this area revived interest in the sources of agglomeration or clustering of innovative activities and fostered a lively debate, including issues such as the regional disparities that clustering might bring within a country and the extent to which government policies might initiate or promote the growth of a cluster or enhance its prospects (Feldman and Kelley, 2006). This literature also recognized potential limits to clustering, arising either from its own internal processes (e.g., congestion effects; Folta et al., 2006) or from the need to consider the allocation of efforts between different regions. In each of these areas, there were new opportunities for the formulation of policy.
A particularly troubling issue from the outset in this line of thinking was the extent to which it might encourage policies aimed at supporting localized development that would recapitulate earlier claims and policies concerning infant industries, import substitution, or other protectionist measures. This very brief introduction to developments in the understanding of productivity, technology transfer, and the localization of development illustrates three features of the relation between the "evidence base" and policy. First, each of these theoretical developments has enlarged the scope of potential policy intervention relative to the "innovation climate" prescriptions discussed above. Second, new theories have often provided a permissive license for policy actions while the policy implications of those theories are still very uncertain within the academic community. For example, the new growth theory has provided further justifications for interventions such as R&D tax credits and the reduction of capital gains taxation, even while it remains uncertain what the relative contributions of investment in intangible and tangible capital might be to the growth of productivity within endogenous growth models. Third, the timing and direction of economic research suggest that theories about the interaction between technology and the economy are subject to feedback effects from policy initiatives. (A stronger statement would be that the selection processes governing salience and reputation in economics are strongly linked to the political exigencies of the age, a point that both Galbraith and Mirowski have argued.) When offered as solutions to the "underinvestment" problem, horizontal policies are often implemented as a tax credit applied either to a firm's total R&D spending or to increments in this expenditure.
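The difference between the two credit designs, and the inequity for firms already committed to high R&D levels, can be shown with a toy calculation; rates and figures are hypothetical and do not reflect any actual tax code:

```python
# A "volume" credit on total R&D spending versus an "incremental" credit
# on spending above a base amount. All rates and figures are illustrative.

def volume_credit(rd_spend, rate=0.125):
    """Credit on a firm's total R&D spending."""
    return rate * rd_spend

def incremental_credit(rd_spend, base, rate=0.25):
    """Credit only on spending above a base amount (e.g., a moving
    average of the firm's past R&D); a firm already spending at the
    base level receives nothing."""
    return rate * max(rd_spend - base, 0.0)

# Firm A has long spent 100 per year; Firm B raises spending from 20 to 50.
print(incremental_credit(100, base=100))  # 0.0: the committed firm gets nothing
print(incremental_credit(50, base=20))    # 7.5: only the newcomer's increase is rewarded
print(volume_credit(100))                 # 12.5: a volume credit rewards both
```

The incremental design targets *additional* effort, which is what the underinvestment justification calls for, at the cost of penalizing firms whose spending was already high.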
Since the "underinvestment" justification implies the desire for additional R&D effort, incremental funding is often incorporated in the design, despite its apparent inequity for those firms who have previously committed to high levels of R&D. The obvious moral hazard problem is that firms might claim activities that they would otherwise conduct, or have been conducting, as R&D expenditure. Averting this hazard requires an auditable definition of R&D activity and active enforcement. More subtle forms of opportunistic behavior are, however, possible. Since engaging processes of self-selection is the aim of this funding scheme, it is not appropriate to regulate the allocation of expenditures by the firm, other than to assure they are an allowable expenditure. There are, however, opportunities for R&D expenditures on aims that are unrelated to any particular goal of the program, such as productivity improvement. Some of these aims may be desirable from a public welfare viewpoint—for example, performing pollution abatement research to reduce potential penalties. Others are somewhat more uncertain, for example, producing new varieties of a product in an effort to raise rivals' costs, or research assessing the potential of technologies that are not actually utilized in order to reduce the risk of being displaced by rivals. Still other aims may be questionable uses of public resources, for example, when groups of firms use the subsidy to enter "winner take all" contests such as patent races. Collectively, these amount to potential leakages, or "slips between cup and lip," in this form of funding. In recent times, protectionist measures aimed at bolstering infant industries or providing an incentive for import substitution have generally been proscribed through international trade agreements.
While it is generally accepted that countries will seek to support and promote domestic industries, doing so by shielding them from import competition in the domestic market or by financing their competitive struggles with foreign firms was prohibited as being inconsistent with the general principle of free trade. Even if some merit might exist for supporting infant industries, as many countries have done historically, implementing this support through protectionist measures would complicate trade governance by blurring the boundary between mercantilism and efforts at industrial promotion. Acceptance of this argument is by no means universal. In the context of this chapter, the pivot of the controversy is the assumption that the prohibited actions are either ineffective or redundant to other policies that would be similarly effective. This assumption is supported to the extent that knowledge is a global public good, or that the available processes of knowledge generation and distribution that do not contravene agreements (such as thematic funding) will suffice to allow entry into new areas of production or commercial activity. The assumption is contradicted if these conditions do not hold. While it is straightforward to find examples of countries entering new industries, contradicting a strong version of the infant industry argument, it is possible, although nearly impossible to establish, that prohibitions of protectionist measures have prevented entry into "important" industries (a broad counterfactual) or created "weak" entrants (for which there may be many other reasons). The same problems apply to the analysis of historical examples where it is claimed that protectionist measures were of critical importance in supporting domestic technology development. While the fact of such measures is undisputed, the counterfactual claim of what would have happened in their absence is largely speculative.
For example, in the case of the United States, domestic pig-iron-making capacity was aided by protectionist measures in the nineteenth century, with greater effect prior to the Civil War than after it; in neither period, however, is it likely that the industry would have disappeared without the tariff (Davis and Irwin, 2009; Irwin, 2000). The counterfactual (without protectionist measures) international division of labor in steel making in the latter half of the nineteenth century, during the most active period of American industrialization, is thus a matter of pure speculation. The premise for finance-related technology policies is that private financial markets are too conservative or that risk-taking investment is institutionally underdeveloped in a particular national context. The standard of comparison for reaching these conclusions is usually the US market, where venture capital organizations are most numerous and highly developed. Unfortunately, policies aiming to support greater availability of funding for innovation risks "like the United States" often neglect key features of US institutions and markets. These include the active role of venture capitalists in selecting management teams and monitoring company activities, the existence of intermediate or "mezzanine" funding from investment banks for those companies that have a good prospect of becoming publicly held, a very active market in the sale of smaller companies to larger ones, and a well-established set of institutions for "initial public offerings" (IPOs) for companies at an early stage in their life. In other words, a very large and complex system is involved rather than a single type of institution or funding channel. Finance-related technology policy measures are commonly based upon the claim that the private sector places a lower valuation on investment in innovation than is appropriate from a social welfare perspective.
As a general proposition, this claim is nearly impossible to substantiate. It is, however, more straightforward to argue that a particular sector or activity is underappreciated by the private sector. These may involve issues of intergenerational equity (e.g., alternative energy technologies); health and safety (e.g., development of pharmaceuticals benefiting small or poor populations); or public procurement (e.g., educational technology). In such cases, providing incentives for private financial investment may overcome investor resistance through a signaling effect, or directly alter the level of innovation-related investment. Modern innovations often involve multiple technologies, and the capacity to innovate may be influenced by the size and variety of the technically skilled workforce. The size of this workforce is often gauged using the measure of scientists and engineers per 1000 population. Unfortunately, this measure is a fairly crude indicator of the qualities of such a workforce, because of substantial international differences in the nature of scientific and engineering education. Several such differences appear to be important. A heavy emphasis on theory and disciplinary boundaries appears to reduce the number of individuals capable of bridging between disciplines and between theory and practice. The demonstration of a high degree of originality during the pursuit of higher degrees in scientific and engineering subjects may contribute to the pool of individuals capable of generating independent research directions or taking innovative initiative. Participation in world-leading experimental research seems to provide unique qualities in education and training, as such research necessitates invention and thus generates sparks of insight that may be kindled into innovative flames. These types of distinctions are all highly qualitative and difficult to separate from other institutional and environmental influences.
For example, a high degree of originality and participation in world-leading research during degree studies are likely correlated with a high level of support for university-based research. Nonetheless, countries in which university research has historically not been so well supported, such as Japan, seem able to devise other methods of creating the skill base needed for innovation, perhaps assisted by a relatively high number of individuals trained in the sciences and engineering before entering the labor force. Faced with these uncertainties, a common belief in many countries is that it would be desirable to increase the proportion of undergraduate enrolments in science and engineering relative to other university studies. The mechanisms for translating this "desire" into action are less developed. Historically, one of the largest efforts to influence the size of the science and engineering workforce was the 1958 National Defense Education Act (NDEA) in the United States, a response to worries provoked by the Soviet launch of the Sputnik satellite the previous year (Flattau et al., 2006). Direct government intervention in the market acquisition of intellectual property in support of innovation is relatively rare. It is, in principle, a second important channel by which technology policy could shape the rate and direction of innovation. A clear example of such a policy was the Japanese government's historical effort on behalf of domestic firms to regulate the licensing of foreign technology (Dore, 1986). This effort involved creating a specific unit of account so that the balance of trade in technology licenses could be monitored. Assistance in negotiating licenses was an important element in this policy (Lynn, 1998). In common wisdom, it is expected that innovations that are fundamentally new provoke skepticism, resistance, and caution (Rogers, 2003).
As a result, there are often long delays between the introduction of an innovation and the acceleration in the rate of its uptake. These observations, consistent with the empirical time path of adoption decisions (what is often called the diffusion process), can be reconciled with conventional economic theory in several different ways, all of which involve "bending" standard assumptions. The least distortion is to assume that the innate characteristics of potential adopters differ in a way that orders the timing of their adoption decisions. For example, potential adopters who might gain the most from the claimed improvements offered by an innovation will be the first to adopt, while those standing to gain less will postpone their decision. A more significant departure from the standard assumption of perfect information is to assume that potential adopters have uneven knowledge about the benefits to be gained from adoption. In this case, adoption rates will parallel an information diffusion process, and a variety of models may be used to describe such a process (Geroski, 2000; Stoneman, 1987).

Technology policy may aim to influence the rate of diffusion for several different reasons. First, if we assume that learning is the consequence of cumulative output, a faster rate of adoption will be paralleled by a more rapid rate of cost reduction, generating increases in social welfare. Second, the technology being diffused may improve productivity for its users or for other parts of the economy; in this case, more rapid diffusion will accelerate the overall rate of growth of the economy and of social welfare. Third, it may prove desirable to slow the rate of diffusion if alternative and more beneficial technologies are likely to be offered in the future.
For example, the diffusion of new packaging technology using materials that contribute to solid waste management problems might be discouraged to favor future prospects for more environmentally suitable technologies. This reason is identical to the second, except that the externalities from adoption are negative rather than positive.

The "systems of innovation" perspective is becoming more influential in the practical formulation of policy. There are numerous reasons for this, encompassing the extended debate on national technological competitiveness noted earlier, which focused attention on national policy differences; increased attention to the specific contribution of universities to science-based industries such as biotechnology; and the growing prominence of particular regions such as Silicon Valley in the United States, Cambridge and Oxford in the United Kingdom, and Baden-Württemberg in Germany. By their very nature, systems of innovation approaches to technology policy are complex in rationale and implementation. In general, however, the consideration of intervention begins with a perception of a dysfunction in the existing system (e.g., the absence of apparent technology transfer from a public institution into the private sector) or with questions about comparative performance (e.g., where is our Silicon Valley? how are we doing at developing frontier technologies?). When the processes of technological innovation are viewed as a system of interconnected actors with a distribution of authority and expertise, it is possible to identify particular functions or capabilities that may be underdeveloped or even missing. Within a theoretical model of perfect information or complete markets for technological knowledge, this possibility does not arise. However, if markets are incomplete and knowledge is imperfectly distributed, then knowledge of great value in other contexts may be "trapped" in a particular organization.
It is, of course, an empirical question whether either of these theoretical models is an adequate approximation of reality. Markets for technology certainly do exist, and information and knowledge are distributed through market exchanges. However, it is also likely that information asymmetries are widespread and that important issues arise not only in making arrangements for the exchange of existing technology, but also in identifying potential applications and articulating potential needs. This line of reasoning provides some structure for identifying portions of systems of innovation of particular concern. We know, for example, that information asymmetries are generally larger in markets characterized by numerous small- and medium-sized actors, that differences between organizational cultures impede communication, and that ideas from outside an organization are often subject to the "not invented here" syndrome (Cohen and Levinthal, 1990).

These "systemic failures" can be addressed by devising appropriate organizations. For fragmented markets, organizations whose mission is to generate, acquire, and promote technological change can be established with specific "outreach" and coordination missions. Historically, these sorts of programs have met with mixed success, and there are difficult problems in aligning the priorities of a larger centralized organization with the priorities assigned by its clients (Etzkowitz and Leydesdorff, 1998). For differences in organizational culture, it is possible to devise "intermediate" organizations with a specific mission to engage in applied research of immediate value, to provide relevant consultancy services, or to focus on issues of information dissemination. Germany's Fraunhofer-Gesellschaft is an exemplar of such a complementary institution for the case of industrial research. It aims to conduct applied research of "direct utility to public and private enterprise" (Fraunhofer-Gesellschaft, 2003, p. 5).
Information dissemination activities are often incorporated in other policy initiatives. Thus, the US cooperative extension service is also an example of an organization serving the role of information disseminator. In recent years, a growing share of information dissemination activities is delivered through the Internet, with services such as Business.Gov (the name for services in the United Kingdom, the United States, and Australia) offering advice about establishing new businesses, sources of financing, and the delivery of various business-related services, primarily directed at small and medium-sized enterprises. It would be incorrect to say that organizations such as those identified in the preceding three paragraphs were created with a clear "systems of innovation" perspective in mind. Instead, the "systems of innovation" perspective provides a way to interpret the services provided by public institutions and an approach to comparing their performance. Despite the variety and size of these types of initiatives, no government institution in any country appears to be capable of systematically analyzing all the initiatives operating in its own country, let alone performing an international comparative analysis. Since such an inventory would be a necessary first step in developing coordination policies, it may be said that a "systems of innovation" approach to public administration is still in its infancy.

In benchmark economic analysis, when technology is not taken to be exogenous to the economic system, it is treated as a commodity whose production and sale are subject to the same principles of supply and demand as other commodities (Arora et al., 2006). As we have observed throughout this chapter, this approach may ignore the informational content of technological knowledge and create an artificial scarcity for a good whose marginal cost of reproduction may be negligible and is certainly less than the original cost of generation.
The other side of this paradox, however, is that without the capacity to earn rents (profits larger than those for other, less risky, activities), the private sector may choose not to invest in the generation of new technology. Even if public sector officials could be given incentives similar to those of private actors, public sector funding for the creation of technological knowledge is politically problematic due to the necessity of accepting frequent failures. One path out of this thicket that has attracted increasing attention is the potential to encourage private actors to form "clubs" to pursue technological discovery and development (Romer and Griliches, 1993). It may be necessary to provide some cofunding for such endeavors to overcome transaction costs and to signal the perceived value of a cooperative research effort. While similar in its cofunding requirement to funding provisions often employed on the supply side, this type of policy takes explicit account of the systemic nature of innovation, for example, connecting potential users with producers, linking to public research organizations or universities, and taking into account vertical relationships between different segments. A good example of such a club is the Interuniversitair Micro-Elektronica Centrum, now known as IMEC, a Belgian R&D effort supporting integrated circuit, nanotechnology, and related research. Established in 1984 as a nonprofit organization and funded in part by the Flemish authority in Belgium, IMEC currently receives less than 20% of its budget from government. IMEC aims to conduct industrially relevant research 3–10 years ahead of industry needs and has a "tiered" structure for access to industrial results based on the level and timeliness of funding by partner companies. In the United States, Sematech and SRC play similar roles (Grindley et al., 1994).
Long-term success is not guaranteed for such endeavors: one advanced computer development effort, the Microelectronics and Computer Consortium (MCC) (Gibson and Rogers, 1994), was eventually unable to find common agreement among its partners on research projects and was dissolved after 20 years of operation.

It should be apparent at this point that there are no panaceas in technology policy. All known designs have potential flaws or limitations. Perhaps the most common fault is simply that the policy misses the target: it does not deliver what is intended and, as a consequence, may be viewed unfavorably in comparison with other government expenditures. Although in recent years this has not constrained the willingness of policymakers in many of the wealthier countries to launch new initiatives, it does raise concerns about whether anything can be learned from the experience of planning and implementing technology policy, including the choice of design, that might constitute an advance or improvement. Before examining potentials for improvement, it is useful to reflect on the contribution of uncertainty to the problems of planning a technology policy program. The invention of the integrated circuit is a case in point. The need for a compact electronic module comprising different electronic components was known to the US Department of Defense, which funded several thematic research programs that sought to produce such a device (Kraus, 1973). Arguably, these programs fell victim to a definition of objectives that was "too narrow," a hazard in defining thematic research programs noted above. Nonetheless, these programs suggested to the actual innovators of the integrated circuit, Jack Kilby and Robert Noyce, that a demand existed for a device that could integrate electronic components.
As Braun and Macdonald (1982) note, prominent members of the existing electronics industry were entirely unimpressed that a few transistors and resistors could be assembled on a single "chip" of silicon crystal. Thus, while the thematic programs failed completely in meeting their objectives, their existence signaled the potential value of the integrated circuit to a "lead user" (the military) despite the skepticism of incumbent players in the electronics industry. Viewed in this light, the US Department of Defense programs were a failure as thematic research and a spectacular success as a signaling device. Yet there is no evidence that signaling was intended or considered as a program aim, and an evaluation of the programs based on their terms of reference would conclude that they had been unsuccessful.

We have so far considered two fundamental aims of technology policy. First, technology policy might aim to expand and accelerate the rate of technological change in order to raise productivity and hence social welfare. Second, the direction of technology might be pointed toward social needs such as defense, education, health, or the environment. It is useful, at this point, to introduce a third possible aim: simply to improve the processes of technology generation, diffusion, and utilization. It is in the nature of contemporary discourse that many will find the previous sentence incomplete; it begs the question "for what?", to which a possible answer is: for its own sake. The perspective is simply that, as a human creation and institution, technological progress is more than an "instrument" to some other purpose; it is a purpose in itself, like art, music, or religious observance.
For those who are sympathetic to this view, all of the instrumental and socially beneficial features of technology are collateral gains or by-products of the pursuit and exercise of knowledge, Feynman's "pleasure of finding things out" (Feynman, 2007). It is appropriate to add this category, since its omission seems only to diminish the human meaning of the pursuit of science and technology, a meaning that impinges on economic analysis not only through the two possibilities already identified, but also through the choices individuals make about their life's work and about society's collective purposes. Considering this issue also enriches the reinterpretation of the policy designs.

The technology policy planning process involves consideration of goals, capabilities of the sponsor, capabilities of the performer, and a control structure. A control structure is needed both to limit the opportunistic behaviors described in the previous section and to provide the possibility of steering the policy during and after implementation. The control structure also generates the information needed for evaluation, assuring accountability and recording the lessons gained from experience. Each of the policy designs described in the previous section suggests a somewhat different configuration of these components. Table 2 illustrates the variety of configurations required. In Table 2, the entries in the "capabilities of sponsor" column are meant to reflect the minimum level of capabilities required; having very sophisticated capabilities is likely to add further benefit. The category "very sophisticated" means that the sponsor must have a working knowledge of industrial history and dynamics, including a knowledge of the capabilities and limitations of the existing actors, the potential for entry (including possibilities arising from international competition), and a thorough knowledge of technological opportunities and trade-offs.
Very sophisticated capabilities in industry analysis are fairly rare within government, and only certain nations, for example, Japan, have benefited from the systematic development of these capabilities. Even when these capabilities exist somewhere in government, they need to be applied by the sponsor, typically a specific agency that may or may not be the one possessing them. A typical approach is to "hire in" industry expertise on an ad hoc basis in an attempt to raise sophistication levels. There are two problems with this. First, without a sophisticated understanding of the issues, it is difficult to absorb or question such expertise, or even to properly write the requirements specifications for such services. Second, existing industry expertise is often generated by providing consulting services to the large existing players in an industry and will therefore reflect a bias toward the predominant clients of the experts. These two problems interact: in an attempt to avoid the second problem, expert panels are constructed. Expert panels, in turn, often involve divisions of opinion that can, in principle, be very useful to sophisticated sponsors. In practice, a preference for clear recommendations often leads to a consensus report and the loss of the variety produced by the deliberation process, or to an entirely deadlocked outcome and a report consisting of platitudes. Even for very sophisticated sponsors, monitoring and evaluation of policy effects are very important, because existing understanding may be inadequate to the complexities brought about by the policy.

An influential comparative study by Ergas (1987) derived two basic typologies of technology policy as revealed in national practice: mission-oriented and diffusion-oriented. These typologies are useful for thinking about the nature of policy planning and are at a higher level of abstraction than the design models considered in the previous section.
Ergas takes the view that technology policy is less a matter of design than a reflection of the evolution of different national practices resulting from interactions between historical events, governmental structure, and persistent patterns of technological specialization. The countries that Ergas characterized as mission-oriented (the United States, the United Kingdom, and France) pursue technology policy in terms of "big problem" issues such as defense, health, and education, and do so in a context of striving for international strategic leadership. The diffusion-oriented countries (Germany, Switzerland, and Sweden in Ergas' study) aimed to make the best use of technology within existing patterns of specialization, specifically with the aim of assisting domestic firms to be internationally competitive.

Evaluation of technology policy, as opposed to evaluation of specific programs, is surprisingly uncommon. Between 1985 and 1995, the United States was deeply engaged in a debate about international competitiveness, prompted by concerns about the future of the US electronics and automobile industries (Graham, 1992). Part of this debate involved attempts to assess whether US science and technology were "fit for purpose" in a number of areas (Shapley and Roy, 1985). Of particular concern was the loss of international market share in industries where the United States had traditionally been dominant, such as machine tools (Holland, 1989) and semiconductors (Howell et al., 1988). Several of the studies published during this debate specifically examined the role of science and technology policy, observing the absence of coordination and the distance between university and industry research.
The competitiveness debate raised the profile of technology policy to an unprecedented level, with a joint policy statement by the President and Vice President early in the Clinton Administration (Clinton and Gore, 1993) and a number of efforts to connect policy rhetoric with action (Ham and Mowery, 1995). During the Bush Administration, however, technology policy issues received less focused attention and the historical pattern of highly decentralized technology policy efforts prevailed. The absence of sustained interest in the critical evaluation of technology policy is not peculiar to the United States. In many countries, it is not unusual for an "ad hoc" review to be made, such as the UK's Strategic Decision Making for Technology Policy (Council for Science and Technology, 2007). It would, however, be unusual for the following recommendation of this report to be implemented: "Government should set in place mechanisms to repeat this process at appropriate intervals, likely to be approximately every 3 years. The decision on which technologies to focus upon would be made on the basis of these periodically updated reviews" (Council for Science and Technology, 2007, p. 8). The problem is immediately apparent in the second sentence of this statement. Throughout the government, this sentence is likely to be read as a threat to the control and initiative of a plethora of ministries and government agencies who view the technologies that they are supporting, or intend to support, as "strategic." Despite the "bottom up" approach of this particular study, the report itself appears under the signature of 17 individuals, few of whom have direct responsibilities for policy implementation. While it is conceivable that such a recommendation might be followed, ministerial politics make it unlikely. Such efforts may, however, be useful in signaling within government the need for higher priorities in specific areas, leading to alterations in funding priorities.
This is a somewhat different purpose and outcome than promised by the report's title and undoubtedly hoped for by those who engage in creating such reports. The difficulties of implementing a coordinated or strategic technology policy seem to be the result of the pervasiveness of science and technology issues throughout government and society. In this respect, it seems no more likely that a systematic technology policy could be established than a systematic social policy, housing policy, or health policy. All such areas are simply too large and comprise too diverse a collection of interests to be brought under any systematic process of governance or regulation. What might be hoped for in the future is the development of more transparent and complete inventories of the technology initiatives being undertaken. If accompanied by a more systematic organization of evaluations and commentaries from within and outside government, there would be a better opportunity to learn from experience and to create a more substantial body of literature to serve as a reference in future planning, implementation, and policy evaluation efforts.

At the outset, a "benchmark" theory was constructed based upon Arrow's and Nelson's arguments regarding the potential underperformance of markets in supplying adequate amounts of investment in new knowledge. In modifying this framework to address issues of technological as opposed to scientific knowledge, the centrality of intellectual property and appropriability issues was identified. While variants of this benchmark are commonly employed in governing technology policy initiatives, it was argued that the political salience of technology policy has led to policy outrunning theory.
Developments in economic theory related to sectoral imbalances in productivity advance, the implications of asymmetric holdings of information, and the role of localization in the generation and distribution of knowledge have provided opportunities to reconnect policy with theory. In normative terms, technology policy should be built upon firmer theoretical and empirical foundations rather than relying upon ad hoc expertise or political predilections. Constructing such foundations is the principal challenge for this area in the coming years.
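As an aside, the two stylized mechanisms this chapter leans on, the S-shaped diffusion curve and learning-by-doing cost reduction, are easy to make concrete. The sketch below is purely illustrative (the saturation level, diffusion speed, midpoint, and learning elasticity are all hypothetical values I chose, not figures from the Handbook); it pairs a logistic adoption curve, the workhorse of the "epidemic" information-diffusion models surveyed by Geroski (2000), with a standard power-law learning curve:

```python
import math

# Illustrative sketch only: parameter values (m, beta, t0, b) are hypothetical.

def logistic_adoption(t, m=1.0, beta=0.8, t0=10.0):
    """Cumulative adoption share at time t under an 'epidemic'
    (information-contagion) diffusion model: an S-curve rising
    toward the saturation level m, with midpoint at t0."""
    return m / (1.0 + math.exp(-beta * (t - t0)))

def learning_curve_cost(c0, cumulative_output, b=0.322):
    """Unit cost after learning by doing: cost falls as a power of
    cumulative output. b = 0.322 corresponds to an '80% learning
    curve' (roughly a 20% cost reduction per doubling of output)."""
    return c0 * cumulative_output ** (-b)

# Faster diffusion means more cumulative output at any given date,
# and hence lower unit costs: the welfare channel described above.
for t in (5, 10, 15):
    share = logistic_adoption(t)
    cost = learning_curve_cost(100.0, 1.0 + 1000.0 * share)
    print(f"t={t:2d}  adoption={share:.2f}  unit cost={cost:.1f}")
```

The pairing captures the chapter's first rationale for diffusion policy: any intervention that shifts adoption earlier raises cumulative output sooner and, under the learning-curve assumption, lowers unit costs sooner.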

Chapter 29: Military R&D and Innovation (Mowery)

This chapter surveys the role of military R&D in innovation. Government military establishments have for generations exerted an important influence on technological change in most industrial economies; indeed, many scholars argue that the military has influenced innovation since antiquity. Nevertheless, although the influence of military activity (waging wars, acquiring weapons, training personnel) on technological change has been pervasive for centuries, the channels through which it operates have changed significantly, just as the structure and scale of national military establishments and the industrial societies within which they operate have changed. One of the greatest gaps in the vast literature on military R&D and innovation is the modest scope of comparative work: there is very little research on the influence (or lack of it) on innovation of military R&D programs supported by other NATO governments, which raises fundamental questions about the generalizability of the US experience that forms the foundation of this survey. An interesting and little-remarked result of the large-scale public funding of "organized innovation" by postwar military establishments was the growth of a considerable body of research on the economics of defense R&D. Expenditures by the US Department of Defense (DoD) also supported pioneering work by economists and other social scientists on the process and management of innovation within complex systems. Some of the seminal early work on the "economics of R&D," including important work by Arrow, Nelson, Scherer, and other scholars, was either inspired or directly sponsored by the military services in the United States during the 1950s and 1960s, laying the foundations for much of the research summarized in this Handbook.
Although the market failure rationale retains great rhetorical influence in justifying public investments in R&D programs, casual empiricism suggests that its influence over such public investments is modest at best. Most OECD nations' R&D investment budgets are dominated by programs that serve specific government missions, such as defense, agriculture, health, and energy; "market failure" underpins less than 50% of public R&D spending in most of these economies. As Figure 1 shows, in none of these nations does "nonmission" R&D account for as much as 50% of central-government R&D spending, and in most of the countries included in Figure 1, "mission-oriented" R&D spending accounts for more than 60% of R&D. The United States is an outlier, with large R&D programs in defense and health bringing the total "mission-oriented" R&D budget to well over 90% of federal-government R&D spending. Also noteworthy in Figure 2 is the relatively small share of central-government R&D spending accounted for by the "Bush-Arrow" form of R&D spending, nonmission-oriented R&D. This class of public R&D investment accounts for nearly 30% of reported central-government R&D spending in France and Germany, but is well below 20% in the United Kingdom and Canada, and barely exceeds 5% in the United States. The governance of many of these large public investments in mission-oriented R&D also bears little resemblance to the idealized portrait of the "Bush social contract" articulated in Guston and Keniston (1994). Rather than "scientists" choosing the fields in which large investments of public R&D funds were made, allocation decisions were based on assessments by policymakers of the research needs of specific agency missions, in fields ranging from national defense to agriculture.
Indeed, in at least one important postwar program of defense-related R&D investment, the US Defense Advanced Research Projects Agency (DARPA) investments that sought to create academic "centers of excellence" in the embryonic field of computer science, peer review played a minimal role (see Langlois and Mowery, 1996). The mission-agency programs of which military R&D has been the largest component within most governments' postwar R&D budgets thus pose a fundamental challenge to the prevailing welfare-economics justification for public R&D spending. The extensive literature on science and technology policy recognizes the important role of mission-agency R&D spending, but provides no framework for considering the reasons for such large-scale investments of public funds or for comparing and evaluating the design and effects of such programs. Ideally, a handbook devoted to the "economics of innovation" would devote a chapter to the general topic of mission R&D. The sheer scale and diversity of the programs included within this category, however, mean that any such chapter could provide no more than a superficial treatment of programs in fields ranging from agriculture to space exploration without exceeding the space constraints of a handbook chapter. This chapter's examination of military R&D nevertheless highlights some important features of mission R&D that deserve closer scrutiny and comparison across governments.

The military operations of states, city-states, and other political organizations in Europe and elsewhere have long influenced technological innovation, as authors such as McNeill (1982) have noted. Nevertheless, the effects of war on technological innovation have been a subject of considerable controversy, with some historians arguing for its positive influence on innovation (Kaempffert, 1941) and others (Milward, 1977; Nef, 1950) adopting a more skeptical view that considers the counterfactual case more explicitly.
With some important exceptions (Ruttan, 2006), most economic historians assess the effects of war on technological innovation as largely negative. Paradoxically, one of the primary reasons for the limited effects of war on technological change is the tendency for hostilities to engender a more conservative approach by the military services to technology management. As Milward (1977) and others have pointed out (indeed, this point is acknowledged at several points in Ruttan's discussion), mobilization for war since at least the mid-nineteenth century has involved a surge in military demand for existing weapons and systems that are available in a crisis situation and are compatible with established tactics and strategies. Wartime mobilization therefore relies on the increased production of weapons that were largely designed and developed prior to the outbreak of hostilities. The pressures of wartime mobilization focus R&D and related investments in weapons development on improving the reliability and performance of existing systems, rather than on developing radically new technologies. In addition to these effects on the focus of R&D and innovation, of course, wartime's "collateral damage" has tended, as Mokyr has eloquently highlighted, to retard innovation in most modern and premodern economies.

The distinction between "the effects of war" and "the effects of military R&D" on innovation is critical. This chapter deals primarily with the latter topic, in an effort to avoid the confusion created by the occasional merging of the two topics in other accounts. Although the scale of military operations grew dramatically during the late eighteenth and early nineteenth centuries, as mass armies were mobilized by the major European powers, the technologies underpinning military operations did not experience significant change, and the military lagged behind civilian applications in numerous fields.
In fields where civilian as well as military markets were significant, at least some important innovations, such as turbine propulsion, were adopted by the British Navy only after they had proven successful in civilian applications (see McBride, 2000). The new weapons (such as the airplane and the submarine) that proved so lethal during World War I, as well as the mobilization of national economies on an unprecedented scale, transformed the technological underpinnings of the military services of the industrial economies but had surprisingly modest consequences for the level and structure of military R&D investment. In both the United States and Great Britain, for example, private-sector R&D was of secondary importance during World War I. State-owned armories remained significant suppliers of weapons, and in the United States, wartime R&D spending was limited in scope, largely controlled by the uniformed services, and performed mainly in military arsenals and laboratories. In Britain, shortages of optics and chemicals resulted in the creation of government-controlled enterprises such as British Dyestuffs (which merged with Brunner Mond to form Imperial Chemical Industries in 1926), and new government research facilities in aeronautics (the Royal Aircraft Establishment) were created or greatly expanded. Demobilization after 1919 sharply reduced military expenditures on R&D and procurement in Britain and the United States during the 1920s. British rearmament programs during the 1930s focused on expanding production capacity for weapons systems (notably, aircraft) whose designs relied heavily on government laboratories, and military R&D contracts (in contrast to procurement funding) remained modest. In the United States, military R&D spending remained low through the 1930s. By 1940, total federal expenditures on R&D amounted to $83.2 million (1930 dollars), 39% of which was accounted for by the Agriculture Department.
The military share of the federal R&D budget, that is, R&D spending by the agencies included in the postwar DoD, amounted to $29.6 million, 35% of the total. By 1945, however, US military R&D spending had grown to more than $1.3 billion. The Manhattan Project, whose budget exceeded the R&D budget for the agencies included in the DoD during 1944–1945, was an engineering project of unprecedented scale and complexity that created an entire R&D infrastructure of federally funded laboratories, many of which were operated by US universities or corporations. US defense-related R&D spending has also been dominated by development expenditures throughout the postwar period, as Figure 5 reveals. “Development” expenditures have rarely accounted for less than 80% of DoD R&D spending during 1956–2005, while “basic research” has constituted less than 5% of DoD R&D spending. Although comparably disaggregated data are available for few other economies (see below for similar figures on UK defense R&D spending), it is likely that they would reveal a similar dominance of development spending. Development expenditures that are focused on specific weapons systems almost certainly produce fewer spillovers of knowledge into civilian applications than might flow from comparable expenditures on basic or applied research (see below for further discussion). Development programs in US defense-related R&D are also largely funded through contracts, rather than research grants, reflecting their tight focus on well-defined objectives. Both of these characteristics are important for empirical evaluations of the economic effects of defense-related R&D spending (see below for further discussion). There is little obvious trend in the US data in Figure 5, although the figure suggests an uptick in the share of development spending after the 9/11 terrorist attacks. 
Some of this recent increase in the share of development spending within the overall DoD R&D budget reflects the effects on DoD R&D spending of the extensive overseas combat deployments of US troops in Afghanistan and Iraq, consistent with the tendency for combat-related R&D spending to focus on near-term objectives. As is true of US defense-related R&D spending, British defense R&D is dominated by development activities. According to Schofield and Gummett (1991), approximately 80% of British government defense-related R&D spending is devoted to development (although other analyses suggest that the definition of “development” used in this accounting includes a broader array of activities than are included in the OECD’s Frascati manual (2002) definition), and “Within the research element [of its R&D budget], MoD [the U.K. Ministry of Defence] does not admit to performing basic research, according to the Frascati definition. It does, however, perform ‘strategic research’ and ‘applied research’. . .” (p. 83). How and why does military R&D affect innovation in the broader economy? There is no widely accepted theoretical framework for evaluating the effects of military R&D, beyond a general consensus that these effects are more likely to be significant in peacetime than during war. The products sold to military buyers rarely are employed in unmodified form in the civilian economy, and therefore do not contribute directly to improvements in the productive efficiency of the economy, although the large sums expended on R&D and related activities assuredly do support income and employment. Much of the civilian innovative impact of military R&D and procurement ultimately depends on the extent of indirect benefits that are associated with the application to civilian uses of knowledge or technologies originally developed with military R&D funds.
The extent of these indirect benefits remains controversial, since they are difficult to measure and since their magnitude depends on the policies followed by military agencies managing R&D and procurement programs. Moreover, the indirect nature of these benefits means that the potential opportunity costs of these military R&D and procurement programs are large but hard to measure, not least because the counterfactual case is so difficult to construct. For example, should one compare the effects of defense-related R&D and procurement in a specific technological field with the hypothetical results of comparable expenditures, allocated among a different set of performers and/or R&D activities, devoted to the same technological field? Should the counterfactual case instead consider the implications of comparable resources being devoted to R&D and related activities in different technological fields? Or should these benefits be compared with those resulting from similar expenditures of public funds on other activities entirely? One mechanism through which defense-related R&D investments can aid innovation is military funding for new bodies of scientific or engineering knowledge that supports innovation in both defense-related and civilian applications. Such investments may also support important institutional components of national innovation systems, such as universities, that provide both research and trained scientists and engineers. This channel of interaction is likely to produce the greatest benefits from defense-related investments in basic and applied research, rather than development. A second important channel through which defense-related R&D investment affects civilian innovative performance is the classic “spinoff,” where defense-related R&D programs yield technologies with applications in both civilian and defense-related uses. This channel of interaction can benefit from defense-related investments in technology development, as well as research.
But the civilian “spinoffs” associated with defense-related investments in “D” appear to be most significant in the early stages of development of new technologies, since these early phases often exhibit substantial overlap between defense and nondefense applications. As technologies mature, civilian and military requirements frequently diverge, and the civilian benefit from such “spinoffs” declines. A third important channel through which defense-related spending on new technologies can advance civilian applications is procurement. As in other areas of “mission-oriented” R&D, defense-related R&D investment is often complemented by substantial purchases of new technologies. Procurement may affect defense firms’ R&D spending directly (see below for discussion of the work of Lichtenberg, 1984), and defense procurement can affect the development of new technologies. The US military services, whose requirements typically emphasize performance above all other characteristics (including cost), have played a particularly important role during the post-1945 period as a “lead purchaser,” placing large orders for early versions of new technologies. These procurement orders enabled suppliers of products such as transistors or integrated circuits to reduce the prices of their products and improve their reliability and functionality. Government procurement historically has allowed innovators to benefit from learning by increasing the scale of production for early versions of the technology. The scope for “pure” knowledge-based benefits from defense R&D is limited by the composition of most national defense R&D programs, which, as I pointed out earlier, are dominated by “development” spending.
But in the United States, defense-related “R” investments (including basic and applied research activities as defined by the US Department of Defense) have accounted for a significant share of federally funded R&D in such fields as computer science (35% in fiscal 2001) or engineering (more than 30%; all figures from American Association for the Advancement of Science, 2002). Defense-related research spending contributed to the creation of a university-based US “research infrastructure” during the postwar period that has been an important source of civilian innovations, new firms, and trained scientists and engineers. Indeed, the restructuring of the US national innovation system between the 1930s and 1950s (see Mowery and Rosenberg, 1999) increased the scale and importance of university-based research, relying on a large federal research budget in basic and applied fields of science and engineering to create the “Cold War University” (Leslie, 1993; Lowen, 1997). There are numerous examples of technological “spinoffs” from defense-related R&D spending in the postwar United States, including the jet engine and swept-wing airframe that transformed the postwar US commercial aircraft industry (see below for further discussion). Major advances in computer networking and computer memory technologies, which found rapid applications in civilian as well as military programs, also trace their origins to defense-supported R&D programs. By contrast, light-water nuclear reactor technologies first developed for military applications proved poorly adapted to the civilian sector (Cowan, 1990). Defense-related procurement was particularly important in the postwar US information technology (IT) industry.
In other areas, however, such as numerically controlled machine tools, defense-related demand for applications of novel technologies had detrimental effects on the commercial fortunes of US suppliers and the US machine tool industry (Mazzoleni, 1999; Stowsky, 1992; see below for further discussion). The “spinoff” and “procurement” channels of interaction are most significant when defense and civilian requirements for new technologies overlap significantly and/or when defense-related demand accounts for a large share of total demand for a new technology. As a result, the influence of defense-related R&D and procurement on innovation within a given technology often declines as the technology and/or the supplier industry mature. Moreover, in some cases, such as IT, defense applications not only exercise less influence on the overall direction of technical development, they may lag behind those in the civilian sector, reflecting the reduced influence of defense-related demand and R&D investment on the innovative activities of private firms. This phenomenon has been particularly noteworthy in the IT sector in the United States, and some scholars (Alic et al., 1992; Samuels, 1994; Stowsky, 1992) have argued that the military services need to reform both R&D and procurement programs so as to exploit advances in civilian applications more rapidly. Much of this critical work has focused on the effects of military R&D in the United States and Great Britain, stressing the tendency of military programs to distort the innovative efforts of private firms, leading them to focus on technical performance at the expense of reliability, cost-effectiveness, or low-cost production technologies (among other critical accounts, see Best and Forrant, 1996; Dertouzos et al., 1989; Walker, 1993). In effect, this critique argues that the economic benefits of defense-related R&D are reduced by the specific requirements of military R&D and procurement programs.
As I noted earlier, some qualitative evidence suggests that spinoff benefits decline as technologies mature precisely because of the growing divergence between the requirements of civilian and military applications. But quantitative evidence on these arguments remains elusive. Other critiques of the effects of US military R&D and procurement programs in such sectors as semiconductors or computers that were prominent during the 1980s and early 1990s argued that these programs have supported the growth of industries populated by relatively small firms with limited financial resources and production capabilities that were unable to compete effectively in civilian markets with large Japanese and South Korean firms (Borrus, 1988; Dertouzos et al., 1989; Florida and Kenney, 1990). Like the “distortion” argument of the previous paragraph, this critical assessment implicitly appeals to a counterfactual argument, but the details of the alternative world are not developed. Nor do most such accounts present detailed evidence, beyond the correlation during specific historical periods between high levels of military R&D and competitive problems in high-technology industries. Neither critique can be dismissed, but detailed evidence on the specific ways in which these negative effects have been realized, on their links (if any) to the structural characteristics of military R&D programs in the United States and elsewhere, and on the magnitude of these negative consequences is lacking. Levy and Terleckyj (1983) and Griliches and Lichtenberg (1984) examine the productivity effects of publicly and privately funded R&D at the industry level for the 1949–1981 (Levy and Terleckyj) and 1959–1976 (Griliches and Lichtenberg) periods in the United States.
Both studies conclude that the contributions of federally funded R&D to productivity growth (measured respectively as labor productivity growth and total factor productivity growth in the two papers) are small and frequently indistinguishable from zero. Neither paper separates defense from nondefense R&D spending, although Levy and Terleckyj (1983) separate R&D contracts from other forms of federal R&D spending, and find that contract R&D contributes more significantly to measured productivity growth than does noncontract R&D, the contribution of which is indistinguishable from zero. Reflecting the fact that defense-related R&D is dominated by development funding, and that the majority of development work is funded through contract R&D, the Levy–Terleckyj study is one of a small number that find a positive productivity effect of a class of public R&D investment that is dominated by defense-related R&D spending. The study also concludes that the contribution of federal contract R&D to labor productivity growth is smaller than that of privately funded R&D. In contrast to most quantitative analyses, Levy and Terleckyj measure “IR&D” (independent R&D), and conclude that the contributions of IR&D to labor productivity growth are nonsignificant. Another set of studies separates government-funded R&D in defense-related and nondefense fields in cross-national analyses of economic performance and industry-funded R&D during the postwar period. Guellec and van Pottelsberghe (2001) controlled for the share of public R&D spending devoted to defense in an empirical analysis of the effects of public and private R&D spending on total factor productivity growth in 16 industrial economies (including the United States, Great Britain, and France) during 1980–1998. They found that defense-related governmental R&D spending had a negative effect on productivity growth, in contrast to nondefense government R&D spending, which had a small positive influence on productivity growth.
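The industry-level and cross-country studies summarized here all estimate variants of a single reduced-form framework relating productivity growth to R&D intensity, split by funding source. A stylized version follows; the notation is mine and purely illustrative, not the exact specification of any of the papers cited:

```latex
% Stylized R&D-productivity regression (illustrative notation only).
% TFP_{it}: total factor productivity of industry or country i in period t.
% R/Y terms: R&D spending as a share of output, by funding source
% (privately funded, government defense, government nondefense).
\Delta \ln \mathrm{TFP}_{it} \;=\; \alpha
  \;+\; \beta_{\mathrm{priv}} \, \frac{R^{\mathrm{priv}}_{it}}{Y_{it}}
  \;+\; \beta_{\mathrm{def}} \, \frac{R^{\mathrm{def}}_{it}}{Y_{it}}
  \;+\; \beta_{\mathrm{civ}} \, \frac{R^{\mathrm{civ}}_{it}}{Y_{it}}
  \;+\; \varepsilon_{it}
```

In these terms, the Guellec and van Pottelsberghe (2001) finding amounts to a negative estimated coefficient on the defense term and a small positive one on the civilian public term, while the earlier US industry-level studies in effect pool the two public terms and estimate a combined coefficient near zero.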
A second study by the same authors (2003) compared the effects on industry-funded R&D of government R&D funding (separating defense-related R&D spending), tax credits, and intramural R&D performance in 17 OECD economies (including the three nations listed above). Consistent with the results of their 2001 study, the authors found that defense-related R&D spending by governments tends to reduce industry-funded R&D, as does intramural R&D (defined in this study as defense-related R&D performed in public laboratories and in universities). These empirical analyses reach varied conclusions on the links among government-funded R&D, government-funded defense R&D, productivity growth, and industry-funded R&D investment. Both the longitudinal studies of US productivity growth and the cross-sectional country-level studies are also affected by the unusual position of the United States. The period included in many of these studies is one during which at least 50% and during some years as much as 70% of government-funded R&D in the United States was defense-related. As David et al. (2000) point out, empirical analyses of US post-1945 data tend to show substitution between public and private R&D more consistently than studies of other countries. The inclusion or exclusion of the United States in cross-national studies of the broader relationship between publicly and privately funded R&D, or that between publicly funded R&D and economic performance, thus may affect these studies’ results. The sensitivity of the studies’ results to the inclusion or exclusion of the United States in turn is likely to reflect the influence of defense-related R&D spending by the US government.
The nonmarket character of military R&D programs that was discussed earlier further complicates interpretation of these results—the economic benefits flowing from R&D investments in defense are largely indirect, which means that they are difficult to capture within the “knowledge production function.” Much of the qualitative discussion of “crowding out,” particularly studies of postwar Britain such as those mentioned earlier, emphasizes the potential effects of defense-related R&D in raising the costs of R&D activity for private firms as well as defense contractors. One of the most important mechanisms through which this type of crowding out may operate is the market for scientists and engineers. Goolsbee’s study (1998) of the effects of federal R&D spending during 1968–1994 found that federal R&D spending raised the wages of scientists and engineers. Although he did not separate the effects of defense-related R&D spending on the demand for scientists and engineers, his results suggest that these salary effects were greatest in engineering fields most heavily affected by defense spending (e.g., electrical and aeronautical engineers). Goolsbee’s data cover a period during which the federal share of national R&D spending declined from more than 60% in 1968 to less than 40% in 1994, while the defense-related share of federal R&D spending increased from 52% to 57%. Both overall federal R&D spending and defense-related R&D spending thus declined as a share of total national R&D spending, suggesting either that Goolsbee’s analysis understates the effects of federal R&D spending on salaries or that other factors not included in his specifications increased earnings. Nor does Goolsbee demonstrate that the increased salaries of scientists and engineers led to a reduction in the productivity or output of nonmilitary R&D investment, something that is implied by a “crowding out” argument. Lichtenberg argues that publicly funded R&D contracts in the defense field “. . . do not descend upon firms like manna from heaven . . .”, but instead respond to defense contractors’ investments of their own funds (some of which benefit from IR&D subsidies from the US Department of Defense) in R&D. Although he does not separate federal contract R&D that is specifically defense-related, it is likely, for reasons noted earlier, that the majority of the contract R&D funds in Lichtenberg’s data are defense-related. Lichtenberg’s empirical analysis allows for the possibility that firms may increase their privately financed R&D spending to enhance their prospects in procurement competitions. When the share of firm sales directed to federal customers is included, Lichtenberg (1987) finds that the effects of federal R&D on firm-level R&D spending are not significant. As a result, the effects of federal procurement spending cannot be divorced from those of federal R&D spending, and the “true” effects of public R&D spending on long-term private R&D investment are overstated in analyses that do not control for the endogeneity of contract R&D and procurement competitions. By explicitly incorporating the details of program structure, Lichtenberg is able to control for otherwise unobserved differences among firms receiving federal R&D contracts. Although Lichtenberg’s studies cover only US firms, the relationships among public R&D funding, private R&D funding, and procurement contracts highlighted in this work seem likely to apply in other nations with large defense R&D budgets, and suggest that “crowding out” may be a real possibility. Another recent empirical analysis of the cross-national determinants of national innovative performance provides an additional basis for skepticism about the effects of defense-related R&D spending on national performance. Furman et al.
(2002) find that industry-funded R&D as a share of overall national R&D spending and the fraction of national R&D performed by universities are significant in explaining cross-national differences in patenting, a measure of national innovative performance that is open to criticism but is relatively comparable across nations. Their results are relevant to this discussion because neither of these characteristics of national R&D spending tends to accompany high levels of defense-related R&D investment. Indeed, the scale of public investment in defense-related R&D appears to be negatively correlated with the fraction of national R&D investment funded by industry in a comparison of OECD economies. During the 1960s and 1970s, for example, both Great Britain and France (see Chesnais, 1993; Kolodziej, 1987) promoted “national champions,” large firms created through state-supported mergers, that enjoyed privileged positions as suppliers of both contract R&D and weapons systems. Walker (1993) argues that the attractions of these noncompetitive defense contracts led a number of large British firms that might otherwise have been active innovators in civilian markets to focus their efforts on defense, in another form of “crowding out.” Military sources have provided the majority of the funds for R&D investment in the US commercial aircraft industry during the postwar period; according to Mowery and Rosenberg (1989), military-funded R&D accounted for more than 74% of the total R&D investment in the industry during 1945–1982, and federal funds never accounted for less than 60% of annual R&D investment in this industry during 1985–2000 (see National Science Board, 2006). The electronics revolution that spawned the semiconductor and computer industries can be traced to two key innovations—the transistor and the computer. Both appeared in the 1940s, and the exploitation of both was spurred by Cold War concerns over national security.
The transistor had important potential applications in military electronics and computer systems, and federal funds, largely from the DoD, the AEC, and other defense-related agencies, accounted for nearly 25% of total industry R&D spending in the late 1950s. The bulk of this defense-related R&D spending during the 1950s was allocated to established producers of electronic components, who were not among the pioneers in the introduction of innovations in semiconductor technology. Paradoxically, the firms responsible for many of the key early innovations in semiconductors achieved them without military R&D contracts, relying instead on support from procurement contracts (Kleinman, 1966, pp. 173–174). One of the most important technological advances in the early semiconductor industry, the integrated circuit (IC), resulted from R&D undertaken within Texas Instruments, a transistor producer, with little or no DoD R&D funding. The firm’s development of the IC was motivated by the prospect of substantial procurement contracts, rather than the availability of R&D funds. Malerba’s discussion of the development of the Western European and US semiconductor industries emphasizes the importance of the large scale of military R&D and procurement programs in the United States, as well as the focus of defense-related R&D on industry performers: “. . .the size of American [R&D] support was much greater than that of either the British or the European case generally, but particularly during the 1950s. Second, the timing of policies was different: while the United States was pushing the missile and space programs in the second half of the 1950s/early 1960s, Britain was gradually retreating from such programs. Third, American policies were more flexible and more responsive than British policies.
Finally, research contracts in the United States focused more on development than on research, while in Britain, as well as in the rest of Europe, such contracts focused more on research and proportionately more funds were channeled into government and university laboratories. These last two factors meant that most R&D projects in Britain, as well as in Europe, were not connected with the commercial application of the results of R&D.” (1985, p. 82) As nondefense demand for semiconductor components came to dominate industry demand, defense–civilian technology “spillovers” declined in significance and reversed direction. By the late 1970s, “military specification” semiconductor components often lagged behind their commercial counterparts in technical performance, although these “milspec” components could operate in much more “hostile” environments of high temperatures or vibration. Concern among US defense policymakers over this “technology gap” resulted in the creation of the DoD Very High Speed Integrated Circuit (VHSIC) program in 1980, which sought to advance military semiconductor technology more rapidly. Originally planned for a 6-year period and budgeted at slightly more than $200 million, the VHSIC program lasted for 10 years and cost nearly $900 million. Nonetheless, the program failed to meet its objectives, demonstrating the limited influence of the federal government within a US semiconductor market that by the 1980s was dominated by commercial applications and products. The Internet was invented and commercialized primarily in the United States, although scientists and engineers in other industrial economies (especially France and the United Kingdom) made important contributions to computer-networking technologies during the 1970s, and the key advances behind the creation of the “World Wide Web” were invented at CERN, the European nuclear physics research facility.
Nonetheless, US entrepreneurs and firms led the transformation of these inventions into components of a national and global network of networks, and were early adopters of new applications (see Mowery and Simcoe, 2002, on which this discussion draws). The DoD played a critical role in funding the development and diffusion of early versions of the technology in the United States. During the early 1960s, several researchers, including Leonard Kleinrock at MIT and Paul Baran of RAND, developed various aspects of the theory of packet switching. The work of Baran, Kleinrock, and others led the US Department of Defense Advanced Research Projects Agency (DARPA) to fund the construction of a prototype network. The resulting ARPANET is widely recognized as the earliest forerunner of the Internet (National Research Council (NRC), 1999, Chapter 7). By 1975, as universities and other major defense research sites were linked to the network, ARPANET had grown to more than 100 nodes. US dominance in computer networking did not result from a first-mover advantage in the invention or even the early development of a packet-switched network. French and British computer scientists also contributed important technical advances to packet-switching and computer-networking technologies and protocols during this period, and publicly supported prototype computer networks were established in both France and the UK by the early 1970s. But the ARPANET’s size and its inclusion of a diverse array of institutions as members distinguished it from its British and French counterparts, and accelerated the development of supporting technologies and applications. In addition to their size, the structure of these substantial federal R&D investments enhanced their effectiveness. In its efforts to encourage exploration of a variety of technical approaches to research priorities, DARPA frequently funded similar projects in several different universities and private R&D laboratories.
Moreover, the DoD’s procurement policy complemented DARPA’s broad-based approach to R&D funding. As had been true of semiconductors, the award by DARPA of development and procurement contracts to small firms such as BBN helped foster entry by new firms into the emerging Internet industry, supporting intense competition and rapid innovation. The cross-national empirical work on the economic effects of defense R&D has yielded a mixed verdict that may reflect the influence of the very large US programs in this area for the period covered by these studies. But the high level of aggregation at which most of this work has been undertaken means that we do not understand the causal relationships that underpin the empirical results, and this lack of illumination contributes to a much broader failure to understand the overall relationship between publicly and privately funded R&D and innovation (see David and Hall, 2000; David et al., 2000 for a more detailed discussion).