High Technology and the Economics of Standardization

Robin Cowan

When we observe an economy in the midst of an extended period of rapid and far-reaching technical change, we can usually identify episodes of disruption and episodes of stabilization. The technological system is disrupted as new technologies and techniques appear, and common ways of performing tasks are no longer acceptable or profitable. Users perceive the system losing its coherence as new techniques, demanding new skills and organizational structures, exist side by side with old, familiar ones. Social structures are strained and disrupted as the adoption of new technologies forces changes in labour patterns and practices, and introduces new incompatibilities and bottlenecks. By contrast, episodes of stabilization build coherence into the technological system–coherence both among the new technologies, and between new and old technologies. As the system matures, particular technologies evolve less rapidly. Users of these technologies are then able to adapt other parts of the system to overcome the bottlenecks and incompatibilities created by these technical changes. An integral part of this stabilizing process is technical standardization, which takes place either through an entire market adopting a single technology, or through modification and adaptation of existing technologies. As bottlenecks are alleviated, and standardization occurs, the strains on the economic and social systems in which the technologies are embedded are reduced, and they too become more stable.

The stabilization that takes place, and the technological standardization that is an integral part of it, are both results of a complex web of social, political and economic forces. A thorough understanding of the process would involve knowledge of all of these types of forces, as well as the ways they affect and are affected by each other. This paper will not attempt such an analysis. Rather, it will restrict attention entirely to the economic forces that act to produce technological standardization. In recent years economists have developed a relatively general theoretical framework for studying features of the standardization process. This framework focuses on two types of forces, both of which, I will argue, have become stronger and more pervasive as a change in the technological system spreads through the economy. This implies that the benefits of standardization are currently more widespread throughout the economy than they have been before, so once a standard has emerged, it is more difficult to change than has been the case in the past.

Technical standardization is closely related to technology choice. In both situations there are several possible ways to perform a task, and the point of analysis is how one of the candidates comes to dominate the choices of the users. In both cases, there are typically benefits (and costs) of various types and various magnitudes that accrue when all users choose to perform the task the same way. This amount of similarity between standardization and technology choice is enough that insights gained from the study of one problem are often applicable to the other. It is also enough, though, to raise the possibility of terminological confusion. In the literature "standard" is used to mean both "some well-specified way of performing a particular task, which may or may not be being used", and "the way everyone performs that task, whether by habit or by edict." I will use "standard" in the former sense. In this sense, standards bear strong similarities, though with some differences, to technologies.

Two Modes of Standardization

Standardization takes place in two different ways: market exclusion, and joint modification. Market exclusion refers to a process in which initially several distinct technologies or standards are available in the market, but as time passes and the market evolves, the market share of one of the technologies increases and approaches 100 percent. The other technologies are effectively excluded from the market, and standardization has taken place. This has happened in the home refrigerator market, where gas and electric refrigerators were introduced at about the same time, but the gas technology has been eliminated from the market (Cowan 1983). It has also effectively happened in the nuclear power reactor market where, outside Canada, gas-graphite and heavy-water technologies have been excluded from the market by the dominant light-water technology (Cowan 1990). When standardization occurs through market exclusion, there are no degrees of standardization or compatibility; standardization either takes place, with all but one technology driven from the market, or it does not, and several incompatible technologies share the market.

By contrast, when standardization takes place through joint modification, degrees of standardization or compatibility are admitted. In this process more than one technology survives in the market, but users of different technologies desire interconnectivity. This leads either to modification of the technologies, or to the development of gateways which allow interconnectivity through a translating technology. In either case several technologies survive, but interconnectivity is achieved nonetheless, though at the cost of technical modification. Clearly, the degree to which they are modified, or the degree to which the gateway technology is effective, will determine exactly how interconnected the technologies become. It may not be cost effective to develop technological solutions to implement every possible aspect of interconnection. Thus degrees of standardization can emerge.

Joint modification (including the development of gateway technologies) only takes place when several technologies survive and users desire some form of communication or connection among them. In this situation, users have two choices: they can either modify their technologies, or they can agree (tacitly perhaps) on a standard technology, requiring that those not already using it switch. Joint modification will take place if the cost of modifying the technologies is less than the cost of this switching. Costs of modification include the development and installation of the technical changes, whereas switching costs include costs of acquisition of new physical and human capital as well as the loss of any function that was unique to the abandoned technology. At the global level, switching costs will exceed modification costs if there is a large installed base of users, if the cost of capital acquisition is very high, or if the current technology is unique in providing a very valuable function. In any of these cases, we observe a group of users who are effectively locked in to a technology because the costs of switching away from their technology are too high to bear. Thus, prior to any standardization by joint modification, we will observe mounting forces of technological lock-in within a group of users, something often associated with market exclusion. We now turn to a discussion of those forces.
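The decision rule just described can be written down compactly. As a minimal formalization (the notation is mine, not the paper's):

```latex
% m        : installed base of the technology whose users would otherwise switch
% D        : development cost of the modification or gateway
% i        : per-user installation cost of the modification
% k_p, k_h : per-user cost of acquiring new physical and human capital
% v        : value of any function unique to the technology that would be abandoned
% Joint modification is chosen when
\underbrace{D + m\,i}_{\text{modification cost}}
\;<\;
\underbrace{m\,(k_p + k_h) + v}_{\text{switching cost}}
```

A large installed base m, expensive capital acquisition (large k_p + k_h), or a highly valued unique function (large v) all push the inequality toward modification; these are exactly the lock-in conditions listed above.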

Market Exclusion

In economics, the theoretical literature on standardization has focused almost completely on market exclusion. It has been concerned with two questions: "Under what conditions will a market standardize?" and, "Given that standardization takes place, will the market necessarily standardize on a good standard, or even the best one?" In short, the answers are that in a wide variety of circumstances the market will lock in to one technology; and that in an equally wide variety of circumstances it need not be the best. There are two distinct types of forces that generate these results: increasing returns to adoption; and the reduction of technological uncertainty.

Increasing Returns to Adoption

Increasing returns to adoption are said to exist when the net benefit to the user of a technology increases as the degree of adoption of that technology increases. The mechanism by which increasing returns to adoption generate market exclusion is straightforward. As adopters appear, they choose among the competing technologies in order to maximize their (net) benefits from the adoption decision. If adopters are homogeneous–they all have the same tastes, the same current stock of physical and human capital, and they all want to perform the same tasks with the technology–then the process is trivial. The first adopter chooses the technology that will maximize his benefits. The second adopter either follows, and is the second adopter of the same technology, or is the first adopter of a different technology. Increasing returns to adoption imply that the second adopter of any technology receives greater benefits than does the first adopter of that technology. Clearly, the second adopter will follow the first, as will all subsequent adopters. Only one technology is ever adopted, namely the one that yields the highest immediate benefits when the adoption process begins. This need not, of course, be the technology that yields the highest global benefits. If, for example, one technology has high initial payoffs but little scope for improvement, and a second technology has slightly lower initial payoffs but considerable scope for improvement, the former will be exclusively adopted. Exclusive adoption of the second technology, by contrast, while making early adopters worse off, would make later adopters better off, and may be globally preferred. (Indeed, if side-payments were possible, all might be made better off.)
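The homogeneous case is simple enough to simulate in a few lines. The following sketch (the payoff parameters are illustrative assumptions, not figures from the paper) reproduces both the lock-in and its potential inefficiency:

```python
# Sequential adoption under increasing returns, with homogeneous adopters
# who maximize their immediate payoff. All parameter values are illustrative.

N = 100  # number of sequential adopters

def payoff(initial, slope, n):
    """Payoff to the (n+1)-th adopter of a technology with n prior adopters."""
    return initial + slope * n

techs = {
    "A": {"initial": 10.0, "slope": 0.1},  # high initial payoff, little scope for improvement
    "B": {"initial": 9.0,  "slope": 0.5},  # lower initial payoff, steep improvement
}

installed = {name: 0 for name in techs}
total = 0.0
for _ in range(N):
    # each adopter picks the technology with the highest immediate payoff
    best = max(techs, key=lambda t: payoff(techs[t]["initial"], techs[t]["slope"], installed[t]))
    total += payoff(techs[best]["initial"], techs[best]["slope"], installed[best])
    installed[best] += 1

print(installed)  # {'A': 100, 'B': 0}: the market locks in to A
print(total)      # 1495.0 under lock-in to A ...
print(sum(payoff(9.0, 0.5, n) for n in range(N)))  # ... versus 3375.0 if all had adopted B
```

Here exclusive adoption of B would have more than doubled total benefits, yet no individual adopter ever has an incentive to choose it.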

The simplicity of this result stems partly from the assumed homogeneity of the adopters. But adopters can be heterogeneous in their tastes, their capital stocks, their skills, or the tasks to which they wish to put the technology. When this is the case the process is more complex, and takes longer to work itself out. Inherent preferences over the various technologies will differ from agent to agent, as different technologies are better or worse suited to their needs and circumstances. Early in the process, before any technology has built up a large installed base, agents will choose on the basis of their inherent preferences. Many of the available technologies will be adopted as adopters with different tastes make their choices. Simply from the order of adoptions, though, one of the technologies will eventually gain a lead in installed base. At this point, differences in the degree to which increasing returns can be captured emerge, and begin to figure in adopters' decisions. Adopters with weak inherent preferences now choose the heavily-adopted technology (over their inherent favourites), and these choices reinforce its position. Adopters with somewhat stronger inherent preferences will then be attracted by the (even stronger) increasing returns. If the inherent preferences of adopters are not wildly different, and if increasing returns to adoption continue to operate after many adoptions, increasing returns will come to dominate the decision-making criteria, and market exclusion will occur. Again, there is no guarantee that the technology preferred by most adopters will dominate. If early adopters are large and have peculiar needs or tastes, they may force all subsequent adopters to follow their lead. This has happened in nuclear power technology. The earliest and biggest adopter (in the West) was the U.S. Navy. It needed a compact, reliable reactor quickly, and chose light water, sponsoring considerable development work on that technology. Now 80 percent of operating power reactors use descendants of this technology, in spite of much evidence that it is inferior to others. (See Cowan 1990.)
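The role of arrival order can be seen in a similarly stylised sketch, in the spirit of Arthur (1989). The two agent types, their payoffs and the network term are all illustrative assumptions:

```python
# Path dependence with heterogeneous adopters: R-agents inherently prefer
# technology A, S-agents prefer B, and both also value installed base.
# Arrival order is random, so either technology can become the standard.
import random

def run(seed, n_adopters=2000, network=0.05):
    random.seed(seed)
    base = {"A": 0, "B": 0}
    # inherent payoff to (R-agent, S-agent) from each technology -- assumed values
    inherent = {"A": (1.0, 0.8), "B": (0.8, 1.0)}
    for _ in range(n_adopters):
        agent = random.randrange(2)  # 0 = R-type, 1 = S-type, equally likely
        choice = max("AB", key=lambda t: inherent[t][agent] + network * base[t])
        base[choice] += 1
    return max(base, key=base.get)

winners = [run(seed) for seed in range(100)]
print(winners.count("A"), winners.count("B"))
# both outcomes occur across seeds: once one technology's lead in installed
# base outweighs the 0.2 inherent-preference gap, both agent types follow it
```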

There are several sources of increasing returns to adoption. Not all of them apply to every technology, but all act to increase the benefits, or reduce the costs, of adopting a technology as that technology is more heavily adopted.

Learning by doing refers to the decline in unit costs as the number of units produced increases. As production experience accumulates, how to organize production efficiently comes to be better understood, and production costs fall. Thus the (social) cost of later adoptions of any particular technology is lower than that of early adoptions.

Learning by using refers to learning about how to improve the design and use of a technology. Early designs are typically far from optimal, as designers do not know exactly what the capabilities of the new technology are, or exactly the uses to which it will be put. As adopters use the technology, they communicate with the manufacturers, and the design evolves to increase the benefits from its use.

Increasing returns to scale are said to exist if the cost of production falls as the scale of production increases. If, for example, the production technology has very high fixed costs, then spreading these costs over a larger number of units produced will lower the average cost of production.

Network externalities differ from the previous sources of increasing returns in that capturing them requires using a technology that many other people are currently using. Many technologies are useful in their own right, but increase in value as other people use them, "joining the network." As more people subscribe to the same electronic mail network, it becomes more valuable to every user, because each can communicate (share information) with more people.

The base unit/peripheral technological structure is a source of increasing returns not completely distinct from network externalities. The idea here is that to be useful computers need software, cameras need film, and pens need ink. If many people use the same type of computer, camera or pen, then the markets for software, film and ink will be thick–there will be many buyers and sellers. A thick market for peripheral goods implies low transactions costs and, generally, much variety.

The presence of any of these features in a technology is a source of increasing returns to adoption, and will act to encourage standardization by market exclusion. The more agents use a technology, the greater the benefit from using it, and so the stronger the incentives for other agents to adopt it.

The second force acting to produce market exclusion is the reduction of uncertainty about the properties and relative merits of competing standards or technologies.

Uncertainty

Typically, competitions for market share take place when technologies are relatively new. This is the time, of course, when there is the most uncertainty about them–about their characteristics, the functions they will perform, and the functions users would like them to perform. Early adoptions take place within this uncertainty, but, through the information they provide, contribute substantially to the formation of beliefs about all of these issues. Reduction of uncertainty through strengthening the beliefs of future adopters is enough to generate market exclusion.

Consider a very stylised characterization of the adoption process, in which the adoption of a technology yields information about the characteristics of that technology but not of any other. (Of course even this will yield information about the relative merits of the various technologies.) Agents adopt the technology that they believe will yield the highest net benefits in expected value. As different agents adopt different technologies, observations are made on the performances of the technologies, and beliefs about their relative merits are updated, becoming firmer and firmer. Further, these beliefs are thought by the adopters to be becoming more accurate, and so uncertainty is thought to decline. Eventually opinions will be strong that one technology is superior to the others. Experimentation with other technologies will stop; opinions about them can never change, while opinions about the much-used technology become stronger and stronger. Thus one technology is selected simply through beliefs converging on the opinion that it is best. This opinion can be erroneous, of course. If a good standard has bad luck early on–perhaps it was badly implemented, or perhaps it is a technology with low initial payoffs but a steep learning curve–opinion forms that it is bad, and so it is set aside while an inferior one is taken up. If the latter has reasonably good luck, agents will stick with it. As the good one is left idle, it has no way to prove its superiority: opinions about it do not change, and there is never incentive to switch to it.
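This belief dynamic can be sketched as a two-armed bandit in which each adopter greedily picks the believed-best technology, and only the adopted technology generates new observations (all parameters are illustrative assumptions):

```python
# Greedy adoption under uncertainty: beliefs are updated only about the
# technology actually adopted, so an unlucky start can permanently
# sideline the better technology. Parameter values are illustrative.
import random

def run(seed, rounds=500):
    random.seed(seed)
    true_mean = {"A": 1.0, "B": 1.2}  # B is actually the better technology
    est = {"A": 1.1, "B": 1.1}        # common, weak initial beliefs
    count = {"A": 1, "B": 1}          # the prior counts as one observation
    for _ in range(rounds):
        choice = max(est, key=est.get)               # adopt the believed-best
        obs = random.gauss(true_mean[choice], 1.0)   # noisy payoff observation
        count[choice] += 1
        est[choice] += (obs - est[choice]) / count[choice]  # update mean belief
    return max(est, key=est.get)

outcomes = [run(seed) for seed in range(1000)]
print(outcomes.count("A") / len(outcomes))
# a non-trivial fraction of runs ends locked in to the inferior technology A:
# once B is believed worse it is never adopted, so the belief never corrects
```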

This process will take place in much more general settings than the one just described. The essence, though, remains the same: adopters have beliefs about which of the technologies is best, but when the technologies first emerge these beliefs are relatively weak. Early adoptions provide information with which beliefs are updated, and as the process proceeds opinions gain strength. Eventually beliefs that one technology is superior are strong enough that it is exclusively adopted.

If adopters are homogeneous this process guarantees market exclusion, and the dominance or survival of a single technology. It does not necessarily do so if adopters are heterogeneous, however. Even if all adopters’ opinions about what each technology will do (and how profitably it will do it) converge, this is consistent with adopters having different, strong opinions about which is superior for them. If this occurs, several technologies will co-exist and market exclusion will not take place. The presence of increasing returns to adoption would make co-existence less likely, of course, as preferences of adopters would have to be sufficiently different to resist being overwhelmed by the growth of increasing returns in other technologies.

It is worth pointing out that when these two characteristics, increasing returns and technological uncertainty, occur together, technology policy, aimed at maximizing discounted net benefits from adoption and use of the technologies, becomes very difficult. Without uncertainty about the future payoff streams of the competing technologies, a policy maker, knowing that market exclusion will eventually take place, can simply make a decision at the outset regarding which technology, as the dominant one, will maximize social welfare. The policy maker then simply forces its adoption. (See Cowan 1991.) Without increasing returns, there is a difference between the optimal policy and the policy that the market implicitly follows (see below). This difference is typically small, though, and it is likely that the cost of implementing the optimal policy exceeds the gains from it. (See Gittins and Jones 1979.) When increasing returns to adoption and technological uncertainty both exist, the optimal policy can be very different from the market "policy", but the policy maker faces severe problems. (This issue has been discussed by David (1987) and Cowan (1991).) Increasing returns imply that once the market has "chosen a path" it becomes increasingly difficult to change that path, so policy is most effective early in the process. This is the time, though, at which uncertainty is greatest and the probability of making errors is highest. Policy making becomes fraught with difficulties.

Information Technology and High Technology

Technologists frequently observe that we are currently undergoing a shift of what Perez (1983) refers to as the techno-economic paradigm, from one centred around Fordist mass production based on cheap energy, to one centred around high technology based on advances in micro-electronics (Freeman and Perez 1988). From the point of view of the economics of standardization and technical choice this shift is important in two ways: first simply as a shift; and second as a shift to high technology.

Any large scale shift from one type of technology to another is accompanied by a significant increase in technological uncertainty. Making minor changes to existing products or processes involves little uncertainty–it is typically clear how to do it and how well it will work. Developing an entirely new product or process, by contrast, is subject to considerable uncertainty, both about how best to do it, and about whether or not it can be done profitably. In a shift of techno-economic paradigm, a new technology type presents considerable potential for the introduction of new products and processes throughout the economy. Thus in the same time period, many industries are confronted with this uncertainty, and at a global level the ability to predict the future of the technological trajectory is dramatically reduced. Because every technology is embedded in a system, increased uncertainty about the future of the technological system implies increased difficulty in assessing the value of any particular technology. A further difficulty arises because we have little experience on which to base judgements about how any particular technology of the new type will itself evolve.

For older technologies we believe we have reasonably good information about, for example, learning curves (the "1/3 law" states that unit labour costs decline as L = aN^(-1/3), where L is unit labour cost and N is the number of units produced), and about scale effects (the "2/3 rule", for example, observes that production costs are proportional to surface area, whereas output is proportional to volume). (See Scherer and Ross (1990), p. 98.) These generalizations follow years of empirical studies on existing technologies. The novelty of high technology (relative to previous technologies) and the variety of uses to which it is put demand, at the very least, considerable caution in applying laws and rules about learning and scale that have been generated from retrospective work. All of this, of course, can be encapsulated as uncertainty about learning curves, and so uncertainty about payoffs.
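Spelling the two rules out (a restatement of the generalizations just cited, with the doubling arithmetic made explicit):

```latex
% "1/3 law": unit labour cost after N cumulative units
L = aN^{-1/3}
\quad\Longrightarrow\quad
\frac{L(2N)}{L(N)} = 2^{-1/3} \approx 0.79,
% i.e. each doubling of cumulative output cuts unit labour cost by roughly 20 percent.

% "2/3 rule": construction cost scales with surface area while output scales
% with volume, so for a plant of capacity V,
C \propto V^{2/3}
\quad\Longrightarrow\quad
\frac{C(V)}{V} \propto V^{-1/3},
% i.e. unit capital cost falls steadily with the scale of the plant.
```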

What is important from the point of view of standard setting, whether it is done by the market, a policy maker or a committee, are the relative merits of the competing technologies or standards. In a new technological paradigm, problems arise in making these comparisons. To compare two technologies, one needs to know three things: the functions one would like the technologies to perform; the properties the technologies have; and finally, how well these properties fit with the functions to be performed. Problems arise immediately, of course, if there is more than one function to be performed, as it will be necessary to weight the importance of each function. Only after such a weighting has been established will it be possible to compare technologies that differ in the effectiveness with which they perform different functions. This problem exists in many contexts in which one is attempting to generate a scalar measure of a multi-dimensional object. It is probably not fatal. More serious problems arise, however, when one considers that the technologies under consideration are changing rapidly. It may be easy to ascertain the properties that the technologies have now, but very difficult to predict what properties they will have in the future. Technology choice, or standard setting, is dynamic in the sense that there is concern not only with how today’s technologies will perform the desired functions, but also with how tomorrow’s technologies will. This creates difficulties because today’s choices will affect the nature of tomorrow’s technology. When technologies are evolving rapidly, it is difficult to see how any of the technologies would evolve if it were chosen as the standard. Thus an important aspect of the evaluation of a technology is made very difficult by the large scope for technological development. The problem is somewhat less severe when dealing with standards as opposed to technologies, as a standard says nothing directly about the underlying technology; any black box that complies with the standard will do. It is likely to be the case, though, that technologies that are not efficient in complying with the standard will be less heavily adopted than those that are. Furthermore, the very existence of the standard (if it is effective) rules out technological developments that do not comply. Standard setting, like technology choice, can rule out some technological developments and encourage others. Inability to predict how technologies might evolve makes this aspect of evaluation very difficult.

More important than the difficulties of predicting the evolution of these technologies' properties is the problem of predicting the functions we will want them to perform. Because any technology or standard is part of a technical-social-economic system, the functions we try to perform with it will be determined in large part by the state of the larger system. The degree to which, for example, we wish to use railroads to move people (as opposed to goods) will be determined not only by the state of road and air travel technology, but also by the degree to which people wish to travel, which will no doubt be affected by, among other things, their wealth, the amount of time at their disposal, and perhaps the propensity of family members to migrate away from each other. Clearly, given the path dependency of the system, predicting the technological trajectory of a new techno-economic paradigm is not really feasible. Thus even if we know exactly what we want the technologies to do now, and know precisely what their capabilities are today, and so have, apparently, a good idea of their relative merits, the instability of the surrounding technological complex must make these judgements very tentative.

Increased uncertainty implies that standardization, or technological lock-in, when left to the market, will take place more quickly. When agents are very uncertain about which of two options is better, the observation of a single adoption can have a large impact on their beliefs. Each observation contains a large amount of information, and so, when beliefs are updated, causes a large shift in perceptions. Thus when there is a large amount of uncertainty, a small number of observations can cause the perceived relative merits of the technologies to differ substantially. But with observation comes reduced uncertainty (the variance of the prior falls), and so each adoption will have a smaller impact on beliefs than did its predecessors, making it difficult to overcome the impressions made by early experience. If early experience indicates that one technology is superior, it quickly becomes difficult to change this belief.
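The mechanics here are standard Bayesian algebra. For instance, with normal beliefs about a technology's mean payoff (a textbook illustration, not a model from the paper):

```latex
% Prior on the mean payoff: \mu \sim N(\mu_0, \tau^2); observations have
% known variance \sigma^2. After n observations with sample mean \bar{x}_n,
\mu_n = \frac{\tau^{-2}\mu_0 + n\sigma^{-2}\bar{x}_n}{\tau^{-2} + n\sigma^{-2}},
\qquad
\mathrm{Var}(\mu_n) = \frac{1}{\tau^{-2} + n\sigma^{-2}}.
% With a diffuse prior (\tau large), the first observation moves the belief
% almost one-for-one, while the (n+1)-th shifts it by only about 1/(n+1)
% of its surprise: posterior variance, and hence the impact of news, falls in n.
```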

From this observation about the effects of increased uncertainty follows another. In situations of technological uncertainty, particularly when accompanied by increasing returns to adoption, the market will undersupply experimentation and so will standardize too quickly. When there is uncertainty about the relative merits of technologies, two effects follow from every adoption: there is an immediate payoff to the adopter; and, through observing that adopter's experience, information is generated which is used to update beliefs about the value of that technology. (If increasing returns are present, a third effect, namely that the technology captures further returns, also exists.) Trying a technology "just to see what it is like" has social value because of the information it produces. This information allows future adopters to make better-informed decisions. But this benefit does not accrue to a private adopter, so the adopter has no incentive to take it into account when deciding which technology to adopt. There is, then, a divergence between the path followed by a market and the path that a policy maker would like to follow. The market of adopters will undersupply technological experimentation; it will discard technologies too rapidly, after cursory (if any) inspection, and so will standardize too early.
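The divergence can be made concrete by extending the bandit sketch above with a short period of mandated experimentation (parameters again illustrative):

```python
# Greedy "market" adoption versus a policy that forces some early trials of
# each technology before adopters choose freely. Builds on the sketch above.
import random

def run(seed, forced_trials=0):
    random.seed(seed)
    true_mean = {"A": 1.0, "B": 1.2}   # B is actually better (assumed values)
    est = {"A": 1.1, "B": 1.1}
    count = {"A": 1, "B": 1}
    schedule = ["A", "B"] * forced_trials  # mandated early experimentation
    for t in range(500):
        choice = schedule[t] if t < len(schedule) else max(est, key=est.get)
        obs = random.gauss(true_mean[choice], 1.0)
        count[choice] += 1
        est[choice] += (obs - est[choice]) / count[choice]
    return max(est, key=est.get)

for forced in (0, 10, 30):
    hits = sum(run(seed, forced) == "B" for seed in range(1000))
    print(forced, hits / 1000)
# the probability of standardizing on the better technology rises with the
# amount of mandated experimentation -- experimentation the market undersupplies
```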

There is a second important effect of technological uncertainty which is more static in nature. Given a particular goal, users of a new technology may be unsure how to make it achieve that goal efficiently. This is a situation in which information networks can be very important, as users share knowledge about how a technology works, what its strengths, weaknesses and quirks are. This is clearly visible in the micro-computer market, as a tremendous amount of information is passed among people in nearby offices. It is also seen when the technologies are large and potentially dangerous–if something goes wrong, the existence of other users who may have had similar problems is an important form of insurance. (See Cowan (1990) p. 552.) Even if the technology is one that operates solely in isolation, with no interconnectivity with other technologies, uncertainty about its properties and about how it functions can be an important source of network externalities.

Both of these features, increased uncertainty and the formation of information networks, accompany any new and novel technological change. At the time of a paradigm shift, they can be observed throughout the economy, and so there is an economy-wide increase in the forces driving standardization by market exclusion. While this is true of any paradigm shift, the fact that the new paradigm is centred on high technology, based on the micro-chip, may itself contribute to these forces. The growing presence of the new information technologies is particularly important.

Information Technology, High Technology and Market Exclusion

The new technologies have several features that differentiate them from older technologies, and several of these features suggest an increase in the degree and prevalence of increasing returns to adoption.

Cost Structure

One pertinent feature of high technology, and of the new information technology in particular, is that it tends to have a cost structure different from that of older industries. There tend to be very high fixed costs, and relatively low variable costs in production. High fixed costs have two sources: high research and development costs; and capital-intensive production processes.

Table 1. Leading Industries by R&D Expenditures

Canada                                        R&D/Sales (%)
  Manufacturing
    Telecommunications equipment                   17
    Aircraft and parts                             17
    Other electronic equipment                     12
    Electronic parts and components                 6
    Pharmaceuticals                                 3
  Services
    Computer services                              12
    Engineering and scientific services            12
    Electric power                                  1

United Kingdom                                R&D/Value Added (%)
  Aerospace                                        29
  Office machinery and EDP equipment               25
  Pharmaceuticals                                  24
  Electrical and electronic                        17
    (radio and electronic capital goods)           51
  Motor vehicles                                    7

United States                                 R&D/Sales (%)
  Health care (a)                                   8.6
  Office equipment and services (b)                 8.1
  Electrical and electronic                         5.4
    (semi-conductors)                               9.3
  Leisure goods                                     4.9
  Telecommunications                                4.7
  Average of all industries                         3.4

Notes: (a) Health care includes pharmaceuticals.
       (b) Office equipment includes computers and related industries.

Sources: Statistics Canada (1987), Soete (1987) and Business Week (1990). Differences in orders of magnitude arise from the different definitions used.

Microchip-based industries, and the information technology industry in particular, typically account for a very large share of total industrial research and development expenditure in an economy. In Canada in 1987, for example, telecommunications equipment, business machines and computer services together accounted for 29 percent of all industrial R&D expenditures (Statistics Canada, 1987). (The figure rises to 41 percent if the aircraft and parts industry is included.) In the U.K., electronics, telecommunications and computers comprise about 30 percent of the total (Soete, 1987). These figures, of course, could simply be the effect of rapid technical change, rather than of any particular inherent R&D intensity in the products or processes of the industry (i.e. generating many low-R&D-intensity products or processes would produce a high total R&D expenditure). A measure that takes this possibility into account is research and development spending as a ratio of total sales or value added. Looking at Canada, the U.K. and the U.S., we get a very consistent picture. Aerospace, telecommunications, electronic equipment and components, and pharmaceuticals are all much more R&D intensive (by this measure) than are other industries. Table 1 shows these industries, along with the industry ranked closest behind them.

One indication of the capital intensity of an industry is its minimum efficient scale (MES). This is the size of operation (of either plant or firm, depending on what is being measured) beyond which unit costs do not fall significantly. It is often used as a measure of increasing returns to scale. Comprehensive comparisons of different industries indicate that microchip-based industries have relatively large minimum efficient scale. Pratten (1971) estimated that the MES for the production of electronic data processing equipment, and of electronic capital goods in general, was 100 percent of the U.K. market in 1971. This was true of only three of the sectors he studied, airplanes and hardback books being the other two. More recently, the European Economic Commission (1988a) presents less complete data on minimum efficient scale as a proportion of European Community output. Of the 69 industries included in this study, the median MES was about 2 percent of 1983 EC output. By contrast, MES for private branch exchange (PBX) manufacture, televisions and videos was estimated at 4, 9 and 20 percent respectively. Minimum efficient scale for mainframe computers is larger than the U.K. market, and a "very large share of the world market". Similarly, MES for microprocessor manufacture was greater than the 1975 U.K. market. (See EEC 1988a, Table 5.1.) These data are relatively old, especially given the speed of change in these industries, but direct estimates of capital requirements and expenditures indicate that, if anything, high-tech industries have become relatively more capital intense, raising their minimum scale. In the semi-conductor industry, since the late 1960s, both the minimum capital expenditure required for production and capital expenditure as a percent of sales have grown dramatically. The minimum capital requirement grew 32 percent per year between 1967 and 1982; capital expenditures grew similarly rapidly, reaching 21 percent of sales in the early 1980s (OECD, 1985 pp. 38-39). These figures suggest that minimum efficient scale in high-tech industries is very large, and larger than in many older industries, implying that the degree of adoption will have a larger effect on adopters' incentives.

What is striking about these data is how much of a difference there is, in terms of research and development spending, between industries based extensively on the microchip and the vast majority of other industries. With large expenditures on R&D come increasing returns to scale, since R&D need only be done once. Similarly, the initial capital expenditure, which, if large, generates a large minimum efficient scale, need only be made once. Clearly, the larger the number of users, the lower the average costs, as the fixed costs are spread over more users. A consumer, then, has a strong incentive to adopt the technology that is capturing these increasing returns to scale.
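The underlying arithmetic is just the average-cost identity (notation mine):

```latex
% F : fixed cost (R&D plus initial capital); c : constant marginal cost
AC(q) = \frac{F}{q} + c
% AC falls monotonically in output q, so the technology with the larger
% installed base can be produced, and priced, more cheaply.
```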

Learning by Doing

A closely related issue is the degree of learning by doing present in the new technologies. Airframe construction exhibits considerable learning by doing, and is often used as an example of an industry in which the phenomenon is strong. Estimates of the magnitude of learning vary somewhat (often because they are estimates of slightly different things), but there seems to be consensus that doubling output will lower average cost by about 20 percent. Estimates of learning curves for semi-conductors suggest that they are significantly steeper than for airframes, which are themselves thought to be well above average. One estimate suggests that doubling output will reduce average costs by 25 to 30 percent (OECD, 1985 p. 43); another (Golding 1972) found a similar figure. Pratten (1971) reports that to manufacture a prototype of an integrated circuit costs about $500, but that within ten years, if it has gone into production, the cost will be between 50 cents and $1. Again, this type of learning is an important source of increasing returns to adoption, and one that seems to be very strong in industries that rely heavily on the micro-chip.
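These per-doubling figures translate directly into learning-curve exponents, and can be loosely checked against Pratten's prototype-to-production price fall (my arithmetic; the 28 percent figure is an assumed midpoint of the 25 to 30 percent range):

```python
# Convert "r percent cost reduction per doubling of output" into the exponent
# b of the learning curve C(N) = C1 * N**(-b), and check rough consistency
# with a $500 prototype falling to around $1 in production.
import math

def exponent(r):
    """Learning-curve exponent implied by a fractional reduction r per doubling."""
    return -math.log2(1 - r)

print(exponent(0.20))  # ~0.32: the airframe figure, close to the "1/3 law"
print(exponent(0.28))  # ~0.47: the steeper semi-conductor estimates

# doublings needed for a 500-fold cost fall at 28 percent per doubling
doublings = math.log(500) / math.log(1 / (1 - 0.28))
print(doublings, 2 ** doublings)
# ~19 doublings, i.e. cumulative output on the order of half a million units --
# not implausible for an integrated circuit over a decade of production
```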

The Importance of Information

A third important feature of the new technologies is the degree to which they are information dependent. (Here "information" is defined very broadly.) Information is sometimes embedded in the artefact itself (computer ROMs, or the firmware of some numerically controlled machine tools), in which case the technology is subject to increasing returns from high fixed costs, as discussed above. The information is generated only once but is reproduced many times, and appears in every artefact. It is also the case, however, that many of the new technologies do not stand alone, but depend on a secondary input which is, in effect, marketed information (computers need software and data; video games need cassettes; recording devices need media). This is an example of the base unit/peripheral structure where, again, the peripheral (often software) input has large increasing returns in production. The ease with which information, once produced, is reproduced implies strong increasing returns to adoption, operating either on the product directly, or through the market for its peripheral inputs.

Information technology is, by its very nature, subject to network externalities, and there is an increase in the degree to which information technology is being used. The telefax has diffused much more widely than the telex, whose function it has absorbed. The personal computer, while useful in its own right, is becoming a widely used communication tool, and telefax, teletext, and ISDN have appeared as new information technologies. There has been a marked change in the type of information technologies we use, as well as in the size of their potential markets, as IT plays a larger and larger role in many of the tasks we perform. As the nature of information technology is to transfer information among users, being part of a large network is very valuable: not only will there be many people with whom one can communicate, for example through electronic mail, but a large network will also be better served by passive information vendors, for whom providing information to a large network is more profitable than providing it to a small one.

Technological advances that have dramatically lowered the cost of transmitting information have, in effect, forced a globalization of information technology. No longer are networks national or continental; they now span the world. This is intimately connected with the globalization of the market for goods, since the transactions costs of international trade are directly related to the cost of communication. As communication costs fall, it becomes much easier to co-ordinate the movement of goods and financial capital. The demand for standardization in information technology is thus stronger and more wide-spread, implying further that the scope for capturing increasing returns to scale in manufacturing has increased.

This globalization also signals a significant change in the politics of standardization. No longer can a government simply play the role of referee in a competition among local firms to set the national standard. Now, since the gains from standardization are geographically much more widespread, national governments have vested interests in the standardization process. To the extent that a government thinks it important that local IT-producing firms survive and grow in the world market, it will have reason to promote the local standard in the world competition. This may become a difficult political challenge for international standards setting bodies.

These features of high technology, and information technology in particular, are widely recognized. Standardization is commonly thought to be very important in this area, for the reasons discussed above. This presents a problem from the users' point of view, namely the risk of adopting (early) a standard that fails to become dominant. In principle, this is not a terminal problem, since the possibility of switching technologies, or engaging in joint modification, always exists. The costs of either of these options are thought to be very high, however. (One very large component of the cost is the human capital investment that is necessary after the adoption of any new technology or standard.) A similar risk exists on the supply side: because of the very high initial capital and R&D costs of production, a producing firm will face significant losses if its standard fails to dominate, or becomes marginalized. These considerations have led to a certain amount of paralysis as agents wait to see how standards are developing. (ISDN presents a good example of this situation.) As a result there is a demand for before-the-fact standardization.

Before-the-Fact Standardization

Virtually all of the recent economic theory on standardization and technology competitions has been explicated in the context of the market. In this theory technologies exist as artefacts which are stable, well-defined, and available on the market. In a world like this, standardization is effectively choice among existing types of artefact. On the face of it, however, this description does not match very well with many of the standardization processes we see today. Specifically, many standards are set by committee before artefacts embodying those (or other) standards are available on the market. Examples of this type of standardization abound in information technology: OSI, ISDN, telefax and so on. Before-the-fact standardization can happen in different ways, though very often committees of producers and users play an important role. Clearly, a thorough understanding of the process demands careful investigation of the dynamics of these committees and their umbrella bodies such as CCITT and ISO.

As the standardization process shifts away from choice among artefacts it begins more and more to resemble product development. Because artefacts do not exist in large numbers, the candidate standards or technologies under consideration are still relatively fluid when standardization is taking place. They can thus be changed or developed to meet concerns about strengths and weaknesses of the various competitors. This gives the process some important attributes of product development. The workings of the committee will in turn resemble a research and development laboratory: there are many possible "projects" that will achieve the desired end, and the task of the management is to find the best project, and adopt it as the final product. Besen and Saloner (1989) point out that much standard setting fits this description, and that standards committees describe their function as one "in which experts seek the best technical solution..." to the problem of facilitating defined functions (p. 181).

There is another significant parallel between standard setting by committee before the fact and an R&D lab. This is the importance of time. Before-the-fact standardization is seen as desirable because the forces discussed above are perceived as being very strong. The absence of a single standard, and the possible proliferation or co-existence of several incompatible competitors, is seen as fatally damaging to the economic potential of the technology. This type of standardization, then, can be seen as creating a market, and the sooner it happens, the sooner profits can be made in that market (and the less damage will occur due to several incompatible standards having obtained large installed bases). It seems reasonable to suppose that time is valued in these processes, and that in general a standard set sooner is more valuable than one set later. In this case, by re-describing the models slightly, some of the insights of economic theory can be brought to bear.

Abstracting from the game-like activities of committee members, before-the-fact standardization can be thought of in the following way. The goal of the process is to set a single standard, and thereby create a strong and stable market (or perhaps several markets) for the technology under consideration. There is a further goal, however, namely to create a "good" standard, since the better the standard, the more profitable will be market participation. The "goodness" of a standard is difficult to define, but it has at least two qualities. First, because market participants discount the future, a standard that emerges sooner is preferred to one that emerges later; and second, the standard must satisfy various technical requirements. If the competing standards are well-defined and thoroughly developed, then the task of the committee is merely to decide which best satisfies the technical requirements. If, however, they are not thoroughly developed, then "merely" deciding becomes a complex task in technological exploration and development. The committee must allocate its time among the different standards that can be explored. But note that each time a candidate standard is examined, there are two effects: information is generated about how good it is technically; and it is one step closer to being thoroughly explored (and found acceptable).

This is the description of an increasing-returns, path-dependent process. As one standard gets ahead, in terms of providing solutions to technical difficulties and being thoroughly explored, there will be a tendency to stick with it. This draws attention to the importance of early, possibly small, key events. Defining the desired technical functions of the final standard (the definition of "good" referred to above), particular technical choices (that the facsimile technology should employ the telephone delivery system, for example), or even simply the decision of which part of the problem to address first, can tilt the process in favour of one or another candidate. Any of these apparently innocuous events will have an effect on the perceived relative merits of the candidate standards. Because learning about a technology is cumulative, it is relatively easy for a candidate standard to maintain an early lead in terms of being developed to satisfy the technical requirements. If time is important, this gives decision makers a reason to continue working on that standard.

It is now clear how this relates to the discussion about decreasing technological uncertainty as a force generating standardization. Early in the process there may be many competing standards under consideration. Many will be discarded early on (perhaps even before the formal committee meets) as simply unable to satisfy some important criterion. The serious contenders are explored, and as exploration proceeds, estimates of how good they are change (both absolutely and relatively). Because time is an issue, though, we would expect to see a shifting of resources as opinions change. Thus a candidate that appears good early on will be further explored and developed. The closer it gets to completion, the more difficult it becomes to switch attention to a competitor, as that would mean putting off the date at which the standard is finally set. As was pointed out above, there is no guarantee that this sort of process will generate the best solution, or even a good one. Early decisions can lock the process onto sub-optimal technological paths.
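This committee dynamic can be sketched in the same bandit spirit as the market process above: each period the committee works on one candidate, receiving a noisy quality signal and a step of progress, and discounting makes the current leader hard to dethrone. The structure and all numbers are illustrative assumptions:

```python
# Committee standard setting as discounted search over candidate standards.
# Working on a candidate yields a noisy quality estimate and one step of
# progress toward completion; the first candidate completed becomes the standard.
import random

def run(seed, steps_needed=20, discount=0.95):
    random.seed(seed)
    quality = {"X": 1.0, "Y": 1.2}   # Y is technically the better candidate
    est = {"X": 1.1, "Y": 1.1}       # weak initial opinions
    progress = {"X": 0, "Y": 0}
    n = {"X": 1, "Y": 1}
    while True:
        # value of a candidate: estimated quality, discounted by the time
        # still needed to finish exploring and developing it
        value = {s: est[s] * discount ** (steps_needed - progress[s]) for s in est}
        s = max(value, key=value.get)
        obs = random.gauss(quality[s], 0.5)
        n[s] += 1
        est[s] += (obs - est[s]) / n[s]
        progress[s] += 1
        if progress[s] >= steps_needed:
            return s  # this candidate is adopted as the standard

picks = [run(seed) for seed in range(1000)]
print(picks.count("Y") / 1000)
# noticeably below 1.0: a candidate that looks good early accumulates progress,
# and discounting then makes switching to a late-blooming rival too costly
```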

Conclusions

Standardization by market exclusion typically takes place early in the life cycle of a technology. When a technology appears in several variants, there is frequently a competition among them before one emerges to dominate the market. The longer this takes, the more difficult it becomes for a single, final standard to emerge, as too many of the variants have large bases of committed users. Several forces combine to drive the standardization process, and these forces are strongest and most pervasive during a shift of technological paradigm. This is particularly true of the current shift towards high tech. The rapid emergence and diffusion of new and novel technologies creates scope for capturing increasing returns; it also raises the level of technological uncertainty. These combine to increase the gains from standardization, so the market forces underlying the process will be very strong. Recent theory indicates that generally a standard will emerge, but that there is a real possibility that it will not be the best possible.

Occasionally, technological uncertainty and increasing returns to adoption are so strong that the market is stalled, since without a strong standard market participation is too risky to be profitable. In this situation, committees of producers and users form in order to set a standard before the technology reaches the market, and so facilitate its diffusion. To the extent that the committee process can be characterized as the search for the best technical solution to a problem, theory indicates that a standard will emerge, but that, again, it may not in fact be the best possible. In both the market and committee processes, a technology that appears good early in the process can, simply with that head-start, become the final standard.

Unfortunately, it is when the gains from standardization are largest that the process, whether market or committee, is most likely to make a mistake. Technological uncertainty makes it very difficult to tell which standard should be preferred, and strong increasing returns imply that initial choices are very important in determining the final outcome. Policy is very difficult in these situations, since even the most knowledgeable policy maker is prone to being locked in by initial decisions. It is the case, though, that if several technologies appear on the market, and the market makes the standardization choice, it will experiment less than is optimal, and will standardize too quickly. Policy, then, should be aimed at slowing the process down, in order to reduce the likelihood of backing the wrong horse.

Bibliography

Arthur, W. Brian (1988). "Competing Technologies: An Overview" in Technical Change and Economic Theory G. Dosi, C. Freeman, R. Nelson, G. Silverberg and L. Soete eds. London, Pinter.

Arthur, W. Brian (1989). "Competing Technologies, Increasing Returns, and Lock-In by Historical Events" Economic Journal, Vol. 99, Mar.

Besen, S. and G. Saloner (1989). "Compatibility Standards and the Market for Telecommunication Services", in R.W. Crandall and K. Flamm eds. Changing the Rules: Technological Change, International Competition, and Regulation in Communications Washington, D.C., Brookings Institution.

Boyer, R. and J. Mistral (1983). Accumulation, Inflation, Crises, Paris, P.U.F. second edition.

Business Week (1990). "R&D Scorecard", June 15.

Cowan, R. (1990). "Nuclear Power Reactors: A Study in Technological Lock-in" Journal of Economic History, Vol. 50, Sept.

Cowan, R. (1990a). "Standards Policy and Information Technology", paper commissioned by the Committee for Information, Computer and Communications Policy, Organization for Economic Cooperation and Development, Paris.

Cowan, R. (1991). "Tortoises and Hares: Choice Among Technologies of Unknown Merit", Economic Journal, Vol. 101, July.

Cowan, Ruth S. (1983). More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave, New York, Basic Books.

David, Paul A. (1987). "Some New Standards for the Economics of Standardization in the Information Age" in Dasgupta, P. and P. Stoneman eds. Economic Policy and Technological Performance. Cambridge University Press.

David, Paul A. and Shane Greenstein, (1990). "The Economics of Compatibility Standards: An Introduction to Recent Research", Economics of Innovation and New Technology Vol. 1.

Dosi, G. and L. Orsenigo (1988). "Coordination and Transformation: An Overview of Structures, Behaviours and Change in Evolutionary Environments", in Technical Change and Economic Theory G. Dosi, C. Freeman, R. Nelson, G. Silverberg and L. Soete eds. London, Pinter.

Dosi, G. (1982). "Technological Paradigms and Technological Trajectories–A Suggested Interpretation of the Determinants and Directions of Technical Change", Research Policy Vol. 11(3).

European Economic Commission (EEC) (1988a). Research on the ‘Cost of Non-Europe’, Basic Findings, Vol. 2, Studies on the Economies of Integration, Luxembourg, Commission of the European Communities.

European Economic Commission (EEC) (1988b). Research on the ‘Cost of Non-Europe’, Basic Findings, Vol. 10, Telecommunications Equipment and Services, Luxembourg, Commission of the European Communities.

Foray, Dominique (1989). "Les Modèles de compétition technologique: Une revue de la littérature" in Revue d’Économie Industrielle Vol. 48.

Freeman C. and C. Perez (1988). "Structural Crises of Adjustment, Business Cycles and Investment Behaviour", in Technical Change and Economic Theory G. Dosi, C. Freeman, R. Nelson, G. Silverberg and L. Soete eds. London, Pinter.

Freeman, C. and L. Soete eds (1987). Technical Change and Full Employment, Basil Blackwell, Oxford.

Gittins, J.C. and D.M. Jones (1979). "A Dynamic Allocation Index for the Discounted Multiarmed Bandit Problem", Biometrika, Vol. 66.

Golding, A.M. (1972). "The Semi-Conductor Industry in Britain and the United States: A Case Study in Innovation, Growth and the Diffusion of Technology" D.Phil. Thesis, University of Sussex.

Katz, Michael L. and Carl Shapiro (1985). "Network Externalities, Competition and Compatibility", American Economic Review, Vol. 75.

Klepper, G. (1990). "Entry Into the Market for Large Transport Aircraft", European Economic Review, Vol. 34, pp. 775-803.

Nelson, R.R. and S.G. Winter (1977). "In Search of a Useful Theory of Innovation", Research Policy, Vol. 6, pp. 36-76.

Organization for Economic Cooperation and Development (1985). The Semi-Conductor Industry: Trade Related Issues.

Perez, C. (1983). "Structural Change and the Assimilation of New Technologies in the Economic and Social System", Futures, Vol. 15(4) October, pp. 357-75.

Plowden Report (1965). Report of the Committee of Inquiry into the Aircraft Industry, Cmnd. 2853, HMSO.

Pratten, C.F. (1971). Economies of Scale in Manufacturing, Cambridge University Press, Cambridge.

Rosenberg, Nathan (1982). Inside the Black Box: Technology and Economics, Cambridge University Press, Cambridge.

Scherer, F.M. and D. Ross, (1990). Industrial Market Structure and Economic Performance, Third Edition, Houghton Mifflin Company, Boston.

Soete, L. (1987). "The Newly Emerging Information Technology Sector", in C. Freeman and L. Soete eds. Technical Change and Full Employment, Basil Blackwell, Oxford.

Statistics Canada (1987). Industrial Research and Development Statistics. Ottawa.

Stoneman, P. (1976). Technological Diffusion and the Computer Revolution: The UK Experience. Cambridge University Press, Cambridge.

Wright, T.P. (1936). "Factors Affecting the Cost of Airplanes", Journal of the Aeronautical Sciences, Vol. 3.