This paper was refereed by the Journal of Electronic Publishing’s peer reviewers.

Abstract

Standardization is a poorly understood discipline in practice. While there are excellent studies of standardization as an economic phenomenon, or as a technical phenomenon, or as a policy initiative, most of these are ex post facto and written from a dispassionate academic view. They are of little help to practitioners who are actually using and creating standards. The person actually creating the standards is working in an area of imperfect knowledge, high economic incentives, changing relationships, and, often, short-range planning. The ostensible failure of a standard has to be examined not so much from the focus of whether the standard or specification was written or even implemented (the usual metric), but rather from the viewpoint of whether the participants achieved their goals through their participation in the standardization process. To achieve this, various examples are used to illustrate how expectations from a standardization process may vary, so that what is perceived as a market failure may very well be a signal success for some of the participants. The paper is experientially, not empirically, based, and relies on my observations as an empowered, embedded, and occasionally neutral observer in the Information Technology standardization arena. Because of my background, the paper does have a focus on computing standards, rather than publishing standards. However, from what I have observed, the lessons learned apply equally to all standardization activities, from heavy machinery to quality to publishing. Standards names may vary; human nature doesn’t.

The Standardization Playground

The arena in which standardization is played out is reasonably large. Nearly every industry is affected by standards, some more so than others. I believe that it is probably safe to say that the more an industry depends on interoperability for sales and growth, the more standards will be in evidence. Additionally, the more developed a society is, the more standards become necessary. It has been observed that “…(s)tandards are one of the hallmarks of an industrial society. As the society becomes increasingly complex and its industrial base begins to emerge, it becomes necessary for the products, processes, and procedures of the society to fit together and to interoperate. This interoperation provides the basis for greater integration of the elements of society, which in turn causes increased social interdependency and complexity.”[1]

Nearly two decades ago, The Economist published the following in its Survey of Information Technology.

The noisiest of those competitive battles (between suppliers) will be about standards. The eyes of most sane people tend to glaze over at the very mention of technical standards. But in the computer industry, new standards can be the source of enormous wealth, or the death of corporate empires. With so much at stake, standards arouse violent passions.[2]

This statement—echoed in one form or another in most literature on the subject of standardization—is even more applicable today in the IT industry. With the advent of the Internet and the World Wide Web (WWW), open standards and standardization are becoming more a part of the infratechnologies, a term used by NIST to describe a superset of technologies (the technological infrastructure), which "…provide the technical basis for industry standards."[3] As Martin Libicki notes, "(w)ith each passing month, the digital economy grows stronger and more attractive. Much, perhaps, most of this economy rests upon the Internet and its World Wide Web. They, in turn, rest upon information technology standards."[4]

In 1992, the Office of Technology Assessment noted that:

Other goods, like education and standards, are impure public goods. These combine aspects of both public and private goods. Although they serve a private function, there are also public benefits associated with them. Impure public goods may be produced and distributed in the market or collectively through government. How they are produced is a societal choice of significant consequence. [Emphasis mine][5]

With this being the case, one would expect that there would be substantial information on the practice of standardization; however, this is not so. Professional standardization organizations, such as ANSI or the ASTM, offer training courses on how to “do” ANSI or ASTM standards, focusing primarily on how to follow the rules of these organizations in creating conformant standards. Similarly, there is no overarching organization that regulates standards or standardization. Each nation has a National Standards Body (NSB) that serves that nation; the procedures and processes for these organizations are all different. The only common thread is that each NSB is a member of either the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the International Telecommunication Union (ITU), or the Codex Alimentarius. These national bodies produce the bulk of the world’s standards, covering everything from animal traps to salt to tripe to chocolate to electrical safety. However, in the Information Technology arena, a majority of the specifications are now produced by consortia, a less formal standardization organizational structure.[6]

Consortia came into being for several trenchant reasons, and have continued to be created over the past 20 years at a rate of roughly 100 or more a year.[7] The most inclusive list of consortia that I’ve found is a “subscription-only list” that also rates consortia on several axes of concern to participants.[8] There are other free lists as well—but all point to one thing—there are an awful lot of these things floating about, all creating specifications.[9] Arguing nomenclature, however, is pointless—all of these entities respond to market needs and requirements and create something that the market can and does use. Definitions usually include process descriptions, economic conditions, or policy statements. To forestall these, I’ll use the following definition for standardization and standards.

Standardization is the product of a personally held belief that the market has the ability to understand and chart a valid future direction through the use of collective wisdom, to understand the impact of change on itself, and to adjust to that change. The specific change agents utilized in this process are collective technical descriptions of how things ought to be and function, called standards.
A standard, of any form or type, represents a statement by its authors, who believe that their work will be understood, accepted, and implemented by the market. This belief is tempered by the understanding that the market will work in its own best interests, even if they do not coincide with the standard. A standard is also one of the agents used by the standardization process to bring about market change.[10]

This definition then does not draw distinctions about the creation mechanism of standards, and allows all sorts of standards—from strict NSB de jure to consortia specifications to proprietary de facto standards to be considered. Basically, I treat standardization and standards as an exercise in setting a market direction, and consider them a tool for change.

This then brings up the question, who creates standards? Basically, non-rational human beings create standards. I use this phrase advisedly, since most economic behavior (and hence, studies of standards) tends to assume a rational economic model. I have watched too many standardization efforts become a complex contest between corporate wills and a need to maintain a facade of control to believe that rational economic decisions are made. It should be remembered that a participant in a standardization effort wears many different hats simultaneously—hats that cover professional pride (doing what’s right), corporate or organizational goals (doing what’s right for your company), standardization organizational goals (doing what’s right for the organization and in scope or charter and following the rules), a national interest (doing what is right for your country’s industrial, social, or legal policies), and personal friendships (doing what’s right to make you feel good and for social and professional strokes). When you put two dozen people with conflicting emotions, goals, backgrounds, and personal motivation in a room, ask them to decide on a complex interface whose future characteristics may or may not impact the market, and then provide minimal guidance and no enforceable deadline, one is hard pressed to describe the outcome as a rational economic decision. When you toss in a rapidly changing external environment, competing organizations doing the same thing, and a generalized need to cooperate rather than compete, you have the basis of some interesting decisions that create standards.

With this as background, the question should correctly be, how do standards succeed? The simple answer is that no one really knows. Martin Weiss and Marvin Sirbu published a paper in 1990 that examined several of the technological success factors that influence standardization. Their findings were summarized as follows.

The results suggest that the size of the firms in the coalition supporting a technology and the extent to which they support their position through written contributions are significant determinants of technological choice in the standards decisions studied. The market share of the firms in the coalition was found to be significant only for the buyers of compatible products, i.e., the monopsony power was significant, not the monopoly power. In addition, the technologies whose sponsors weighted market factors more highly than technical factors were more likely to be adopted in the standards decision studied. The proponents of both the adopted and non-adopted technologies were found to have equal belief in the overall technical superiority of their technical alternative, even after the decision. The installed base of a technology and process skills were not found to be significant predictors of the committee outcome.[11]

The interesting thing to note here is that the primary success factors were (1) the willingness of firms to commit written technological contributions to the standards committee and (2) sponsors who understood how the market worked. Participants who advanced just ideas or merely attended, or those who didn’t understand the market, were usually much less successful at making their standards efforts succeed. With this as background, we can now begin to look at how standards begin and are created, because, very simply, if you don’t know how they come into being, understanding how and why they fail is difficult, and mostly relies on the term “bad luck” to explain away a host of complicated interactions that appeared to have gone badly for someone somewhere.

Steps to Standardization

In the late 1980s, I was trying to make a case for the idea that standardization was a discipline; I needed to create a structure that would make standards easier to study and catalogue. The idea was good, but there was little take-up on the overarching idea, since, as has been noted earlier, standards tend to be “eye glazingly” dull. However, as part of the activity, I did put together a proposal called “The five stages of standardization,” which existed for a while in the planning documents of the Accredited Standards Committee X3. Later, I added more substance to the idea, which ended up as a chapter in a book on standards published in 1995.[12]

I basically identified five stages that needed to occur in the creation of a standardization activity. I tried to write them so that they were independent of processes—that is, so that they did not correspond to the actual processes of any particular organization, but rather reflected things that had to happen to make a standard come into being. The stages were:

  • Pre-conceptualization
  • Conceptualization
  • Discussion
  • Writing
  • Implementation

The first stage—pre-conceptualization—takes place constantly in the industry. It is the stage where someone has an idea for standardizing something. It is in this stage that the basic decision is made to either share or to keep proprietary (but not, as some think, to patent or to copyright). If the decision is made to share, then the activities that lead to a standardization effort start; if the decision is made to keep proprietary, then a different path is taken. When discussions begin, all sorts of activities can happen, and usually, most of them result in the idea being abandoned. However, sometimes an audience is found for the idea—often someone who has needs similar to the “creator” of the idea. The audience can be technologists, politicians, marketers, consultants, or anyone who can help champion the idea.[13]

Once there is a critical mass, there is the question of adding organization to the effort. It is at this time that the idea of “venue shopping” begins. There are quite a few standards organizations available for use. Depending on the focus of the standard’s supporters, an organization is either chosen or created—this is the conceptualization stage. It is in this stage that the concept is ratified to make sure that it is capable of being standardized (that is, it fits within the scope and framework of the organization chosen), that it is technically possible (as opposed to technically viable), and that there are necessary supporters willing to commit resources. Since someone must actually pay to create standards (in some coin), there must be someone willing to foot the bill. Additionally, the organization that is charged with creating the specification must have assurances that there will be something in this for the organization, either in publishing revenues, increased membership, increased dues, or, occasionally, marketing and positioning benefit. So, if all the conditions are met—that is, the organization can provide a home—then standardization begins and goes into the discussion stage.

In this stage, unless there is a clear and easy problem being solved, the situation slows slightly. It is in this stage that people finally begin to put together what it is that they are trying to specify, and where the implications, ramifications, associations, and all the other “…ation” modifiers come to light. It is in this stage that opposition first begins to surface inside of the activity itself (never mind the external opposition and comments). This is where trade-offs occur, where all the deals are done to make positions change and where strengths and weaknesses become either heightened or obscured. Once the majority of issues are solved—if they are—the activity proceeds to the next stage—actually writing the specification.

In writing a specification, there are two schools of thought, and the decision on which option to pursue should have been decided in the discussion stage. The schools are to write from an installed base or current implementation (standardizing existing practice), or write a “future focused” standard (anticipatory standardization). Both have strengths and drawbacks, and both have the ability to fail, but for much different reasons.

Once the specification is completed, the most difficult activity begins—the implementation of the specification. And this is where most people who write about standards—and most standards organizations—fail. Implementation of a specification should occur in a product, service, legislation, policy, or other concrete market-visible offering. In many cases, the successful implementation of a specification (somehow and somewhere) can be taken as a victory for the standard—but it, too, is a point of possible discussion. Nearly every standard that encompasses compromise has a loser who considers the effort a failure.

Since completing the original paper in 1995, I’ve added several more requirements to the process, which have bearing on the success or failure of the specification. Three of the main requirements are: a reference implementation, a set of test suites to validate implementations, and a form of Intellectual Property Rights policy that is tied to the specification. Not having these will not cause the standard to fail, but they may have a tremendous impact on the legitimacy and acceptance of the specification and the technology contained therein.

With this description of several of the attributes that help a standard’s gestation, we can now turn to potential failure modes and what makes a standard fail.

Failure Modes

Because there is no clear-cut definition of what makes a standard successful, there are similarly no clear definitions of what makes a standard a failure. To begin this portion of the paper, then, it will be helpful to catalogue the various ways that a standard can be seen to fail—or conditions that lead to the market calling a standard a failure (there is a difference). In this listing of failure modes, it is necessary to distinguish between the standard and the standardization activity.

    Major categories of Failure:
  1. The standard fails to get started.
  2. The standards group fails to achieve consensus and deadlocks.
  3. The standard suffers from “feature creep” and misses the market opportunity.
  4. The standard is finished and the market ignores it.
  5. The standard is finished and implementations are incompatible.
  6. The standard is accepted and is used to manage the market.

I’d like to examine each of these failure categories and, where I can, cite notable cases. Much of the description and content within these sections is subjective and experiential, rather than empirical. As I said in the Abstract, I am occasionally neutral, but I will admit a bias toward action. Each category begins with a description of its general characteristics (so that it can be extended to instances other than the ones I cite), followed by some examples and, where necessary, some stretched analogies.

1. The standard fails to get started (pre-conceptualization, conceptualization).

If standardization is viewed as a market response to a changing condition, then this failure can be viewed as a basic failure to understand the market. It is roughly akin to “I gave a party but nobody came.” There are two ways that this category can be fulfilled: the first is when the subject should be standardized (and there is a push to do so) but the principals cannot get organized; the second is when a standardization activity is actually initiated but fails to gain momentum.

Initiating a standardization activity (the result of which is a standard) is a reasonably complex and time-consuming activity. Standards activities, despite the general perception, don’t just “happen” like spontaneous combustion.[14]

The first failure—pre-conceptualization—occurs when the initial idea or technical proposal is circulated to multiple other possible sponsors, many of whom review it. This process can last from a month to several years, depending on the size, complexity, and vision of the founder(s), as well as market demand for the potential offering of the standards group. During this time, if there is an existing specification documenting the proposal, the specification undergoes review by potentially interested parties. Depending on the standardization venue sought, the spec can either be publicly available or can be limited to review under nondisclosure. Also, the proposed specification can, during this time, be subject to constant modifications in an effort to gain sponsors who will support its initial offering. This is shuttle diplomacy at its best.

The first failure here is one of “who cares?” When this happens, it is up to the person or organization making the proposal to decide either to drop the idea or to go it alone. If the idea is dropped, this is not really a standardization failure, since no one saw the originator’s vision for standardization and, harking back to the definition, it is the market’s ability to structure itself that makes a standard. However, if the originator pursues the idea and it becomes a tremendous (or at least a break-even) success, there is a problematic area. This can’t really be seen as a failure of standardization, since the idea was credible and succeeded; I’d chalk it up as a failure of the standardization processes available to the originator, or else the originator’s inability to make a viable case to members of various standards setting organizations (SSOs).

There is another way to fail in this stage, and I’ve seen it directly happen only once. In this case, a product manager of a large, European-based Multi-National Corporation (MNC) was proposing to create a consortium to deal with one of the more arcane technologies in the spatial environment. There was considerable enthusiasm for the consortium, and the proponent put together a business plan for the consortium—showing the number of members necessary, the revenue stream, and the goals and strategy. Unfortunately, in the business plan (reviewed by her corporate legal team), there was a section that examined the market share of each of the proposed members and how the consortium would permit market stabilization and increased market share. The problem was that the lawyers of the MNC were European, and the structure that they were using to create the consortium was based on the NCRPA (see note 7). One of the sacrosanct rules of standards development is that no business or market data can be shared by consortium members. When the business plan containing market share data was seen by the potential partners, everyone immediately (and formally) notified the sponsoring organization that they were unwilling to participate or to even discuss the matter further. The standardization activity failed, not for lack of interest or lack of need, but because pursuing it would have made the sponsors potentially liable for anti-trust and anti-competitive litigation, especially if the standardization activity had disadvantaged a competitor.[15]

2. The standards group fails to achieve consensus and deadlocks (conceptualization failure).

The issue with any group of people getting together to write a technical specification is the nature of their expectations. Because the group has usually been constructed of people who want to participate and who bring their own preconceived notions to the table, it is very easy to find out that what you thought you signed up for is not what you have, in reality, joined.

I’ve found that this happens most often with those standards groups that are created to “oppose” another standard or another technology. In this case, the supporters of what is supposed to be a “counter standard” all gather and propose a way of achieving what “their solution” to the problem is—and usually, the whole issue turns on the fact that what should be opposed varies from person to person and from organization to organization. This is worse when a consortium is hastily thrown together, or when the need to “get members” causes incomplete disclosure of exactly what it is that is being opposed. The most memorable instance of this that I recall was the announcement by 88open, a consortium formed by Motorola to promote its 88000 RISC chip. The chip was Motorola’s attempt to propose an alternative to Intel, but the announcement by Sun of a competing consortium, Sparc International (which still exists), forced the hand of 88open and caused a premature launch. The partners had not quite agreed on exactly what the mission, goals, and intent of the consortium were to be, and the disagreements became apparent on stage during the kick-off press meeting. It did not bode well for the consortium being able to pursue a positive goal and achieve a positive outcome. Gladstone once noted that, “To be engaged in opposing wrong affords, under the conditions of our mental constitution, but a slender guarantee for being right.” This is absolutely true in standardization.

Another type of failure that occurs here is the inability of the members of the standardizing organization to agree on the terms and conditions for standardization. There are several types of “deal breakers” that cause this to happen, including disagreement on the officer positions (Chair, Vice Chair, Editor), disagreement on the amount of change allowed (installed base argument), or disagreement on the terms and conditions of handling the Intellectual Property Rights in the contributed technology or the finished specification. One of the most notable “failures” of this type occurred twice when Sun Microsystems attempted to standardize the Java™ language. The effort is analyzed in a paper by Dr. Tineke Egyedi entitled “Why Java Was Not Standardized Twice.” [16] Sun submitted Java sequentially to two standardization organizations—the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1 (ISO/IEC JTC1) and to Ecma International. In both instances, the specification was withdrawn before standardization could begin, both times because Sun and the proposed standardization entity could not agree on the terms and conditions of standardization. In the case of the JTC1 submission, Sun withdrew the specification when it realized that it (or JTC1) had misunderstood the maintenance procedures associated with the Publicly Available Specification (PAS) process.[17] In the case of the Ecma submission, Sun withdrew when Ecma and Sun could not come to terms on the intellectual property terms and conditions of the Sun submission.

As Egyedi points out, research literature “… suggests that dominant market players, whose products have become a de facto standard, have few incentives to standardize. … With an eye to long term advantages, they may give away a technology or enter into coalitions with rivals to enlarge their user base…. However, the step towards formal standardization is seldom taken. In this respect, the initiative to standardize Java™ seems to be rather unique.”[18] At the time that she wrote this article, Egyedi could only cite Adobe’s standardization of PDF as a similar example.[19] Since the article was published, however, other instances have occurred, the most notable being Microsoft’s standardization of Office Open XML in Ecma international and ISO/IEC JTC1. In both the Adobe and Microsoft cases, it appears that the committees and the submitters came to terms before the process began and the document was submitted for standardization.

The larger question when one examines this issue, however, is to determine if the failure to standardize was, in fact, a failure. From the market’s point of view, the intent was to gain some control over the direction and future of Java. In response to the negative publicity that surrounded the withdrawal of the specification, Sun was forced to take remedial steps to fix market perception. They did this with the creation of the Java Community Process (JCP), which was a Sun-managed and “open” process (with Sun still holding an absolute veto power). The JCP allowed competitors and partners a sandbox in which to develop specifications using all of the various forms of Java, but under the watchful eye of Sun. So, the standardization effort did force Sun to be slightly more open than it might have been otherwise.

On the other hand, Sun also benefitted from the abortive standardization. When the initial standards activity was being proposed, Java was still relatively new to market, and Microsoft had just forced Netscape to standardize JavaScript in Ecma. Sun, realizing that pressure was mounting for some action, initiated a standardization strategy where they sought to maintain control of the evolution of their technology. They weren’t powerful enough in the standards committee they selected to accomplish this, so they were forced to withdraw and start their own, rather expensive process. However, Sun’s standardization activities gave Sun a three-year period (1997 to 2000) in which to grow the nascent installed base of Java code from 5 million lines of code to over 500 million lines of code (numbers are suspect, but these are claims that I heard), which gave Java an installed base that qualified it for a de facto standards status. Because Java standardization was being dangled out as a carrot, competing development was slowed, since a competitive product would have to be a standard as well—and Java had all of that activity tied up as it went forward. Only when Sun withdrew Java absolutely and completely from the standards race did Microsoft introduce C# (announced in 2000) to the market and into a standards organization to offer a unique competitive advantage to differentiate it from Java.

So, in this scenario, the question that needs to be decided is if “having a standard” was as good a potential result as having Java as it exists today. From Sun’s point of view, was the PR nightmare (and there was one) worth the ability to control and protect Java for three years of high growth that would not have been possible if it had undergone standardization?[20] From the market point of view, was the control that could have been gained over the future of Java within the standardization process[21] outweighed by the creation of the JCP and the inherent stability of a “stewardship” that Sun claimed?[22] While these might be interesting research questions, they reflect the trade-offs made by organizations in standardization practice, where corporate good is weighed against public good. And something that looks like a failed standards effort may, in fact, be seen as a standardization win by at least some of the participants.

So, if there is consensus on the scope, structure, terms and conditions, and utility, the specification and associated standardization activity can move forward, which brings us to the next failure mode.

3. The standard suffers from “feature creep” and misses the market opportunity (Discussion and writing stages).

This failure mode is quite common—basically, the technologists who are writing the specification tend to try to put too much in. It is not uncommon in standards activities to try to cover a lot of ground, addressing not merely the issues that the market needs solved, but the ones that the authors believe the world wants. Additionally, when consensus is sought, there is a tendency to try to please (or at least not irritate) everyone. This attitude of trying to please everyone and failing to do so led to the “rough consensus and running code”[23] mantra that marks the Internet Engineering Task Force’s approach to specifications.

The most frequent use of feature creep in a standards committee is by organizations that have an implementation that is very similar to the proposed specification except for “a little bit extra here. If only the sponsors of the original specification could see their way to including this additional feature in the spec, there would be another willing member of the group to pursue the specification.” Do this ten times, and suddenly you have a bloated spec or a spec that just plain can’t work. And eventually everyone gets tired of trying to standardize something that no one really wants anymore. And the spec fails.

The solution to this type of activity is to break the spec into different smaller specifications. The most classic case of this of which I am aware was the standardization by the IEEE of Local Area Networks and Metropolitan Area Networks (LANs and MANs). The initial proposals for standardization of a LAN encountered significant problems because the IEEE only wanted one standard for LANs and there were three competing technologies offered by large, powerful, and determined providers. The issue was that each of the proponents had a different market segment in mind—but none of them could or would explain the rationale of their designs in terms of users, markets, or use. Rather, they were locked into technical arguments.

Finally, Gary S. Robinson, then of DEC, proposed splitting the committee into various segments—each with a technical solution that the sponsoring company felt that they could engineer and implement. Working both inside and outside of the IEEE and with European standardizers and companies, Gary basically forced the IEEE to provide separate committees for each technology and hence for each different market. Part of the lasting testament to his vision is that the 802 committee has over 30 separate standards committees in the MAN/LAN area—each serving a different segment of the market. Unfortunately, there are too few people like Gary in the industry, and this is why the spec creep continues.

On a more devious and darker note, the use of spec creep to disable a standard is a tactic that is not unknown. “Killing with kindness” or “helping them to death” is more difficult in the standards arena, but it does happen occasionally. In this failure mode, a company or group of companies who might be disadvantaged by the specification help by careful (and technically correct, occasionally) redefinition of terms, redefinition of market requirements, and introduction of complementary technologies, all of which have to be carefully thought out before the specification can be released. Addition of other constraints (internationalization, testing, reference implementations) can all add significantly to the time necessary for a specification if these were not part of the original plan. By following this path, a competitor can achieve several things. First, there is the potential denial of the market advantage that the originator gains from the “first mover status” of having technology it has created (and possibly implemented) as the basis of the standard. By slowing the standard, a competitor has a chance to create its own implementation of the standard to compete. If that route isn’t chosen, the competitor also has the ability to offer a different and nonstandard (usually proprietary) solution, while denigrating the standards process (usually as being too slow, offering too many solutions of dubious value, and/or stopping innovation).[24] Finally, the newly proposed solution can then be offered for standardization in a better (read different) organization,[25] with a promise of correcting all the ills in the failed standardization effort. It is a game that is dynamic, with changing allies and audiences, and changing environments and participants. If the standard survives all of these challenges, it enters the writing stage.

4. The standard is finished and the market ignores it.

The ideal time to standardize has been described as follows: “For a standard to be usefully formed, the technology needs to be understood: technological interest needs to be waning. But if political interest in a standard becomes too large, the various parties have too much at stake in their own vested interest to be flexible enough to accommodate the unified view that a standard requires.”[26] This is the ideal situation, and describes a state that rarely, if ever exists in the market.

In reality, the choice usually left to the standardizers is either to standardize in anticipation of the market (anticipatory standardization) or to standardize after the specification has been implemented (standardize current practice). Anticipatory standardization began in the 1980s and was a result of “…the increasing pace of change and consequent shortened product life cycle [which] had begun to affect the entire ICT industry [so that it] … began to develop ‘anticipatory standardization’[27]. ‘In contrast to this historical tradition of standards sanctioning an existing well-defined product, [anticipatory] standards…may precede products…. technologies can be developed in committee during the development of the standard, leaving the disposition of intellectual property rights uncertain.[28]’”[29] The problem is that neither solution is entirely elegant. The first (anticipatory standardization) suffers from the anxiety that the creators of the specification are not correct in their belief that the problem they are solving and the technology they are deploying are both viable and a fit to the market. On the other hand, standardizing after the fact forces the writers of the standard to standardize the technical mistakes of the implementation—or else they break the user base. As stated, neither one is wholly satisfactory.

If the standard is published after a piece of technology is moving to obsolescence, the market usually ignores the effort. A case in point was a standard called ECMA 234 “Application Programming Interface for Windows (APIW),” published in December 1995.[30] This specification was a reverse-engineered interface specification for the Microsoft 16-bit software called Windows 3.x. The reverse engineering initiative had started in 1993 or thereabouts, and was heavily sponsored by Sun Microsystems in an attempt to open the interfaces to the Win3.1 OS by Microsoft. By the time that the standard had emerged from ECMA (in 1995), Microsoft had initiated deployment of the 32-bit operating system called Windows 95. Needless to say, the market reception for ECMA 234 was less than stellar. Additionally, ECMA had problems pushing the specification into ISO, since it had never satisfactorily cleared Intellectual Property Rights (IPR) assertions against the ECMA standard. The standard was a failure because it dealt with obsolescent technology that no one was deploying any longer and about which the market didn’t especially care.

A more significant and much more controversial standard was the entire set of Open Systems Interconnection (OSI) standards published by ISO/IEC and the International Telecommunication Union, Comité Consultatif International Téléphonique et Télégraphique (ITU-CCITT). This immense body of standards for creating a new way of interconnecting computers took nearly 10 years of work on the part of ISO/IEC JTC1 and the ITU. It was mandated by multiple governments (most notably through the US Government OSI Profile [GOSIP]), made a major focus of development by many large IT companies, and failed completely in the market with the advent of the TCP/IP stack of the IETF. The problem with the OSI protocol was that it was a technically driven revolutionary change to current practice, was highly complex, and required a great deal of expertise to implement correctly. Additionally, reference implementations and test suites were lacking when initial deployments were made, which caused severe interconnection problems. The IETF, with its concept of “rough consensus, running code, and dual implementations,” provided a much simpler solution to the problem that could be implemented by all vendors.

On the positive side, however, there is the belief that the OSI protocol was a significant success in that it did teach the industry at large how to structure, create, and standardize protocols—as well as paving the way to break from proprietary protocols (IBM SNA, DECnet, and so on).

One of the more interesting failures of this sort was the creation of the Open Software Foundation (OSF) in 1988 to create an “open” UNIX. Its creation was sparked by the AT&T and Sun Microsystems agreement on UNIX SVR4. The creators of OSF (IBM, HP, DEC, Apollo, Groupe Bull, and several others) gave OSF a staggering amount of money (rumored to be more than $10 million) to create an operating system called OSF/1, which was supposed to be the equivalent of UNIX™ (as well as other supporting technologies). OSF was jokingly referred to by Scott McNealy (CEO of Sun) as standing for “Oppose Sun Forever,” and the wars between the UNIX factions and OSF factions became legendary in the industry. It was a classic case of creating a product the customers didn’t want, but which was sought by the originating vendors to avoid paying a licensing fee to AT&T for UNIX. Bits and pieces of the technology remain scattered in all of the various UNIXes that were created, but only DEC implemented the full OSF/1. It failed in the market as a coherent competitor to UNIX. Basically, it was an idea whose time had come and gone, and the proprietary offering (UNIX SVR4) won. However, the saga doesn’t end there, since there were various flavors of UNIX, and the UNIX vendors started the second round of UNIX wars, each offering the “true” version of UNIX. This leads us to the next failure mode.

5. The standard is finished and implementations are incompatible.

This—for a standardization effort that has survived the gauntlet of a process—is a common problem that is endemic to all of standardization. The severity of the problem depends in large part on the decisions about anticipatory standardization versus current practice standardization made in the writing stage of standardization. If the specification was based on an existing implementation AND that implementation is a de facto (or heavily used de jure) standard, then the problem is largely mitigated, since other implementations, to have any value, must interoperate with the deployed implementation. This severely limits the options available to break the specification.

However, most standards offerings don’t have the luxury of a large and monolithic installed base to force compliance with the specification. In these cases, the way to enforce the precise implementation of the specification (usually in its totality) is to make it part of a network. IETF specifications are this way, as are nearly all communications specifications. A unique telephone, while cool, is worthless unless it interoperates with the telephone network. In this case, commoditizing the infrastructure allows competitive advantage to be added on top of the infrastructure—that is, it is additive to the standard.

However, the problem occurs when there are deliberate variations in the implementation of the standard, causing different behaviors of the product embodying the standard. This can be done in hardware but is more common in software, where attributes can be interpreted in multiple ways. For example, in a 19-inch, rack-mount storage system, 19 inches is pretty hard to miss. In software standards, there is almost always ambiguity, usually through omission. If an attribute is poorly (or sometimes, not at all) defined in the specification, or if the statement lends itself to ambiguity, there is a possibility that the implementers will choose a different response or implementation than that which was originally intended. (This is one of the reasons that many standards require reference implementations, so designers can see how it was “supposed to work.”) This can be benign, or it can really be damaging.
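To make the ambiguity point concrete, the following is a minimal sketch (the “timeout” attribute, its unstated unit, and both vendors are hypothetical, not taken from any real standard) of how a single under-specified attribute can yield two implementations that each claim conformance yet cannot interoperate.

```typescript
// Hypothetical illustration: a spec defines a "timeout" attribute as "a number"
// but never states the unit, so two conforming implementations diverge.

interface ConfigRecord {
  timeout: number; // the imagined spec says only "a number"
}

// Vendor A reads the value as seconds.
function timeoutMillisVendorA(cfg: ConfigRecord): number {
  return cfg.timeout * 1000;
}

// Vendor B reads the value as milliseconds.
function timeoutMillisVendorB(cfg: ConfigRecord): number {
  return cfg.timeout;
}

const record: ConfigRecord = { timeout: 30 };
console.log(timeoutMillisVendorA(record)); // 30000 ms: the user waits half a minute
console.log(timeoutMillisVendorB(record)); // 30 ms: the operation gives up almost immediately
```

Both vendors can point to the text of the specification; the damage only appears when products built on the two readings meet in the field, which is exactly what a reference implementation is meant to prevent.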

As an example, the current language of the World Wide Web is HTML4. The major browsers implement HTML4 slightly differently; for the end user, there are rarely any problems. This is largely because websites (and website designers) know what the browsers all have in common and create their websites using this commonly accepted code. For those who remember the “Browser Wars” between Netscape and Microsoft in the mid-1990s (“This site best viewed by xxx”), the time now seems to be reasonably calm.
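In practice, the “commonly accepted code” approach usually takes the form of run-time feature detection: test for a capability before relying on it and degrade gracefully when a browser lacks it. The sketch below is illustrative only; localStorage is simply a convenient example of a feature whose availability should not be assumed.

```typescript
// Illustrative sketch of the "common subset" practice: detect a capability at run
// time rather than assuming every browser implements the standard identically.
function saveDraft(key: string, text: string): void {
  if (typeof window !== "undefined" && "localStorage" in window) {
    window.localStorage.setItem(key, text); // supported path: persist the draft
  } else {
    console.warn("localStorage unavailable; keeping draft in memory only"); // fallback path
  }
}

saveDraft("article-draft", "Standards names may vary; human nature doesn't.");
```

Designing to the intersection of what the browsers actually ship, rather than to the full text of the standard, is what keeps the end user from noticing the differences.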

However, the advent of the next generation of HTML (HTML5) will pose a problem for a while. Unlike HTML4, there is less consensus on the core of common features for the current generation of browsers. This lack of unity is complicated by the very intense business struggle occurring, as browser vendors all strive to gain market share and to capture increasingly diverse markets, such as the mobile browser market and the digital media market and the “so on and so on” markets. There is high incentive to fracture the standard if it advantages your product set and simultaneously disadvantages competition. The key is whether or not a company can establish itself as the de facto implementation of a formal standard and force competitors to play catch up. This idea of first embracing standards and then loading them with proprietary extensions is known as “embrace, extend, and extinguish” (also “embrace, extend, and exterminate”), a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.[31] The use by Microsoft was deemed egregious, but the tactic has been employed by nearly every company that has led a standardization effort using its own technology over the past several decades. It is common to standardize that which is necessary for functionality and to reserve for your own implementations of the standard more specialized and user-desired features.
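The mechanics of “embrace and extend” are straightforward to sketch. In the hedged example below (all interfaces and names are invented for illustration, not drawn from any actual product), a vendor ships the standard surface plus a proprietary extra; content authored against the extra then runs well only on that vendor’s implementation.

```typescript
// Hypothetical sketch of "embrace and extend": a vendor implements the standard
// interface and adds a proprietary method, and content written against the extra
// no longer behaves the same on implementations that follow only the standard.

interface StandardPlayer {
  play(url: string): void; // the standardized, interoperable surface
}

interface VendorPlayer extends StandardPlayer {
  playEnhanced(url: string, drmToken: string): void; // proprietary extension
}

function showVideo(player: StandardPlayer, url: string): void {
  if ("playEnhanced" in player) {
    // preferred path works only on the extending vendor's product
    (player as VendorPlayer).playEnhanced(url, "example-token");
  } else {
    player.play(url); // conforming implementations get the plainer experience
  }
}
```

Once enough content prefers the extended path, the conforming but unextended implementations look broken to users, even though they follow the standard to the letter.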

The issue is “…perhaps exacerbated by ‘displaced monetization,’ where deployed software is free, and those sponsoring the deployment of the software gain value indirectly. E.g., all of the browsers and plugins are free, with the vendors making the browsers available instead benefitting from their ability to deploy and/or sell something else: advertising revenue, tooling, infrastructure, services. In such a situation, control over extensibility might yield an advantage.”[32] With displaced monetization, the use of the standardized platform is indirectly tied to a proprietary offering. It is a more insidious method of controlling a standardization activity because the standard is separate from, but derives value from, the proprietary activities. Over time, the two become tightly linked, and the standard, while still open, has no true value except as it is associated with the proprietary offering.

By controlling extensibility, and by not releasing the extensions or added value features to a standardization activity, the organization(s) effectively privatize the specification. While claiming (rightly so) adherence to a standard, the proprietary extensions become an accepted part of any interoperating implementation.

Returning to the central thesis, however, the question is whether this indicates the failure of a standardization activity. To the successful organization(s), the answer is no. They used a standard for economic gain. To those who were disadvantaged or whose truly conforming implementations were broken, yes, the standard is a failure. Again, the issue of failure in standardization is one seen differently by different audiences.

6. The standard is accepted and is used to manage the market.

This failure mode is the most problematic of all of the failure modes presented. It is problematic because, until recently, it wasn’t considered a failure mode, but rather it was a sign of success. The essence of this “failure mode” lies in the belief that standards are an impure public good and that they have societal implications and responsibilities. This is generally not the case in the United States, where standards are left largely to the private sector. The situation in Europe (and in China) is not quite so simple, however.

The most common method of using a standard to manage a market is to assert and prove that one owns Intellectual Property Rights (IPR) in the standard. If a company is part of a standards body, there is usually a covenant of some form that requires that you license your IPR on at least Reasonable and Non-Discriminatory (RAND) terms. If you’re not part of the standards group and your IPR is used, the field is wide open for royalties.[33]

Allowing intellectual property in standards has been (and continues to be) the norm in most of the world and most of standardization. In most industries (and in the IT industry prior to 1995), standards participants were usually companies who were large and could afford to participate in standards—and had either cross-licensed their technologies or were building their patent portfolios. Generally, standards were all “gentlemen’s agreements,” based on a principle of mutual and assured countersuits.

The outbreak of royalty-free standards is a recent phenomenon, dating from the late 1990s and early 2000s. This time frame coincides with the growth of the World Wide Web and the democratization of standards and standards participants. The Internet allowed everyone to play, and it gave small developers—who would never have been able to participate in NSBs—a chance to participate in consortia and then in open source. Writing an application required no great amount of funding, and, if you were lucky, you could score big with your idea.

Many smaller participants were horrified to find out that they (1) had to pay for copies of NSB or international standards and (2) had to pay a royalty to use some of these standards. The most astonishing claim for royalties came from ISO, which, in a letter to Oracle, stated:

Moreover ANSI is the official distributor for ISO publications and has full authority to sign copyright agreements on behalf of ISO and consequently to collect the related royalty payments.
We make a distinction between implementation and commercial use and this is made very clear on our Web site http://www.iso.org/iso/en/prods-services/iso3166ma/02iso-3166-code-lists/index.html where we explain that:
"The short country names from ISO 3166-1 and the alpha-2 codes are made available by ISO at no charge for internal use and non-commercial purposes. The use of ISO 3166-1 in commercial products may be subject to a licence fee."
Consequently if you load the list of country codes in a commercial product, thus giving an added value to your product, it is normal that ANSI asks you for the payment of a royalty fee.[34]

While this is one example (and one which is largely ignored), a much more contentious example comes from the H.264/MPEG-4 video codec, one of the most widely deployed formats for the recording, compression, and distribution of high-definition video. It is widely used, especially on the Internet. However, H.264/MPEG-4 has a royalty attached to it, and any product that uses it (from Blu-ray discs to digital TVs to Flash and Silverlight) must pay a royalty to MPEG LA, the firm that handles the patent pool for the 26 companies that assert patent rights in the standard. The royalties are a sticking point with many in the IT community, and there have been several attempts to create a royalty-free video codec. The Chinese, for instance, have created a Chinese national codec.

The Audio Video Coding Standard of China (AVS) video standard is a streamlined, highly efficient video coder employing the latest video coding tools and dedicated to coding HDTV content. …AVS will therefore provide low-cost implementations.
AVS has also been designed in such a way that its technology can be licensed without delay and for a very reasonable fee. This has required some compromises in the design but the benefits of a nonproprietary, open standard, and the licensing cost savings easily outweigh the small loss in efficiency.[35]

Evidence of the national interest in this activity is shown by the fact that the Audio and Video Coding Standard Workgroup of China (which created the standard) was authorized by the Science and Technology Department under the former Ministry of Industry and Information Technology of the People’s Republic of China in June 2002, largely in response to the royalty payments that were being demanded on Chinese products that embody the MPEG standards. The Chinese asserted that royalty payments of up to $15 per unit were being charged to Chinese firms building DVD players—causing several to go into bankruptcy.

Additionally, several other codecs have been touted as being royalty free (such as Ogg Theora and VP8, the codec at the heart of Google’s WebM) and are gaining adherents. MPEG LA has asserted IPR claims against these two codecs—but has not specified what IPR it owns that these two competitors infringe. As of this writing, the question remains open.[36]

So, the question that is unanswered here is whether the network effect of the standard in promoting interoperability and commonality outweighs the costs that the standard imposes on users. If MPEG-2 and MPEG-4 had not happened, would the market for video have fragmented among a dozen competing technologies and never really grown, or would another, unencumbered offering have emerged? Is the cost of the royalties being charged for use of this standard worth the benefits that it brings to the market?

In one sense, the idea of IPR in standards is troubling (if standards are an impure public good), yet absent a commercial motive, there would be very few implemented standards.

Conclusion

A failure of standardization is a concept that haunts most commercial organizations that engage in standardization. There is rarely a “standard by acclamation”—or, if there is, the standard is probably one that is either so fundamental (metrology) or so trite that argument about it and the ability to capitalize on it are both nearly nonexistent.

However, all other descriptions of “standardization failure” must be judged in light of each participant’s expectations. Let me give an example.

The Open Document Format (ODF) is an XML-based file format developed by Sun Microsystems to compete with Microsoft Word. ODF was to be used for representing electronic documents such as spreadsheets, charts, presentations, and word processing documents. Work on ODF began in OASIS in 2002, and the format was standardized there and then in ISO/IEC JTC1. It is an international standard, and the latest version (ODF 1.2) was just released.

The standard is less than a roaring success. It is rarely used, and uptake appears to be less than expected. Its primary advocates appear to have been companies that support royalty-free standards, together with the open-source community. Some policy successes have been achieved, but Microsoft Word and PDF remain the most widely used file formats in general.

From a pure business point of view, ODF has failed to capture the hearts and minds of the using public. By a use metric, it failed. Sun also failed, and the impetus for pushing the format disappeared with the acquisition by Oracle.

On the other hand, the appearance of ODF did force Microsoft to offer its file format, Office Open XML, for standardization in ISO. In this, it achieved a secondary purpose that Sun had—the opening of Microsoft’s lock on file formats. From Microsoft’s point of view, the standardization of ODF forced them into standardizing OOXML—and, in the long run, they appear to have suffered no significant harm. For that matter, they may even have come out looking a little better, since the OOXML standardization battle highlighted the roles of ISO and JTC1, both of which now view Microsoft positively.[37]

The market was given a choice—possibly not a good one, but nonetheless an option was created. There was no great movement to the new format, so the market decided. If you recall from the definition of standardization: A standard is also one of the agents used by the standardization process to bring about market change.

In the case of ODF, the standard did bring market change. It nudged Microsoft slightly in the direction of being more open, it produced a competitive offering that may cause file formats to improve, and it gave a momentary voice to a social and political movement. It educated a larger group of people about standards and the standards process. It emphasized the point that file formats need to be backward and forward compatible, and that file formats are important not just to their creators.

Depending on where you sat, ODF was an implementation failure, a social and technical success, or a wake-up call on document retention or preservation. Whatever it was, it did effect a change in market perceptions of file formats, the need for standards, and the nature of international standards politics.

To sum up, standardization can (and does) have multiple outcomes depending on the individual, position, and expectation set. If you accept the definition of standards as change agents, the only true standardization failure is one that has no impact on the market. These happen rarely. If you look at standards as a social, technical, economic, political, and legal activity (which is what they are), they are a subtle and strategic activity that can have a dramatic effect on business, society, and culture. And, as the world becomes a smaller and smaller place, and the level of interconnectivity increases, standards will play a larger and larger role, both in structuring and causing change.



Carl Cargill is a Principal Scientist at Adobe Systems, Advanced Technology Labs, where he is working on structuring Adobe’s standardization activities for the next decade, as well as attempting to provide standardization theory that applies in the “massively connected world.”

Cargill has been a leader in standardization—both in practice and in theory—for over 25 years. He has written two books (Information Technology Standardization: Theory, Process, and Organizations and Open Systems Standardization: A Business Approach), multiple chapters in other books on the subject, and the “Standards” entry in the Third Edition of Van Nostrand Reinhold’s Encyclopedia of Computer Science. He was the Editor-in-Chief of StandardView, ACM's journal of Standardization, and has written scores of articles on the subject of standardization and its practical applications. He has testified several times before Congress, and has been on both Office of Technology Assessment and General Accounting Office panels as an expert on standardization. He has contributed to both the EU and Chinese studies on standardization as a social and governmental policy tool.

Prior to Adobe, he was Sun Microsystems’ Director of Corporate Standards, where he managed Sun’s standardization strategies, activities, and portfolios. He was the Director of Standards at Netscape and a standards strategist at both Sun and Digital Equipment Corporation. While at Sun, he founded and funded the Standards Edge series of books and conferences, which were instrumental in changing the understanding of standards and standardization.

He has served on the Boards of W3C, Object Management Group, Open Mobile Alliance, The Open GIS Consortium, The Open Group, Enterprise Grid Alliance, ECMA, and OSGi. During the rest of his career, he was a product strategist, marketing manager, pricing manager, and program manager, and was, at one time, an Air Force intelligence officer.

His interests include Medieval History and the study of magic (as distinguished from quantum mechanics).

He holds a Bachelor of Arts in Medieval European History from the University of Colorado (1969) and a Master’s in the Science of Administration (Management Engineering) from the George Washington University (1975).

Select Bibliography

Cargill, Carl F. 1989. Information Technology Standardization: Theory, Process, and Organizations. Bedford, MA: Digital Press.

Cargill, Carl F. 1995. “A Five-Segment Model for Standardization,” in Standards Policy for Information Infrastructure, eds. Brian Kahin and Janet Abbate. Cambridge, MA: MIT Press, 289–320.

Cargill, Carl F. 1996. Open Systems Standardization: A Business Approach. Upper Saddle River, NJ: Prentice Hall.

Cargill, Carl F. 2002. “Intellectual Property Rights and Standards Setting Organizations: An Overview of Failed Evolution.” Submitted to the Department of Justice and the Federal Trade Commission, March 27, 2002, FTC/DOJ Hearing on Standard-Setting Practices: Competition, Innovation and Consumer Welfare. http://www.ftc.gov/opp/intellect/detailsandparticipants.shtm#May%201%3A.

Cargill, Carl, et al. 1997. “Special Issue: JAVA,” StandardView: The ACM Standards Journal, 5(4).

Cerni, Dorothy M. 1984. “Standards in Process: Foundations and Profiles of ISDN and OSI studies.” Technical report. National Telecommunications and Information Administration; Institute for Telecommunications Sciences, 325 Broadway, Boulder, CO, 80303. (NTIA Report 84-170).

Egyedi, Tineke M. 2001. “Why Java™ Was Not Standardized Twice,” IEEE Proceedings of the 34th Hawaii International Conference on System Sciences, January 3–6, 2001. http://www.computer.org/portal/web/csdl/abs/proceedings/hicss/2001/0981/05/09815015abs.htm.

Global Standards: Building Blocks for the Future, TCT-512. 1992. Washington, DC: Congress of the United States, Office of Technology Assessment.

Greenstein, Shane and Victor Stango, eds. 2007. Standards and Public Policy. New York: Cambridge University Press.

Leech, David P., Albert N. Link, John T. Scott, and Leon S. Reed. 1998. The Economics of a Technology-Based Service Sector: A Planning Report for: National Institute of Standards and Technology, 98-2. Arlington, VA: TASC, Inc. http://www.nist.gov/director/prog-ofc/report98-2.pdf.

Libicki, Martin C. Scaffolding the New Web: Standards and Standards Policy for the Digital Economy. Santa Monica, CA: RAND. http://www.rand.org/publications/MR/MR1215/.

National Cooperative Research and Production Act of 1993, 15 U.S.C. §§ 4301 et seq. http://caselaw.lp.findlaw.com/casecode/uscodes/15/chapters/69/sections/section_4301_notes.html.

National Technology Transfer and Advancement Act of 1995, Public Law 104-113. http://ts.nist.gov/ts/htdocs/210/nttaa/113.htm.

Ralston, Anthony and Edwin C. Reilly, eds. 1993. Encyclopedia of Computer Science. New York: Van Nostrand Reinhold USA.

Schoechle, Timothy D. 2009. Standardization and the Digital Enclosure: The Privatization of Standards, Knowledge, and Policy in the Age of Global Information Technology. Hershey, PA: Information Science Reference.

“Survey of Information Technology,” The Economist (February 23, 1993).

Weiss, Martin B. H. and Marvin Sirbu. 1990. “Technological Choice in Voluntary Standards Committees: An Empirical Analysis,” Economics of Innovation and New Technology, 1(1-2), 111.

Weiss, Martin B. H. and Michael B. Spring. 1992. “Selected Intellectual Property Issues in Standardization,” Department of Information Science, University of Pittsburgh, Pittsburgh, PA 15260 and presented at the Twentieth Annual Telecommunications Policy Research Conference, Solomons, MD, September 12–14, 1992, 1.

Notes

    1. Anthony Ralston and Edwin C. Reilly, eds. Encyclopedia of Computer Science, Third Edition (New York: Van Nostrand Reinhold USA, 1993), s.v. “Standards.”

    2. The Economist, February 23, 1993.

    3. David P. Leech, Albert N. Link, John T. Scott, and Leon S. Reed. NIST Planning Report 98-2, The Economics of a Technology-Based Service Sector (Arlington, VA: TASC, Inc., January 1998), ES-8.

    4. Martin C. Libicki. Scaffolding the New Web: Standards and Standards Policy for the Digital Economy (Santa Monica, CA: RAND), xi. http://www.rand.org/publications/MR/MR1215/.

    5. U.S. Congress, Office of Technology Assessment, Global Standards: Building Blocks for the Future, TCT-512 (Washington, D.C.: U.S. Government Printing Office, March 1992), 14, footnote 23.

    6. The debate about “consortia or NSBs as legitimate standards producers” is one of those debates similar to arguing over how many angels can dance on the head of a pin. Fun, but reasonably pointless, since both exist and both seem to work.

    7. The legal basis for most consortia (at least those based in the U.S.) is the National Cooperative Production Amendments of 1993, Pub. L. No. 103-42, which amended the National Cooperative Research Act of 1984, Pub. L. No. 98-462, renamed it the National Cooperative Research and Production Act of 1993, and extended its provisions to joint ventures for production. The Standards Development Organization Advancement Act of 2004, Pub. L. No. 108-237, extended the provisions of the NCRPA to standards development organizations.

    8. The list is owned by a firm called 79 Brinkburn. http://79brinkburn.co.uk/ (accessed April 21, 2011).

    9. Where possible, I will refer to NSB creations as Standards and consortia creations as specifications. If the term “standards” appears, it is a generic description of the results of a standardization process.

    10. Carl F. Cargill, Information Technology Standardization: Theory, Process, and Organizations (Bedford, MA: Digital Press, 1989), 41–42.

    11. Martin B. H. Weiss and Marvin Sirbu. “Technological Choice in Voluntary Standards Committees: An Empirical Analysis,” Economics of Innovation and New Technology 1, no. 1-2 (1990): 111. http://www.informaworld.com/smpp/title%7Edb=all%7Econtent=t713641545%7Etab=issueslist%7Ebranches=1#v1.

    12. Carl F. Cargill, “A Five-Segment Model for Standardization,” in Standards Policy for Information Infrastructure, ed. Brian Kahin and Janet Abbate (Cambridge, Mass.: The MIT Press, 1995), 79–99.

    13. For examples of each: for technologists—any computer language standard; for politicians—the “V” chip and Privacy standards; for marketers—“Open Anything” (where the tip-off is the use of the phrase “open,” which a standard is, by definition); for consultants—ISO 9000, Quality standards.

    14. It is worth noting that when standards activities are initiated, unless they are identified with a specific company (OOXML and Microsoft, Java and Sun, PDF and Adobe), the ubiquitous “they” are blamed (rarely credited) for starting something. In many cases, to paraphrase Walt Kelly, “They is us.” And this is nowhere more true than in a corporation where discovering that someone within your company has either started or endorsed what appears to be a counterproductive standardization effort is nearly a monthly occurrence.

    15. A simple test of the legality that is used when something like this is brought up is “Could you defend this in a jury trial in a rural community?” Many of the arguments used to defend standardization are complex and problematic—and there is always the suspicion that something illegal is happening.

    16. T. Egyedi, "Why Java Was Not Standardized Twice," HICSS, 5 (2001): 5015, 34th Annual Hawaii International Conference on System Sciences (HICSS-34). http://www.computer.org/portal/web/csdl/abs/proceedings/hicss/2001/0981/05/09815015abs.htm.

    17. See Carl Cargill, et al., in “Special Issue: JAVA,” in StandardView: The ACM Standards Journal, 5, no. 4, December 1997.

    18. Egyedi, 1.

    19. Egyedi, 1, note 1.

    20. The standardization process would have cast a degree of uncertainty into the user base, since standards committees are notoriously poor at both maintaining schedule and avoiding feature creep. If Java had been infected with uncertainty during this time, the question of how it would have evolved can legitimately be raised.

    21. In the formal standardization arena of JTC1, where Java was being discussed for standardization, the companies that were most prominent at the time were Microsoft, HP, Oracle, IBM, Intel, and Siemens. The question is whether they would have provided a better home for the standard (since they would have “managed it” via a standards-based consensus process in multiple committees).

    22. As an interesting side note, the JCP survived Sun’s sale to Oracle and continues to exist as a standardization venue, with over 1,000 members participating in one form or another (and usually paying dues).

    23. P. Hoffman, RFC 4677, “The Tao of the IETF,” The Internet Society, 2006. http://www.rfc-editor.org/rfc/rfc4677.txt.

    24. This is where you hear the line that, “The great thing about standards is that there are so many from which to choose.” Normally, the person or organization spouting this pap wouldn’t ever think of saying, “The bad thing about roads leading to Rome is that there are so many of them.” The intent of standards is not to make the insecure secure. It is to allow the market to exercise a structured choice.

    25. This is where venue shopping comes into play.

    26. Phase Relationships in the Standardization Process (August 1990). http://nighthacks.com/roller/jag/resource/StandardsPhases.html (accessed April 30, 2011).

    27. The idea of anticipatory standardization was first postulated in Carl F. Cargill, Information Technology Standardization: Theory, Process, and Organizations (Bedford, MA: Digital Press, 1989). It identifies standardization that occurs prior to productization, but after proof of concept. (footnote in original text)

    28. Martin B. H. Weiss and Michael B. Spring, Selected Intellectual Property Issues in Standardization, Department of Information Science, University of Pittsburgh, Pittsburgh, PA 15260, September 1992, and presented at the Twentieth Annual Telecommunications Policy Research Conference, Solomons, MD, September 12–14, 1992, p. 1. The paper (at http://www2.sis.pitt.edu/~spring/papers/stdip1.pdf) argues convincingly that the nature of anticipatory standards has an impact on the nature of IPR in standardization. (footnote in original text)

    29. Carl F. Cargill, “Intellectual Property Rights and Standards Setting Organizations: An Overview of Failed Evolution,” submitted to the Department of Justice and the Federal Trade Commission and presented at the FTC/DOJ Hearing on Standard-Setting Practices: Competition, Innovation and Consumer Welfare, March 27, 2002. http://www.ftc.gov/opp/intellect/detailsandparticipants.shtm#May%201%3A.

    30. See http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-234-v3.pdf for a copy of ECMA 234, including the history and rationale for ECMA 234.

    31. http://en.wikipedia.org/wiki/Embrace,_extend_and_extinguish (cited without the footnotes contained in the original citation; accessed on May 4, 2011).

    32. Larry Masinter, RE: Feedback on Internet Media Types and the Web, W3C. http://lists.w3.org/Archives/Public/www-tag/2010Nov/0057.html (accessed May 4, 2011).

    33. Or, in the case of Rambus and JEDEC, you could be in for a while and drop out before you had to license your technologies. The FTC and Rambus underwent seven years of litigation before it was decided that Rambus didn’t violate any law, but rather took advantage of a loophole in the IPR policies of JEDEC, a standardization consortium.

    34. Chabot Jacques-Olivier, Director, General Services and Marketing, ISO, Tuesday, February 18, 2003, 4:18 AM. http://xml.coverpages.org/INCITS-in030467.html (accessed May 5, 2011).

    35. Wen Gao and others, “AVS — The Chinese Next-Generation Video Coding Standard,” Audio and Video Coding Standard Workgroup of China. http://www.avs.org.cn/reference/AVS%20NAB%20Paper%20Final03.pdf.

    36. The issue of what codec to use is one of the most difficult issues that currently plague the entire industry. H264 is royalty-encumbered, and this is anathema to many who currently participate in web and publishing standards. The inclusion of video in publishing—with the advent of electronic publications and the success of the Kindle™ and iPad™—has complicated the publishing arena with IPR and associated issues. The International Digital Publishing Forum (IDPF), the standards organization associated with EPUB, opted not to include a recommended (or normative) codec (much the same as W3C’s HTML5 Working Group) because of the tension in the market over VP8 and H264 IPR claims, as well as the widespread use of H264. There does not appear to be an easy way of settling this issue in the industry at this time, and it will continue to plague standards developers for the next several years. Also, as a side note, my feeling is that it will get worse before it gets better because of the large amounts of potential revenue involved.

    37. This statement will no doubt cause controversy, but the reality is that during and since the battle, Microsoft’s participation in standardization—from assisting countries in joining JTC1 to increasing its own membership and engagement in JTC1 itself, as well as emphasizing the role of ISO and IEC in international standardization—has increased significantly and positively.