Of penguins, innovation and technological systems∗

Riccardo Leoncini
IDSE-CNR, Via Ampère 56, 20131 Milano
[email protected]

23/5/2001

Contents

1 Introduction
2 Brief Linux history
3 The Linux network development
4 The system perspective
5 The Linux technological system
6 Concluding remarks
7 References



∗ This paper develops from an earlier one co-authored with G. Antonelli and N. De Liso, where its main ideas were presented in a very concise form. I am grateful to both of them for hints and suggestions that led to this draft. The present work was partly developed within the Progetto d'Interesse Nazionale "Infrastrutture, competitività e livelli di governo". I am grateful to participants in the workshop "Infrastrutture istituzionali e sistemi tecnologici: nuova economia, crescita e sistemi locali di produzione", held in Lecce, 19-20 January 2001. The usual caveat applies.


Abstract

The aim of this paper is to show, from a system perspective, how Linux has developed to become a major player in the market for operating systems. After having established itself as an effective stronghold in the segment of web-based computers, it is now targeting the market for personal computers. The impressive development of Linux, in terms of both quality/reliability and impact on the market, has also posed a serious challenge to heterodox economics. In fact, its main characteristics are that it is produced by a set of decentralised developers, under a license allowing everybody to access the source code and reuse it for personal purposes, and that it is given away for free. Despite these characteristics (or rather because of them), Linux has reached a considerable commercial status, with increasing rates of growth and commercial penetration. According to a system view, the development of Linux follows a pattern of co-evolution of several tightly related aspects: in particular, it is based on a series of innovations (technological, economic, organisational and institutional) that work only insofar as they develop within the coherent framework furnished by the technological system.


1 Introduction

The last few years have seen Linux, the operating system (OS) whose logo is a penguin, move from the relative obscurity of a hacker's toy/development tool to a software product which is now in the spotlight of the business community and the press. From free software available to anyone willing to work on improving it, Linux is now well on its way to becoming a mainstream commercial product. This change is even more impressive once one realises that Linux is free software, which no one is obliged to pay for in order to install it. Since the first version of the kernel was released for free download on October 5th 1991, this OS has attracted a huge number of people, from the first very skilled hackers to newbies merely curious to see what a free OS might look like. The rapid diffusion of Linux has so far mainly been in the area of Internet services (such as web, ftp and mail servers). However, since the development of reasonably stable and easy-to-use graphical interfaces1 in recent years, Linux has been addressing the standalone desktop market quite aggressively. Although no reliable data are available, a quick search on the net gives Linux at around 5-7% of the market dominated by Microsoft, with impressive rates of growth.

1. The best and most diffused of which are, respectively, the GNOME (http://www.gnome.org) and the KDE (http://www.kde.org) desktops.


It is therefore now not uncommon to find PCs (in increasing numbers) with more than one resident OS, and PC vendors selling pre-installed Linux boxes. The serious challenge that Linux, which is however still in a developmental phase, could pose in the medium term to the Microsoft Windows-based monopoly has of course attracted many people and companies with the most varied backgrounds and the most diverse motivations. This has created a hype around Linux and its main contributors. The main question is to explain how a set of 'competitive' developers could evolve from 'nowhere' to eventually build a very strong commercial product. Why on earth should some rational maximising agents give away for free the products they have been working on so hard? Why on earth has a project as complex as the development of Xfree86,2 which is commercial by definition (since no hacker obviously needs it for his/her work), been produced, and why does it undergo regular development? These are some questions that, as will be clear from what follows, are in need of a more articulated answer than an orthodox one. In fact, if, following introductory textbooks, a producer decides the quantity to put on the market by equating marginal cost to marginal revenue, what can be said of products that are mainly characterised by zero3 marginal costs? Two polar cases can canonically be distinguished: monopoly and perfect competition. In the former case marginal revenue can well equal the zero marginal cost. The monopolist still keeps a positive price, which is directly related to the elasticity of demand: the more rigid the demand, the higher the surplus the monopolist extracts over marginal costs. In the latter case, the market is characterised by a large number of producers: there is no equilibrium quantity, the efficient price is zero, and thus the good should be delivered for free. The easiest way to cope with this situation is to justify (usually ex post) the existence of monopolistic firms in the various segments of the new information technologies.4
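To make the textbook claim explicit, the following is a standard derivation added here purely for illustration (it is not part of the original argument): the Lerner condition links the monopolist's markup to the elasticity of demand, and with zero marginal cost the whole price is markup.

\[
\frac{p - MC}{p} \;=\; \frac{1}{|\varepsilon|}\,, \qquad
MC = 0 \;\Rightarrow\; |\varepsilon| = 1 \ \text{at the revenue-maximising price};
\]

\[
\text{e.g., with linear demand } p = a - bq \text{ and } MC = 0:\quad
\max_q \,(a - bq)\,q \;\Rightarrow\; q^{*} = \frac{a}{2b}\,, \quad p^{*} = \frac{a}{2} > 0 .
\]

So the monopolist keeps a positive price even when an extra copy costs nothing to produce, and a more rigid demand (a smaller elasticity at a given price) translates into a larger markup share, which is the textbook sense of the claim above.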

2. Xfree86 is the X-windows server for Linux, i.e. the graphic environment where 'mouses can drag 'n drop'; it needed gigantic resources to be developed, being on the scale of something like 1.5 million lines of code.
3. Indeed, once a piece of software has been written, the additional costs of shipping it to a customer are only related to transport and packaging, not directly to production.
4. Of the many corollaries of this result, one needs to be recalled here, related to monopoly as the market structure best capable of producing innovation in a dynamic setting. Such is, for instance, what has been termed the Schumpeter Mark II (Schumpeter, 1943) vision of the innovative process (see for instance Kamien and Schwartz, 1982).


Indeed, this case is quite easy to analyse and rationalise, and there is 'not very much' to say about it from the point of view of economic theory.5 A completely different picture emerges in the case of a setting with many producers. In this case, according to the textbook orthodox vision, there would be no room for production. In a situation close to perfect competition there is no possibility of extracting surplus over marginal costs, thus ruling out any traditional solution to the problem other than production being given away for free. However, in some cases it has been possible for a small-scale organisation of the market to emerge. Indeed, by resolving a host of related problems, the successful production of profitable goods and services is one of the main characteristics of the new economy, as far as software development, in particular, is concerned. I will therefore suggest that Linux's outstanding results are emergent properties of a technological system that was born around it, very much in the same way as Microsoft's success, though in an opposite way. In particular, the idea of a technological system6 is very useful in understanding the phenomena that are taking place under the label 'new economy'. Indeed, the development of Linux is characterised by a set of interrelated innovations in many (previously unrelated) fields: technological, organisational, economic, institutional. This set of innovations has undergone a complex process of co-evolution, through which the various pieces have fallen into place to build a very powerful techno-economic arrangement, one forecast to be the most serious threat to the incumbent OS for PCs.7 The paper is organised as follows. In Section 2 a brief Linux history is presented. Section 3 contains a discussion of the main characteristics of the way in which Linux was developed.

5. Completely different is the case of justifying the existence of a monopoly in front of an Antitrust Commission. See, for instance, the very well documented case of US versus IBM (Fisher et al., 1983).
6. I will not refer in this paper to the idea of a TS as a nationally bounded set of relationships among agents and institutions (e.g. Lundvall, 1992; Nelson, 1993; Edquist, 1997), but rather to those ideas that refer to sectoral TSs, or better to the clustering of relationships around either a single technology or a set of related techno-economic features (e.g. De Liso and Metcalfe, 1996; Carlsson, 1995). For a survey, see Leoncini (2000).
7. It is important to stress that it is the co-evolution of this set of interrelated features that produces a successful outcome. In fact, there exist other OSs (like, for instance, BEOS, FreeBSD, etc.) that, by failing to integrate and co-develop the different aspects involved in the construction of such a complex object as an operating system, did not succeed, in spite of their very good (in most cases excellent) technical performances.


Then in Section 4 the main aspects of a system approach are spelt out, leading to the discussion of the Linux technological system in Section 5. Conclusions follow.

2 Brief Linux history

This is, of course, a very brief and idiosyncratic story, intended to cover neither all the technicalities of the kernel development, nor the development of the various pieces of software that make up a working Linux distribution (on average 400 software packages). The Linux OS8 has been developed from scratch by a network of hackers co-ordinated by the person now widely acknowledged as the Linux 'father': Linus Torvalds. On the 5th of October 1991, Linus Torvalds posted the following message on the comp.os.minix newsgroup:

Do you pine for the nice days of Minix-1.1, when men were men and wrote their own device drivers? Are you without a nice project and just dying to cut your teeth on a OS you can try to modify for your needs? Are you finding it frustrating when everything works on Minix? No more all-nighters to get a nifty program working? Then this post might be just for you. :-) As I mentioned a month ago, I'm working on a free version of a minix-lookalike for AT-386 computers. It has finally reached the stage where it's even usable (though may not be depending on what you want), and I am willing to put out the sources for wider distribution. It is just version 0.02 (+1 (very small) patch already) but I've successfully run bash, gcc, gnu-make, gnu-sed, compress, etc. under it.

Many hackers answered that message, and started collaborating with Torvalds, submitting software, reporting bugs, proposing solutions to the reported bugs, etc.

8. In the following, the term Linux will be used to signify the whole set of programs that constitute the entire OS. However, it must be pointed out that Linux is in reality a kernel, and that it is not possible to use a kernel by itself, but only as part of an integrated set of programs. Thus, what is normally referred to as Linux is a combination of the Linux kernel with the GNU OS. More than this, it could be said that the OS is almost all GNU while Linux is 'only' the kernel. Hence, it should more correctly be called GNU/Linux. The GNU Project's contribution to Linux distributions is quite remarkable. Indeed: "One CDROM vendor found that in their "Linux distribution", GNU software was the largest single contingent, around 28% of the total source code, and this included some of the essential major components without which there could be no system. Linux itself was about 3%." (Stallman, 2000). Having said that, for reasons of simplicity and understandability for a less technical audience, I will stick to the best-known name (Linux). For a thorough argument on this point, see Stallman (2000). For more details on the GNU Project see below.


The number of people actively involved in the project soon became very large, eventually reaching tens of thousands.9 Linus Torvalds retained for himself the (leading) role of keeping the network operative and of deciding (after discussion, of course) which pieces of new code should enter the next version of the OS. The process that Torvalds put in motion was so vast and interactive that at times new Linux versions were released weekly or even almost daily. Table 1 reports the history of the different series released from version 0.01. Version 1.0 (consisting of 175,000 lines of code) of the kernel was released in March 1994 and version 2.0 (consisting of 780,000 lines of code) in June 1996. Just to give an idea, a full Linux distribution is of the order of 10 million lines of code. The current stable10 version available is 2.2, while version 2.4 was released on the 4th of January 2001.

Table 1: Linux releases

Release series   Date of release   Number of releases   Time to final release
0.01             9/91              2                    2 months
0.1              12/91             85                   27 months
1.0              3/94              9                    1 month
1.1              4/94              96                   11 months
1.2              3/95              13                   6 months
1.3              6/95              115                  12 months
2.0              6/96              34                   24 months
2.1              9/96              141                  29 months
2.2              1/99              14                   9 months
2.3              5/99              60                   12 months

Source: Moon and Sproull (2000).

So far, there are more than thirty Linux distributions in English, plus a host of distributions available in different languages, Russian (Cyrillic), Chinese, Spanish and Portuguese included.11

9. Just to give an idea of the order of magnitude, it is estimated that the Minix community to which Torvalds' message was addressed numbered around 40,000 people at that time. About 30 people contributed in various ways to the development of the kernel in the first two months, becoming more than 15,000 by 1995 (Moon and Sproull, 2000).
10. Stable versions have even minor numbers, while development versions have odd minor numbers.
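As a purely illustrative aside (not from the paper), the even/odd convention recalled in footnote 10 can be checked mechanically from a version string. The snippet below is a minimal sketch assuming kernel versions written in the 2.x.y form used at the time; the function name is invented for the example.

    def kernel_branch(version: str) -> str:
        """Classify a 2.x-era kernel version as stable or development.

        Follows the convention noted in footnote 10: an even minor number
        marks a stable series, an odd one a development series.
        Illustrative sketch only.
        """
        parts = [int(p) for p in version.split(".")]
        minor = parts[1]
        return "stable" if minor % 2 == 0 else "development"

    # Examples consistent with the text: 2.2 is stable, 2.3 is development,
    # and the newly released 2.4.0 belongs to a stable series.
    print(kernel_branch("2.2.19"))   # -> stable
    print(kernel_branch("2.3.51"))   # -> development
    print(kernel_branch("2.4.0"))    # -> stable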


The architectures supported are many, such as all the Intel-based ones, Apple PowerPC, Compaq Alpha AXP, Sun SPARC and UltraSPARC, Motorola 68000, IBM S/390, DEC VAX, etc. By 1997, Linux started attracting attention from business:
• both big (e.g. IBM, Sun, Oracle) and medium-sized (e.g. Corel) commercial enterprises began porting various applications to Linux;
• hardware vendors (such as Dell, Compaq/Digital, Hewlett Packard, IBM, etc.12) started offering Linux-based server solutions and pre-installed Linux systems for desktops and laptops;
• large companies started partnerships with Linux distributors;
• Linux companies, such as Red Hat (a Linux distribution) and VA Linux (a hardware vendor), entered the stock market.
In the beginning, support for Linux was largely provided by hackers interacting with each other over the Internet. Today, such interaction is still a major source of Linux support. But, for commercial users, more conventional forms of support are becoming available. Incidentally, this amounts to one of the most significant sources of income for most Linux distributors (Red Hat, SUSE, Caldera, etc.), in the form of for-fee support and consulting. Moreover, certification is an expanding area of market opportunities (maybe one of the most promising, if not the most), as a free OS raises doubts and questions about the availability of 'true' providers of various services, from installation and configuration to online support, etc. A different support industry is also emerging. Take, for example, the case of LinuxCare, a new company that specialises in providing support to Linux users. It contracts both with Linux vendors and directly with user organisations to provide Linux support, consulting, systems development, implementation, integration and training. The company supports all of the Linux distributions and most other important platforms.

11. See the detailed and updated list at http://www.linux.org/dist/index.html.
12. A very up-to-date list is at http://www.linux.org/vendors/systems.html.


However, support is possibly the great unknown about Linux. One of the obvious concerns about the Open Source Software (henceforth OSS)13 model is whether a sufficient level of support for commercial users is available in the required amount/quality and at an acceptable/affordable price. If this were not the case, the cost of support would be too high, offsetting the low cost of acquisition of a Linux-based system.14 It remains true that one of the crucial conditions for Linux to succeed commercially is that customers can obtain an adequate level of support for their activities. Moreover, the Linux user, who is on average PC 'literate', is aware that there is a massive amount of support documentation freely available online, and this adds to the low cost of entry and of upgrading. Indeed, an OS is never a static piece of work but a dynamic one, which continuously undergoes improvement and updating. Again, as far as this aspect is concerned, free software is free for all, while for commercial software every update has to be paid for.15 Despite these kinds of problems, the standard reached by Linux is so high that, according to some scattered16 data, Linux is now the most popular platform for server applications (in conjunction with the Open Source Software Apache, the most diffused web server software). Its actual figures are somewhere between 17% and 24%17 of all computer servers, which account for more than 10 million computers. Its rate of growth is the highest among OSs, and it is starting to threaten Windows-based OSs in the most lucrative market of desktop PCs. Here its share should be around 5-7%, with very impressive yearly rates of growth of around 100% and more.

13. More details in the following.
14. This would not be a forbidding condition for the diffusion of Linux-based systems, since it is a sort of well-known 'stylised fact' that lower purchase prices attract customers anyway.
15. The most notable case is obviously that of the MS Windows OS. In fact, it must be noted that from the point of view of a rational consumer there is no rationale at all in migrating through the various releases of Windows, from Windows 95 to Windows 95 OSR2, then to Windows 98, to Windows 98 SE, and finally to Windows ME, with various patches added in between these migrations. Every one of these releases had to be paid for, while leaving the customer, at the end of the day, with the same OS, only more stable thanks to 5 years of debugging and upgrading. To put it in other words, some commentators reckon that the last release (Windows ME) is all but what Windows 95 should have been when it was first released, and the 'rational' customer had to pay for 5 upgrades to obtain what he/she expected from the first OS purchased.
16. It is obviously impossible to have reliable data on the diffusion of an OS which can be downloaded for free.
17. But Netcraft, Inc. (http://www.netcraft.com) states that as of May 2000 the figure was 30% of all servers.


Indeed, the impressive rate of diffusion of Linux, together with its increasing level of reliability, has so deeply concerned MS that an analysis was internally commissioned to evaluate the potentialities, the pros and cons, and the possible strategies to counter such an emergent phenomenon. In fact, in October 1998 two internal memoranda on MS strategy against Open Source Software (Valloppillil, 1998) and Linux (Valloppillil and Cohen, 1998) were secretly handed to a prominent OSS developer (Eric Raymond), who published them on the Open Source Project webpages. In those documents the authors offer a thorough examination of the potentialities and shortcomings of OSS and of Linux. The documents are very illuminating for many reasons, and are worth reading in themselves. However, for this paper's sake it suffices to underline some key points that are useful for the following discussion. The authors point out, among the many potentialities, that Linux has managed to achieve "commercial quality". That is, software products given 'away for free' are no longer perceived as lacking those 'virtues' characterising commercial software, i.e. they are no longer perceived as second-best choices. This is a brand new and very challenging aspect, especially when it is connected to a very strong and so far well documented capacity for developing large-scale and complex products.18 Moreover, the increasing complexity has virtuous multiplicative effects when it is linked to the asymptotic degree of interconnection allowed by the development of the Internet. This feature further enhances the strength of OSS projects. These peculiarities are quite concerning, since the processes that are put in motion are in the long run potentially disruptive for the established software market. In fact, in such a product market the network externalities put in motion are so overwhelming and spread so rapidly that the winner is likely to become a sort of 'natural monopolist' by sweeping away the competitors. The market would thus be locked into a single standard configuration (the MS case is obviously a very well-known antecedent). A brand new threat is thus posed to the incumbent, the nature of which is radically different from what was previously experienced. In fact, so to say, the rules of the game are completely changed by a new competitor against which the usual tools of 'standard' competition are completely useless.

18. And this capability is rather novel, as far as this kind of software is concerned. Indeed, previous projects developed as free software were thought to be limited in size and complexity by the necessarily limited involvement of developers who could not extract an earning from their product.


And the authors of the MS internal documents seem to be well aware of this. Indeed, they argue that:

the real key to Linux isn't the static version of the product but rather the process around it. This process lends credibility and an air of future-safeness to customer Linux investments. [Hence, it becomes quite evident that] to understand how to compete against OSS, we must target a process rather than a company. (Valloppillil, 1998)

And indeed, Torvalds too is quite clear about this crucial point when stating that:

The power of Linux is as much about the community of co-operation behind it as the code itself. If Linux were hijacked — if someone attempted to make and distribute a proprietary version — the appeal of Linux, which is essentially the open-source development model, would be lost for that proprietary version. (Torvalds, 1999, p. 109)

Moreover, to make the point crystal clear, he states that:

Making Linux freely available is the single best decision I've ever made. There are lots of good technical stuff I'm proud of too in the kernel, but they all pale by comparison. (Torvalds, 1998)

The problem at stake is how to explain the dynamic process through which Linux has been developing along such unusual co-ordinates, so as to become a new entrant in the market that is perceived as such a threat by the incumbent that a radical shift in strategy is called for from within the incumbent itself. Therefore, the main question is how Linux developed and grew stronger than it apparently seemed at the beginning. The next Section is devoted to this task.

3 The Linux network development

As previously said, the development of Linux is characterised by two apparently contrasting features: the decentralised, quasi-anarchic organisation of the work and the outstanding quality level, which outperforms commercial packages.

Indeed, it is generally held that high-quality software can result only from a rational organisation, tightly controlled by a linear, very centralised management, with a few very gifted individuals working in very close and compact groups, and with few releases. This is considered to be the best way to deal, in particular, with large projects like operating systems or, for instance, text processors, desktop publishers, and so on. In fact, the basic tenet of this approach is that, when a certain dimensional threshold is reached, projects become so complex to deal with that only a tight grip on code production can obtain good results. This 'tenet' is so deeply entrenched in developers' working style that even an enthusiast of OSS like Eric Raymond states that:

I had been preaching the Unix gospel of small tools, rapid prototyping and evolutionary programming for years. But I also believed there was a certain critical complexity above which a more centralized, a priori approach was required. I believed that the most important software (operating systems and really large tools like Emacs) needed to be built like cathedrals, carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time. (Raymond, 1998)

As a result of this approach it follows, almost as a corollary, that both architectural development and the debugging process are painstaking activities taking months to discover hidden holes, thus making it impossible to release patches and upgrades frequently. In a very famous and highly cited paper (Raymond, 1998), this is called the cathedral style of software engineering. Like monks, gifted developers work in splendid isolation from the outside and, with a linear organisational approach, try to solve complicated puzzles one after another, much in the manner of patient watchmakers. This attitude is well entrenched in the developer community and naturally suits commercial software developers. Therefore, the cathedral style is here depicted as one privileging property rights and secrecy, with a strong (legal and technical) attitude towards defending copyrights, with small groups of well-paid software developers, few beta releases, and so on. The economic conduct of the 'cathedral' also has very precise 'textbook' references, with behaviours such as undercutting competitors, bundling products to force competitors out of their narrower competence niches, battling for standards, etc.19

19. And indeed, as an example, Microsoft has based its conduct (and its success) essentially upon two instruments: a very effective utilisation of the 'standard' tools available to a monopolistic firm (one case for all is the release for free of the MS browser to counter, very effectively indeed, the arch-rival Netscape; see Windrum, 2000), and a very clever utilisation (as will be explained below) of a system approach to software production.


This attitude has been completely upset by the development of Linux. In fact, the way Linux has been developed was based, again as Raymond (1998) termed it, on the opposite of the cathedral style: the bazaar style.20 Indeed, when Linus Torvalds posted his message in 1991, he put in motion a completely new type of process (new at least because it worked on an unprecedented scale). Linux development was based on a few principles which turned out to be the opposite of the ones upon which a development plan was thought to have to be based. The first was making the source code freely available. In this way, everyone could work on a perfectly customisable piece of software. To this adds the fact that the potential users of the kernel were almost all developers. Thus, since Torvalds was able to treat users as co-developers, the code improved at unparalleled rates. Moreover, this allowed for very early and frequent releases of the code. These processes were to benefit from increasing returns due to the network externalities granted by the use of the Internet. Therefore, the rate of growth of Linux was something never experienced before. The sense of community that developed among the developers actively involved in the project granted a pool of skills on a scale unaffordable to any of the commercial software producers. The process of development and debugging was literally transformed from a time-consuming, painstaking process of close scrutiny of literally thousands of lines of code, to the interactive, quick cleaning up of the code by thousands of eyes, or, according to "Linus's law" as Raymond (1998) calls it: "Given enough eyeballs, all bugs are shallow."


Of course, Linus Torvalds' role was essential in keeping all the pieces from falling apart. Moreover, he was clever enough to understand which pieces of code to include in the kernel, and to maintain his original architectural design. Again Raymond argues that:

Linux didn't represent any awesome conceptual leap forward. Linus is not (or at least, not yet) an innovative genius of design in the way that, say, Richard Stallman or James Gosling (of NeWS and Java) are. Rather, Linus seems to me to be a genius of engineering and implementation, with a sixth sense for avoiding bugs and development dead-ends and a true knack for finding the minimum-effort path from point A to point B. Indeed, the whole design of Linux breathes this quality and mirrors Linus's essentially conservative and simplifying design approach. (Raymond, 1998)

Indeed, for instance, the Linux architecture was highly criticised as backward when it was first released: it is a monolithic kernel, while microkernels were considered to be the future of kernel development.21 If this process is seen from the standpoint of evolutionary theory,22 it is quite easy to realise how nicely the bottom-up Linux development fits into this framework.23 Indeed, the results of Linux development seem to offer one of the best examples of properties emerging from very varied (even contradictory) behaviours at the micro level. The resulting system is thus characterised by emergent properties which are neither intrinsic qualities of the system as a whole, nor embedded in the behaviours of the micro constituents. Therefore, it is the interactions themselves among agents, who are certainly not pursuing an architectural design elaborated a priori, that generate the structural qualities observed emerging at system level. The dynamics of this particular evolution is thus amenable to a sort of biological analogy in which a mechanism of trial and error provides a diversity generation-selection process which gives rise to the evolutionary drift. Hence, the process of evolution produces dynamic stability at the edge of chaos between two diverging forces: a tendency towards sedimentation and standardisation, and a dynamic development that opens up new possibilities for undermining existing solutions.

20. The Raymond parable of the cathedral vs. bazaar dichotomy is very sketchy, and although this vision is criticisable (as, for example, in Eunice, 1998 and Bezroukov, 1999), in my opinion it still highlights some of the most important features characterising two radically different approaches to software building. It must however be kept in mind that, obviously, some features of one approach can be found in the other one, and vice versa. For instance, the high level of centralised decision making over the actual Linux kernel could be seen as a 'cathedral' feature, while Microsoft shows, on some issues, a certain degree of decentralisation. However, as will be clearer in the following, the defining characteristic of the bazaar style with respect to the cathedral one lies in the different organisation of parallel work. Indeed, while in a cathedral-type organisation parallel efforts are constantly checked in order to be minimised (to minimise waste and inefficiency, and thus costs), in the bazaar style this does not (and indeed cannot) happen. In short, this is the result of a bottom-up, (semi-)decentralised, non-bureaucratic 'work organisation'.
21. See, on this, the Tanenbaum-Torvalds Debate in Appendix A of DiBona et al. (1999a).
22. For a neoclassical account of OSS development, see Lerner and Tirole (2000).
23. For convincing arguments on the evolutionary nature of Linux development see, for instance, Raymond (1998) and Kuwabara (2000).


The main characteristics of this process, again, are in line with evolutionary theory: variety is constantly generated by a huge population of developers willing to contribute to various projects; selection is obtained mainly through a process of peer review deciding which pieces of code are eventually to be added; a certain degree of waste and inefficiency is inherent in the process, which, however, generates high levels of efficacy; the speed of the overall system development is very high because of the massive degree of parallel processing; and the complexity of the products made available is quite high because of the bottom-up, highly non-linear nature of the process, characterised by intense feedback.24 Variety is generated by several aspects, according to the different stages of the project's development. At the beginning, the population of contributors is made up mainly of users/co-developers, i.e. people who write code to solve particular personal problems for their own use.25 In a second developmental stage of Linux, the contributions come, on the one hand, from developers responding to different incentives,26 and, on the other hand, from low-end users signalling bugs and asking for easy-to-use software and documentation. Selection is based mainly upon a peer review process. Upgrades are submitted, then evaluated, and only if significant are they included in the next software release. In this regard, the process has quite correctly been assimilated (Raymond, 1998) to the scientific selection process.27 The process of selection is obviously a hierarchical one: first of all, Linus Torvalds in general has the last say about the inclusion of a piece of software in the kernel (and the leaders of the projects for the various applications do likewise). Secondly, there is an obvious skewness in the quantity and quality of the contributions; hence there exists a so-to-say protective belt around the leader, made up of the most skilled developers.28

24. As will be clear in the following, the positive feedback of the Internet was possible only because the Linux developers committed themselves to releasing the source code. Otherwise, the Internet is neither a necessary nor a sufficient condition.
25. Just as an example, a German programmer wrote the German keyboard driver for himself, and it was subsequently included in the next Linux release (cited in Moon and Sproull, 2000).
26. For instance, they could be willing to signal their ability with a 'business' aim. See, for instance, Lerner and Tirole (2000).
27. More than this, OSS advocates claim that "science is ultimately an Open Source enterprise" (DiBona et al., 1999b, p. 7), since information sharing is the basis of the replication which is the ultimate test of a scientific result.
28. For instance, while Linus Torvalds is now entirely focused on the experimental kernel releases, Alan Cox is responsible for the stable ones.


This is, more or less, the same for every OSS project. However, this hierarchy is quite different from the hierarchy of a private company, since the former is itself a result of the selection process. There is thus a double direction in the causal relationships. Only those who survive several selection steps move higher in the hierarchy, and in turn eventually become selectors. There is thus a continuous, although not very rapid, change in the hierarchy (in some cases, there are explicit rules concerning the rotation of project leadership). Moreover, no one is in a position to neglect contributions, even those contrasting with the maintainer's policy or strategic direction. There are many examples of changes that have been forced through against the leader's will. Finally, the process of decentralisation of the different tasks can be pushed very far, because the modularisation of the software allows for the existence of many 'quasi-independent' sub-systems. The overall result of these unintentional behaviours is an emergent property. This means that, although the process is based on micro-redundancy and duplicated micro effort, since the results are obtained at system level as an emergent property, efficiency and efficacy are not properties inherent in the single individuals but are global, resulting from local non-linear interaction and feedback. Moreover, there are quite clear advantages from parallel effort (in particular as far as code debugging is concerned, which is one of the most cumbersome activities, if not the most cumbersome, of a developer) in terms of speed and neatness of solution.29 Hence, this process is characterised by waste and inefficiency at the micro level, but efficiency and efficacy at the macro level. The evolutionary view has proved to be very effective and appealing, although some issues call for a further step in the analysis. Indeed, there are some difficulties in coping with issues related to how order emerges from the chaotic micro behaviours, and to the motivations of a single developer to contribute. In fact, if the evolutionary metaphor is well suited to explain the speed and the quality of Linux development, what we are left with is to explain how order emerges from chaos. Apparently no endogenous forces exist that can force an OSS project to converge towards a regular developmental path, rather than be overwhelmed under the burden of ever increasing complexity.

29. Incidentally, given the nature of the process, there is obviously no point in arguing that a patch to the original code, being made by someone else, could introduce another hole somewhere else, since one of the main characteristics of the system is its trial-and-error behaviour, and this is precisely the reason why such massive resources are devoted to this aspect.
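As a purely illustrative aside, not taken from the paper, the advantage of massively parallel debugging can be given a back-of-the-envelope form: if each of N reviewers spots a given bug independently with probability p (a strong simplifying assumption, with purely hypothetical numbers below), the chance that the bug survives review shrinks geometrically with N.

    # Toy model: probability that a bug survives N independent reviewers,
    # each detecting it with (assumed) probability p.
    def survival_probability(p: float, n_reviewers: int) -> float:
        return (1.0 - p) ** n_reviewers

    p = 0.02  # assumed per-reviewer detection probability, illustrative only
    for n in (1, 10, 100, 1000):
        print(f"{n:>4} reviewers: bug survives with probability "
              f"{survival_probability(p, n):.3g}")

Under these assumed numbers, a bug that escapes a single reviewer 98 times out of 100 still survives a hundred reviewers only about 13% of the time, and a thousand reviewers almost never. This is merely the arithmetic intuition behind "given enough eyeballs, all bugs are shallow", not a claim about actual kernel defect rates.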


Moreover, why should a developer contribute to this chaotic project, being faced, on the one hand, with uncertainty in terms of results, and, on the other hand, with certainty in terms of the direct remuneration (zero) of his/her effort? The route suggested in the rest of the paper is to appeal to a system perspective. In so doing, an explanation will be provided which is consistent with the previous evolutionary view,30 based on the role of the institutional interface in keeping, on the one hand, the system from exploding because of increasing complexity, and, on the other hand, in supplying contributors with stable and reliable patterns of relationships upon which it is possible to build long-term commitments to deliver and to further refine a certain piece of software.

4 The system perspective

In this section a very concise description of the system approach will be presented, focusing only on those aspects that are directly related to the paper. Thus, a rather idiosyncratic view of its main features is this section's aim, without pretending to be exhaustive. In particular, the questions related to network externalities, to co-evolution, and to the importance of the institutional arrangement will be briefly explored, in order to furnish a background to the argument that will be put forward in the following.31 A system is based on linkages, i.e. on flow exchanges among agents and networks of agents. To put it more strongly, it is the existence of flow exchanges among nodes that makes for the existence of a system perspective at all. And it is the establishment of a network of relationships among agents and institutions, and their feedback, that generates the process of dynamic change. It is therefore crucial to consider the historical state of a system's connections. Within certain systems certain channels of transmission might have developed rather than others; they might have been developed in different historical moments as responses to different stimuli, and therefore, although morphologically equivalent, they could well exert different influences and thus produce different outcomes.32

30. For explicit analyses investigating the evolutionary basis for a system approach, see Lundgren (1995), McKelvey (1997), Saviotti (1997).
31. See the introduction for more thorough references.
32. Moreover, there is the need to consider the direction and intensity of the connections. Each system has its own network of linkages, which is characterised by: (i) the fact that one connection exists or not, and the fact that a connection links the same couple of agents or institutions; (ii) the direction of the connections; (iii) the fact that in one system one connection carries more flows than another one, while the reverse holds for another system (i.e. the intensity of the linkages); (iv) the fact that the connections and the flows travelling along them coincide or not; (v) the fact that one connection is mediated or not by the institutional interface. All these are crucial factors determining the system dynamics.
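Purely as an illustration of the kind of object footnote 32 describes (the paper itself does not formalise it), a system's linkages can be sketched as a small directed, weighted graph in which each edge records its direction, its intensity and whether it is mediated by the institutional interface. All names and numbers below are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Link:
        source: str        # node the flow leaves from
        target: str        # node the flow reaches (direction of the connection)
        intensity: float   # relative size of the flow carried by the linkage
        mediated: bool     # whether the institutional interface mediates it

    # A hypothetical technological system with a handful of nodes and linkages.
    system = [
        Link("kernel maintainers", "distributors", intensity=0.9, mediated=True),
        Link("distributors", "commercial users", intensity=0.6, mediated=False),
        Link("commercial users", "kernel maintainers", intensity=0.3, mediated=True),
    ]

    # Two of the system-level features footnote 32 points to: which connections
    # exist, and how much flow each node sends out overall.
    outflow = {}
    for link in system:
        outflow[link.source] = outflow.get(link.source, 0.0) + link.intensity
    print(outflow)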


Without invoking a holistic approach, it is possible to say that, within a system view, the behaviour of the whole system exceeds that of its single constituencies, while every unit still keeps its identity (DeBresson, 1996). Indeed, given that a system is based on connections between agents (nodes), two things clearly emerge. First, each node is connected to at least one other node, and thus its behaviour depends at the very least on that of another node. Second, given the interdependencies existing within a system, when a node is taken out of the system, all the other remaining nodes will also change their set of characteristics. Thus removing one node has implications ranging from the partial redefinition of the system's relationships to the disappearance of the system itself if the node has a very 'central' position. The only impossible outcome is the system keeping its pace unaltered. In this sense, a system cannot be interpreted only by looking at its single units, but also requires a 'system view', which refers to the particular behaviour that can be attributed only to the system itself as a unit of analysis. The definition of a system must be properly spelled out, because otherwise there is the risk of using ad hoc definitions with undesired effects on the analysis. For instance, consider the problem of how to determine the balance between import and export of entropy in order to maintain the dynamics of the system: it is difficult to come up with a satisfactory and meaningfully operational definition.33 A social system is historically rooted, and it is also the result of many accidental and unintentional actions coupled in various and complex ways with intentional ones. Therefore, a role for an institutional interface emerges from within the system as the only way, so to say, to drive the process of change in certain directions rather than others. This emergence is however triggered by completely different and differently shaped social forces. They act with different directions, intensities and timing, with the sole common purpose of keeping a 'suitable' degree of stability, which pushes forward the forces of change (of 'progress') in a certain direction. The direction is obviously socially determined and depends upon the distribution of the balance of power among the various agents within the system.

33. See Hughes (1989) and Carlsson and Stankiewicz (1991).


It is only by referring to the set of interrelationships among agents and institutions, and to the properties emergent from their complex interrelations, in a push-and-pull mechanism of continually creating and resolving bottlenecks,34 that the emergence of a complex 'product' (or rather a set of complex products and services) such as an operating system can be described; this is related to a dynamic setting where trial-and-error behaviour is adopted to solve continuously evolving sets of problems. To put it in other words, it amounts to the establishment of a new 'technological system' of the type depicted, for instance, by Carlsson and Stankiewicz (1991). A framework like this stresses the differences in the expertise of the different agents, their differences in articulating it into different sets of capabilities, and the differences between the same agents at different moments. This has implications for the flows of information among them. Thus, different information flows are established within a system according to (i) the relationships between differently endowed agents (in terms of capability and/or expertise); (ii) the changing patterns in capability/expertise accumulation. This can explain why different models of relationships between agents are observed at different moments: peer-based user-producer relationships followed by hierarchical ones, which, in turn, determine different patterns of incentives.35

34. Or more dynamic concepts such as development blocks (Dahmén, 1989) or reverse salients (Hughes, 1989).
35. Although explaining the main characteristics of its development is a task for another paper, it is possible to briefly sketch Microsoft's development too by appealing to a system perspective. Indeed, the process that, starting with MS DOS, culminated with the release of the various versions of the Windows OS can be described as a typical case of 'large system building', in the sense described by Thomas Hughes (e.g. 1983 and 1989). Microsoft is therefore a very classical example of what a profit-seeking system builder can do in the 20th-century weightless economy, very much in line with what other system builders (such as H. Ford, T. Edison, E. Sperry, G. Bell, etc.) have done in 'heavy' manufacturing. The creation, development and establishment of MS Windows has followed quite closely the pattern of evolution envisaged for large technical systems by Thomas Hughes (1989). Indeed, several of the phases characterising the evolution of a technical system (invention, development, innovation, etc.) also apply to the case of MS Windows (Hughes, 1997). However, for this paper's sake, it needs only to be added that, being a closed system, it is quite different from the Linux one, and this, as will be clear in the following, has far-reaching consequences.


5 The Linux technological system

The development process of such a complex product as a brand new operating system, with the need for all sorts of related software (from drivers for hardware interfaces to packages for different architectures, from utilities of general usage to particular libraries, from scientific to commercial applications, from desktop applications to laptop, palmtop, etc.), could only be possible within a web of highly interrelated and quick linkages between a multitude of differently specialised agents. While it is fairly clear that from an orthodox (textbook) point of view it is quite difficult to cope with the emergence of Linux, the adoption of an idiosyncratic, heterodox point of view might allow for a deeper understanding of the problems at stake. Let us turn to the various components that have determined this particular system performance. The first and obvious issue is technological innovation. This is obviously the most visible part of the project. UNIX is considered to be the most stable and reliable OS, and thus its porting to the PC world implies a more rational and efficient utilisation of the hardware resources (i.e. a full multi-tasking, multi-user, multi-processor environment). Another important feature to underline is its modularity, which has important consequences for the organisation of production, as will be clear in the following. However, 'mere' technical superiority would have ended (as in other cases of very big challengers to the dominant OS) in a niche product for a small community of developers, server administrators, etc. In fact, from a system perspective, it is the co-evolution of several elements that guarantees a certain level of performance. And indeed, it is necessary to take into account the other innovative elements of the technological system developed around the Linux OS. One aspect regards organisational innovation. This is the result of two factors: the extreme modularisation of the product (the OS in this case), and the intense utilisation of the net. The organisation of production is completely decentralised, with each module being developed by a set of hackers who are contiguous only in the Internet space. The Internet made it possible to keep all the pieces from falling apart.36

36. For obvious reasons, I will completely skip a fuller discussion of the role of the Internet in this kind of process, it being quite self-evident and already well explored.


The organisation is thus based on parallel developments by a very large number of developers co-ordinated by one central unit that acts as the catalyst and the final collector of the various modules. Once a set of improvements has been agreed upon, a new release of the kernel starts its intense beta-testing life (this procedure works more or less in the same way for every piece of open source software produced). This distributed method of work has very big (and interesting) pros. First of all, it allows for a very rapid pace of innovation. Secondly, it guarantees a very effective degree of transparency and control over the various stages of development of each piece of software. Thirdly, it allows for effective and massive reductions (to zero in many cases) of replication and redundancy at the macro level. However, without other related innovations on the institutional and the economic side, it would be hard to understand how and why rational people should get involved in projects like these. A very important aspect regards the evolution of the institutional side, since this is a very interesting case of institutional innovation. In fact, this step led to the creation of a new series of licenses, the most important of which is the GNU General Public License (GPL), developed by the Free Software Foundation (FSF). The best way to explain what this license is about is to refer directly to the license site (http://www.gnu.org):37

37. For a more articulated explanation of the various types of GPL, see Perens (1999).


Free software is a matter of liberty, not price. To understand the concept, you should think of "free speech", not "free beer." "Free software" refers to the users' freedom to run, copy, distribute, study, change and improve the software. More precisely, it refers to four kinds of freedom, for the users of the software:
• The freedom to run the program, for any purpose (freedom 0).
• The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
• The freedom to redistribute copies so you can help your neighbour (freedom 2).
• The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.
A program is free software if users have all of these freedoms. Thus, you should be free to redistribute copies, either with or without modifications, either gratis or charging a fee for distribution, to anyone anywhere. Being free to do these things means (among other things) that you do not have to ask or pay for permission. You should also have the freedom to make modifications and use them privately in your own work or play, without even mentioning that they exist. If you do publish your changes, you should not be required to notify anyone in particular, or in any particular way. In order for the freedom to make changes, and to publish improved versions, to be meaningful, you must have access to the source code of the program. Therefore, accessibility of source code is a necessary condition for free software. You may have paid money to get copies of GNU software, or you may have obtained copies at no charge. But regardless of how you got your copies, you always have the freedom to copy and change the software. (Free Software Foundation, 1996)

This is another crucial step in the building of the technological system. Indeed, this license, being focused on the importance of redistribution of the source code of a program, is structured so as to prevent anyone from eventually appropriating something freely available and making it disappear from the general public. In so doing, whoever wants to benefit from the advantages of free replicability and modifiability must conform, in turn, to this conduct. In a way this is the overturning of the usual principles upon which copyright is based.38 It is fairly intuitive that the production organisation depicted above could only work within these rules: only within a GPL framework could hundreds of developers share their creations without the risk of suffering spoliation. It is evident that this institutional setting aims at stressing the non-rival nature of the goods provided. In this way, the network of contributors is linked by a copyright license that, on the one hand, gives certainty to the network of contributors, and, on the other hand, prevents the project from taking a forking path.39

38. Not surprisingly, this has been termed copyleft.
39. A project undergoes a fork when a certain number of developers decide to quit the project to start a new one. Two main reasons may cause a project to split: one is related to diverging views about the strategic direction of the project's development, the other is related to commercial purposes, if part of the developers realise that a certain direction might lead towards profitable opportunities. It goes without saying that when commercial appropriation is possible (as is the case, for example, of BSD, which is distributed under a license different from the GPL), forking is more likely and more easily found than in the case of a very serious and unrecoverable disagreement about the fundamental direction of the development of a GPL project (however, if the disagreement is serious enough, forking is possible, as the case of the Samba project shows).


Having delineated the different technological, organisational and institutional aspects, we finally come to the question of how it is possible to set up, within this sort of technological system, an adequate explanation of the mechanism of incentives. The determination of the various types of incentives is the next important step. It works on several related levels. One is related to the pricing mechanism for the final product to customers (firms and families). The interesting thing is that there is nothing new in this regard. This mechanism has been quite easily adapted from other experiences and works through the identification of collateral services that can make up for revenues in this zero-selling-price world.40 The collateral services are mainly the supply of online help, manuals, training, certification, and the building up and maintenance of structured LANs, intranets, etc.41 The problem of incentives for the single unit of the system is obviously a major one. For why should a developer contribute to a piece of software for which he/she will not be paid, and which, moreover, somebody else could eventually appropriate? Different answers have been proposed, but they all fall short of a complete explanation of the whole phenomenon, linked to the particular type of good produced. Indeed, OSS has a mixed nature, so to say: half based on immaterial incentives, such as inner motivation, as pointed out by Raymond (1998), and reputation, as in Ghosh (1998), and half based on material (market) incentives, such as career concerns and signalling practices, as in Lerner and Tirole (2000). Indeed, the former explanation has to do, in some respects, with peer rewarding,42 and in others with participation in a very ambitious project (such as the creation of a new standard).

40. See for instance the very lively description of Red Hat, the company producing the most diffused Linux distribution, by one of its founders, Robert Young (Young, 1999). See also, for a more general treatment of the provision of public goods, Jonason (1999).
41. It is not the case here, but in other cases (typically web-based services) there is also advertising: the more a certain web site is visited, the more the visitors are exposed to advertising. Therefore, for instance, either a heavily visited site attracts many advertisers, or a very diffused piece of software can be distributed for free in a 'bannerised' form (i.e. with built-in software, a 'daemon', that continually downloads adverts when the main program within which it resides is used).
42. Very much like scientists working in academia, who earn less than their colleagues working in private R&D labs. Also in this case, one very powerful propeller of research is recognition by colleagues, the most important form of which is obviously the Nobel prize.


The latter relates to a more traditional set of incentives, and in particular it may be related to the capacity of an individual to show his/her ability and thus to find a 'proper' job, either with a private company or by setting up a company to commercialise (for instance through a Linux distribution) the results of his/her previous work.43 These theories, however, fail to give a comprehensive explanation of the incentives to contribute to OSS. Raymond's emphasis on motivation falls short of explaining very demanding contributions that carry high opportunity costs for the single developer, while Lerner and Tirole's signalling theory appears insufficient in some respects. For instance, the signalling mechanism does not work in the early stages of a project, when the level of uncertainty is very high. Moreover, in the (likely) event of a failure (not all OSS projects deliver good and quick results: the HURD operating system developed by the Free Software Foundation is a case in point), signalling incentives should work to keep developers away from other OSS projects. Furthermore, the structure of contributions is very skewed: a few developers make the majority of contributions, while a very large percentage of developers contribute only once or a little more.44 This calls for two considerations. First, there is very strong path dependence in the ability to contribute more than the average: it is very difficult to enter a certain (quite advanced) stage of a software project with the ability to make important contributions, if only because the process of learning all that has been done up to that point risks being quite long. Obviously, the longer the period necessary for catching up, the lower the discounted value of the pay-off from signalling. Thus it seems likely that, since a developer is not paid during the catch-up period, the choice will be trapped between very visible projects with very long catch-up periods and relatively new projects with very little visibility. Second, the large number of small contributions seems to point to the existence of a large pool of developers who are perhaps unable, but more likely uninterested, in the signalling mechanism. In both cases, the period necessary to become visible (coupled with the high level of uncertainty) could make it very hard for a developer to embark on an OSS project at all.
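The point about catch-up periods and discounted signalling pay-offs can be made explicit with a simple back-of-the-envelope formulation (the notation is purely illustrative and is not drawn from Lerner and Tirole): let $c$ be the per-period opportunity cost of contributing, $T$ the catch-up period needed to become a visible contributor, $p$ the probability that the project succeeds and yields a signalling benefit $B$, and $r$ the discount rate. The expected net value of joining is then roughly

$$ V(T) \;=\; -\,\frac{c}{r}\left(1 - e^{-rT}\right) \;+\; p\,B\,e^{-rT}, $$

which is decreasing in $T$: long catch-up periods on highly visible projects, and low success probabilities on new and scarcely visible ones, both depress $V(T)$, consistently with the trade-off described above.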

43 These are quite frequent solutions: Linus Torvalds himself has been hired by a private US company. 44 Just as an example, in Linus Torvalds' mail announcing the release of version 2.4.0 of the Linux kernel, posted on 5 January 2001 to the kernel.org mailing list, he acknowledges the latest contributors since the prerelease: 18 of them made only one contribution, 4 made two, 2 (plus Torvalds himself) made five, 1 made eight, and 2 made 10 or more contributions.


Hence we are left, at best, with a half-baked theory which appeals, in a rather ad hoc way, alternately to 'irrational' explanations (motivation, ego, etc.) and to 'rational' ones (career incentives, signalling, etc.), depending on the stage, the characteristics and the importance of the OSS project. By adopting a system approach it becomes possible to give this issue a more coherent explanation. Consider a rational consumer who has to take a decision about the acquisition of a certain piece of software. The consumer knows with certainty whether that software is available from a commercial vendor, from an OSS repository, or from both, and, being rational and well informed, he/she knows the characteristics of both types of software. The problem the consumer faces is then the following: which type of software should rationally be chosen? If the decision is to buy the software from a commercial vendor, a certain level of (textbook) utility is associated with a certain expenditure. If the choice falls on open source software, the decision concerns the amount of resources (in terms of direct and opportunity costs) to allocate to participating in the development of the software. Since the software will be supplied for free, the problem to be solved is that of finding the point of indifference between the amount of money necessary to buy the software from a commercial company and the cost of participating in that particular OSS project. When taking part in a project for the development of a certain OSS, the consumer becomes part of the network trying to develop it, and the resources he/she will be asked for are mainly time, spent either writing pieces of code, debugging them, or participating in the bug hunt. The type of involvement obviously depends on skills (ability as a developer), previous experience, and the 'willingness to pay' for the piece of software he/she is interested in. Ultimately, even in this case what matters is the ratio between the opportunity costs and the benefits related to the software. Benefits may be both direct and indirect: direct benefits come from the availability of the new software, while indirect benefits come from participation in the network that built it (signalling mechanisms included).
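A stylised way of writing this indifference condition is the following (the symbols are purely illustrative and are not taken from any of the cited sources): let $P$ be the price of the commercial package, $k$ the direct and opportunity cost of participating in the OSS project, $b_i$ the indirect benefits from participation (network membership, signalling included), and $U$ the utility of having the software, common to both options. Then

$$ \underbrace{U - P}_{\text{buy}} \;=\; \underbrace{U + b_i - k}_{\text{contribute}} \quad\Longleftrightarrow\quad P \;=\; k - b_i, $$

so that contributing is the rational choice whenever the net participation cost $k - b_i$ does not exceed the commercial price $P$.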

Some further considerations are needed to discuss the different outcomes when a consumer faces the choice between a commercial package and an OS one. If the commercial package already exists, there are three possibilities: (i) an OSS alternative exists as well and is stable; (ii) an OSS alternative exists but is not yet stable; (iii) no OSS alternative exists. In the first case, the rational consumer will simply choose the software that best suits his/her needs, and reporting the few remaining minor bugs is the only feedback needed or possible. If the two packages have comparable features, the choice is determined only by previous lock-in situations, not by pecuniary reasons. In the second case, the rational consumer has to weigh the price of the commercial software against the opportunity costs of the OS one. These opportunity costs are related to two types of incompleteness: stability, and the quantity/quality of the programs' features. Unlike the previous case, the problem is dynamic: consumers have to discount the opportunity cost over the expected period necessary for the OSS to become stable, and then compare it with the price of the commercial package (a stylised version of this comparison is sketched below, after this paragraph). The choice then depends on two sets of issues: the starting point and the speed of the catching-up process. The former (i.e. the gap between the quantitative and qualitative features of the two programs) turns out to be the crucial one, since the latter is, so to say, set by default: network externalities, in this case, work at their maximum speed in favour of OSS diffusion and market share. In the third case, the OSS does not exist and the consumer will buy the commercial package. However, since there is a positive probability that some developers become interested in it, a process to build the software may eventually start. Here the rational consumer has to evaluate, dynamically, a two-step process: will such a process start, and how long will it take for the second stage to be reached? It is important to underline the crucial role of networks in all this. The development of OSS would be impossible to understand if the very idea of a system were left out of the picture. The main reason is that participating in the development of OSS would make no sense for a rational agent if participation itself did not lead to the formation of a network (or to the enlargement of a pre-existing one). The main characteristic of an OSS project is therefore that it implies, so to say by default, the existence of an underlying network of relationships, which is what constitutes the real strength of, and justification for, goods that will ultimately be given away for free. It can be said, more strongly, that a network of relationships is a necessary condition for any OSS project to exist at all, and that the network must exhibit several characteristics at different levels, i.e. it must be multi-dimensional.
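Returning to case (ii), a minimal sketch of the dynamic comparison, again with purely illustrative notation, is the following: let $\tau$ be the expected time needed for the OSS alternative to become stable, $\delta(t)$ the per-period opportunity cost of its residual instability and missing features, $r$ the discount rate, and $P$, $k$, $b_i$ as above. The OS solution is then preferred whenever

$$ \int_0^{\tau} \delta(t)\, e^{-rt}\, dt \;+\; \left(k - b_i\right) \;\le\; P, $$

so that a small initial gap (low $\delta$) and a fast catching-up process (small $\tau$) tilt the choice towards the OS solution, as argued in the text.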

In this way, technological innovation evolves along with organisational and institutional innovation; or, better, only their co-evolution properly supports the dynamics of the TS. Having said that, some peculiarities of the network geometry along which an OSS project evolves are the following. First of all, and most importantly, the network allows externalities to be produced that generate increasing returns in the 'production function' of the whole system; the unit of production is not the single contributor, but the entire network. It is thus rational for a single unit to participate in this type of network precisely because his/her effort cumulates with that of the other participants more than proportionally, thanks to the establishment of network externalities. For this reason, the resources a single unit must commit for the system to produce fully working software may be (and often are) less than the monetary resources needed to buy the same software from a commercial vendor; it is thus possible to gain from participation in such a network (a stylised illustration is given below). Second, the massive amount of resources that can be mobilised within an OSS project has different dimensions. One is that it guarantees a very powerful and effective mechanism for the dynamic process of evolution: a huge pool of contributors acts simultaneously, on the one hand, as a tremendous generator of novelty and variety within the system and, on the other hand, as a powerful selection mechanism. Another concerns the effectiveness of the result: as the network of contributors enlarges, the risk of abandonment and subsequent death of the project converges to zero. Furthermore, since the real strength of a piece of software lies not in its inherent potential at the time of release, but in the dynamic process of upgrading and adding new features, this automatically becomes a major advantage of open source software over commercial software. The focus on the underlying process, rather than on the product itself, has another important implication, maybe the most important one. As a certain OSS develops, so does its TS (or, better, a certain piece of software develops to the extent that its TS develops). This boils down to two important features: multidimensionality and intrinsically dynamic behaviour. These features have strong implications in terms of market relationships and of the commercial appropriability of an OSS.
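The first point above, increasing returns at the level of the whole network, can be given a deliberately simple illustration (ours, not the paper's own formalisation): if $n$ contributors each supply an effort $e$, and network externalities make the system's output grow more than proportionally in total effort, for instance

$$ Q(n) \;=\; A\,(n\,e)^{\beta}, \qquad \beta > 1, $$

then the output $Q(n)$ is available to every participant (the software is non-rival and distributed for free), while the individual cost remains $e$: beyond some network size, what each contributor obtains exceeds what the same time or money would buy from a commercial vendor.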

Suppose that, at a certain moment in time, a commercial company could become the legal owner of a certain OSS (one which has reached a sufficiently stable level of development), on the assumption that slightly changing some particular features is enough to bring the software outside the GPL domain. In this case it would be possible for a commercial firm to exploit the OS movement by obtaining the property rights to a piece of software that could thenceforth be sold on the market at a positive price. However, even if it is possible, so to say, to 'steal' a piece of software, it is by no means possible to appropriate the TS developed around it. In such an event, none of the developers working under the GPL would be willing to work under different terms; that is, no one would be willing to work and then hand over the results of his/her work for a commercial firm to profit from, unless the firm hired that particular developer. It would clearly be impossible for the commercial firm to hire every single developer, and even if it were, the system would be closed at a certain boundary, thus losing the most appealing characteristic of an OSS TS: its openness. The single piece of software therefore has no value if it is detached from the (dynamic) open TS that produced it. It would be a static piece of code, a product with no chance of further development by the OS community, and hence destined to languish and die. The reasons why it is impossible for a commercial firm to exploit the OS community in this way are both economic and organisational. First of all, it would be impossible (or unprofitable) for a commercial firm to hire all the developers, nor could it simply pick the best ones (e.g. the co-ordinator): for obvious reasons, this would not result in the other developers keeping their collaboration open,45 since collaboration is possible only within the GPL terms, which are incompatible with the appropriability that commercial firms badly need.46 Even supposing all this were possible, organisational reasons would further forbid this result. The OS process is a bottom-up process that works insofar as it guarantees that enough variety is produced for the TS to keep up its pace of innovation. Again, this condition is not compatible with the internal organisation of a firm, which determines, if not every single step, at the very least the sequence of the various development phases. Even the most 'bazaarish' style adopted within a firm's organisation is very unlikely to replicate the degree of freedom needed for a true bazaar.

45 Even more, that co-ordinator would suffer a stigma from the community. 46 Some attempts at developing less rigid license terms are under way, so as to guarantee that proprietary software can eventually coexist with OS software. However, so far, GPL software remains the most diffused, most varied and most dynamic.


Finally, the user-producer (user-developer) relationships would also be likely to weaken dramatically, for several reasons. First of all, users cannot be involved deeply in the project because of the proprietary nature of the software (which means no access to the source code). Second, a user who pays for the software is more likely to feel the asymmetry between the position of those who pay and those who profit from the discovery of a bug, or from the release of a stable version. Consumers will thus be more likely to demand a fully working product, rather than a work-in-progress beta version, and will consequently be reluctant to report bugs and so on.

6 Concluding remarks

In this paper a system view of the development of a brand new operating system has been proposed. The exceptional performance of Linux in both the high-end and the low-end segments of the market has so far generated few explanatory attempts. This paper's claim is that a system view based on an evolutionary perspective can furnish a coherent and exhaustive explanation of Linux's development. The Linux TS is characterised by a decentralised, bottom-up work organisation which, operating on an unprecedented scale, has realised unparalleled growth rates (in terms of both quantity and quality) thanks to a very innovative institutional framework. The institutional framework in fact acts as a powerful 'focusing device', serving to keep the TS environment open. This sharply contrasts with the usual idea (Hughes, 1989) that the system builder constantly pushes the system boundaries forward: in that view, profits can be increased by enlarging the part of the system under control, thereby reducing uncertainty as the environment outside that control shrinks. Thus, while the system builder constantly strives to internalise functions vertically within the system, for a TS based on an OSS organisation this is meaningless, since the system is built around technical rather than economic features. The establishment of systemic characteristics has made it possible for OSS to move beyond the hobby phase and become a fully fledged commercial phenomenon. This has, quite obviously, caused the Linux community to undergo profound changes.

From a very compact and motivated pool of talented user-developers, the community has turned into a huge commercial success that is quickly spreading to low-end users, with very low motivation and high demands in terms of user-friendliness and documentation. Nevertheless, the basis is laid for Linux to remain a completely free and different product, whose strengths are radically new and (so far) impossible for commercial firms to cope with by means of the usual competitive toolkit. It must be stressed once more that this type of process is possible, first, because of the system perspective, but also because other elements add to the increasing returns in production deriving from network externalities. It should never be forgotten that the real strength of a system approach lies mainly in two features: co-evolution and openness. To the very important point of network externalities, therefore, other elements are added. One, as already noted, is the use of the Internet, which, by allowing almost instantaneous communication, makes it possible for loosely geographically connected systems to work. Another important 'system component' is the GPL license: in particular, it is important to underline its 'viral' nature, that is, its capability of 'infecting' every further piece of code based on GPL-licensed code.

7 References

Bezroukov N. (1999), A second look at the cathedral and the bazaar, First Monday (http://www.firstmonday.dk), vol. 4, n. 12.
Carlsson B. and Stankiewicz R. (1991), On the nature, functions and composition of technological systems, Journal of Evolutionary Economics, vol. 1, pp. 93-118.
Carlsson B. (ed.) (1995), Technological Systems and Economic Performance: The Case of Factory Automation, Kluwer, Dordrecht.
Dahmén E. (1989), 'Development blocks' in industrial economics, in Carlsson B. (ed.), Industrial Economics, Kluwer, Dordrecht.
DeBresson C. (ed.) (1996), Economic Interdependence and Innovative Activity, Edward Elgar, Aldershot.
De Liso N. and Metcalfe S. (1996), On technological systems and technological paradigms, in Helmstädter E. and Perlman M. (eds.), Behavioral Norms, Technological Progress, and Economic Dynamics, University of Michigan Press, Ann Arbor.
DiBona C., Ockman S. and Stone M. (eds.) (1999a), Open Sources: Voices from the Open Source Revolution, O'Reilly, Sebastopol, CA.
DiBona C., Ockman S. and Stone M. (1999b), Introduction, in DiBona C., Ockman S. and Stone M. (eds.), Open Sources: Voices from the Open Source Revolution, O'Reilly, Sebastopol, CA.
Edquist C. (ed.) (1997), Systems of Innovation. Technologies, Institutions and Organization, Pinter, London.
Eunice J. (1998), Beyond the Cathedral, Beyond the Bazaar, http://www.illuminata.com/public/content/cathedral/intro.htm

Fisher F., McGowan J. and Greenwood J. (1983), Folded, Spindled and Mutilated: Economic Analysis and US v. IBM, MIT Press, Cambridge, Mass.
Free Software Foundation (1996), What is Free Software?, http://www.gnu.org/philosophy/free-sw.html.
Ghosh R. (1998), Cooking pot markets: an economic model for the trade in free goods and services on the Internet, First Monday (http://www.firstmonday.dk), vol. 3, n. 3.
Hughes T. (1983), Networks of Power: Electrification in Western Society, 1880-1930, Johns Hopkins University Press, Baltimore.
Hughes T. (1989), The evolution of large technological systems, in Bijker W. et al. (eds.), The Social Construction of Technological Systems, MIT Press, Harvard.
Hughes T. (1997), William Gates, the new hedgehog, The Seattle Times (http://seattletimes.nwsource.com/extra/browse/html97/bill 111497.html).
Jonason A. (1999), Trading in and Pricing of Not Easily Defined Products and Services, WP 99/6, KTH/IEO, Stockholm.
Kamien M. and Schwartz A. (1982), Market Structure and Innovation, Cambridge University Press, Cambridge.
Kuwabara K. (2000), Linux: a bazaar at the edge of chaos, First Monday (http://www.firstmonday.dk), vol. 5, n. 3.
Leoncini R. (2000), System View of the Process of Technological Change, Memorie di Ricerca, n. 1/00, IDSE-CNR, Milano.
Lerner J. and Tirole J. (2000), The Simple Economics of Open Source, WP n. 7600, NBER, Cambridge.
Lundgren A. (1995), Technological Innovation and Network Evolution, Routledge, London.
Lundvall B. (ed.) (1992), National Systems of Innovation. Towards a Theory of Innovation and Interactive Learning, Pinter, London.
McKelvey M. (1997), Using evolutionary theory to define systems of innovation, in Edquist C. (ed.), Systems of Innovation. Technologies, Institutions and Organization, Pinter, London.
Moon J. and Sproull L. (2000), Essence of distributed work: the case of Linux, First Monday (http://www.firstmonday.dk), vol. 5, n. 11.
Nelson R. (ed.) (1993), National Innovation Systems. A Comparative Analysis, Oxford University Press, New York.
Perens B. (1999), The Open Source definition, in DiBona C., Ockman S. and Stone M. (eds.), Open Sources: Voices from the Open Source Revolution, O'Reilly, Sebastopol, CA.
Raymond E. S. (1998), The Cathedral and the Bazaar, http://www.linux.it/GNU/cathedral-bazaar/index.html.
Raymond E. S. (1999), The Magic Cauldron, http://www.tuxedo.org/~/esr/writings/magic-cauldron/
Saviotti P. (1997), Innovation systems and evolutionary theories, in Edquist C. (ed.), Systems of Innovation. Technologies, Institutions and Organization, Pinter, London.
Schumpeter J. (1943), The Theory of Economic Development, Harvard University Press, Cambridge.

Stallman R. (2000), Linux and the GNU Project, http://www.gnu.org/gnu/linux-andgnu.pt.html.
Torvalds L. (1998), FM interview with LT: What motivates free software developers?, First Monday (http://www.firstmonday.dk), vol. 3, n. 3.
Torvalds L. (1999), The Linux edge, in DiBona C., Ockman S. and Stone M. (eds.), Open Sources: Voices from the Open Source Revolution, O'Reilly, Sebastopol, CA.
Valloppillil V. (1998), Open Source Software. A (New?) Development Methodology, v1.00, http://www.opensource.org/halloween/halloween1.html.
Valloppillil V. and Cohen J. (1998), Linux OS Competitive Analysis. The Next Java VM?, v1.00, http://www.opensource.org/halloween/halloween2.html.
Windrum P. (2000), Back from the Brink: Microsoft and the Strategic Use of Standards in the Browser Wars, MERIT Research Memoranda, n. 005.
Young R. (1999), Giving it away: how Red Hat software stumbled across a new economic model and helped improve an industry, in DiBona C., Ockman S. and Stone M. (eds.), Open Sources: Voices from the Open Source Revolution, O'Reilly, Sebastopol, CA.
