E-business Research Center Working Paper
August 2002

Weighing the intangible: towards a framework for Information Society indices

Dan M. Grigorovici, Institute for Information Policy, Pennsylvania State University, University Park, PA, USA ([email protected])
Jorge Reina Schement, Institute for Information Policy, Pennsylvania State University, University Park, PA, USA ([email protected])
Richard D. Taylor, Institute for Information Policy, Pennsylvania State University, University Park, PA, USA ([email protected])

eBusiness Research Center, 401 Business Administration Building, University Park, PA 16802
Phone: 814.861.7575 / Fax: 814.863.0413 / Web: www.ebrc.psu.edu
A joint venture of Penn State's Smeal College of Business Administration and the School of Information Sciences and Technology

Abstract

The objective of this paper is to contribute to a conceptual model of information indicators. As suggested by the recent surge in the development of quantitative measures of "e-readiness" or "e-metrics" in both academic and industry research, macro-level indicators of Information and Communication Technologies (ICT) are an essential tool for quantifying the "Digital Divide", and thus have a profound role in the development of effective policies to overcome it. A variety of statistics are currently used to measure Internet access or the level of deployment of ICT at various levels, but the underlying theory is weak at best, the data are often not comparable, the choice of indicators is much too subjective, and the variables chosen, the statistics, their methodologies, and the logical process of arriving at a choice of one index rather than another share no common conceptual ground, thus lacking concurrent validity. This becomes obvious when comparing same-level rankings (country, industry, etc.) stemming from different indices: no unit occupies the same rank across different indices. We suggest reasons why this is the case, and ways to overcome the challenge of developing valid e-metrics.

The paper is organized into five sections. First, a survey of the history of Information Society Indicators (ISI) draws on an analysis of the relationship between ICT and development. We argue that the area lacks coherence in that every major index is built upon different values, different definitions of the key terms ("information", "knowledge", etc.), and boundary conditions restricting concurrent validity. It is our contention that this is an effect of the lack of conceptual clarity that stems mainly from the difficulties of "weighing the intangible" with industrial-era measurement instruments. The first section thus ends with the conclusion that most existing operationalizations lack a comprehensive deductive theory to guide them. The second section analyzes what the different measurement models reviewed bundle into their operationalizations of ISI, thereby answering the question of where, how, and in what forms information indicators are currently being applied. We argue that a major reason for the lack of a common metatheoretical ground for inter-sector and inter-country comparisons of the impact of information on development, Information Society readiness, and the like is the lack of agreement on the use of different terms and the values behind them. Put another way, we may be measuring different things with completely different, and too often "unweighted", measures. The third section discusses challenges to building an effective model of information metrics by way of a statistical analysis of one widely diffused e-metric (UNDP's "Technology Achievement Index"). Based on results from several regression analyses of the chosen index, we dissociate the variables that could be useful for developing a unified measure of the Information Society and report the results. Our "weighing" of key indicators and the variables beneath them helps draw a conceptual blueprint for future research in the area, and addresses the question of whether a single composite index is needed at all. The fourth section inductively argues for the usefulness of building a new model of ISI that addresses the difficulties analyzed, by allowing a more flexible, multi-criterial instrument suited to the necessity of measuring intangibles. We suggest Structural Equation Modeling as a methodology useful for testing current models and for sorting variables that cluster together into a new model, and we discuss its potential for application to social development measurement. In the last section we suggest steps for future research in the area and discuss the utility of building a new model of ISI that takes into account issues of broad-based development and their subsequent impact on quality of life.

1. Relationships between e-metrics and development: implications for measurement

A review of the large literature on the Internet, Information and Communication Technologies (ICT), and development shows a fierce debate and contradictory findings on the relationship between ICT and development: even though some studies show that the level of ICT provision in a country is highly correlated with income per capita ([40], p. 13), there is still little understanding of the direction and magnitude of the underlying causal links. Broadly, three lessons are suggested by the literature. First, there is a link between ICT and growth; second, econometricians will never have a clear answer as to how strong it is (due to the lack of common standards and methodologies); third, the link is probably context- and level-dependent:

There is a complex relationship between telecommunications development and economic growth, one dependent on a range of other conditions such as policy variables, the state of supporting infrastructure and the role of services in the economy. ([40], p. 18)

A variety of statistics are currently used to measure Internet access, but the main problem is that the data are often not comparable, the choice of indicators is much too subjective, and the statistics, their methodologies, and the logical process of arriving at a choice of one index rather than another are rarely made available. Information indicators have usually been classified into three classes: "Physical Indicators" (measuring the infrastructure needed for accessing the Internet, with indicators quantifying host computers, telephone lines, personal computers, etc.), "People Indicators" (studying consumers' level of use of the Internet and of ICT), and "Policy Indicators" (containing measures of variables thought to have a direct impact on distributive justice issues, such as the ISP market, pricing, or usage).

The SINE (Statistical Indicators for the New Economy) program of the European Union [98] considered that Indicators for the New Economy¹ cover four distinctive target areas: the Technology Domain, the Industry Domain, the Economy Domain, and the Social Domain.

The Technology Domain comprises the major technological changes that constitute the foundations of the New Economy and future economic growth. Briefly, these cover indicators of enablers and accelerators of the information revolution, such as improvements in processing power, storage, and communications through bandwidth enhancements; convergence; and so on. It should be noted that measuring the digital economy implies measuring the penetration and use of digital technologies in the public and private sectors, and across regions and countries. It also implies measuring the speed and direction of technological progress. As an illustration, the Technology Domain could contain groups of indicators that concern the new technology and its users, such as: Information and Communication Technology (ICT) infrastructure; Internet infrastructure; digitization; virtualization; multimedia; Internet users; Internet penetration.

There are close connections between the Technology Domain and the Industry Domain. The single digital space is becoming the basis of all sectors. At the same time, organizational transformation proceeds at a rapid pace. Public and private organizations evolve towards smart organizations, using knowledge, managing intangible assets and behaving dynamically. Small enterprises can achieve economies of scale in the digitally networked economy. Mass production gives way to mass customization. There are the "gazelle" firms, in which innovation is a process of continuous renewal. The development of electronic commerce has both positive and negative impacts on transport needs and on major environmental and spatial planning issues. When it comes to industry, several important issues arise. For instance, at the micro-, sectoral, and macro-levels, the impact of ICT on organization, production performance, restructuring and competitiveness is profound, giving rise to a need for indicators on these aspects.

¹ "The New Economy is a knowledge and idea-based economy where the keys to job creation and higher standards of living are innovative ideas and technology embedded in services and manufactured products" (Atkinson & Court, 1998, p. 8). Several generally agreed upon features of the New (or Digital) Economy are that it is dynamic, global, networked, based on digital technology, and built on a service-based workforce.

Examples of groups of indicators which could belong to the Industry Domain are the following: ICT production and trade indicators; Knowledge Capital Indicators; Industry Performance Indicators; Inter-enterprise Alliance Indicators; New Business Organizational Types Indicators.

Within the Economy Domain, it is essential to capture the nature of the transition from the industrial to the New Economy. Some factors considered to reflect this shift are, for example, industrial and occupational change, growth in the service sector, and factors that reflect the pace of globalization. Furthermore, the New Economy is a knowledge economy. Thus, indicators of the nature, strengths, weaknesses, and trends in the skills base would be necessary, together with measures of the level, production, distribution, and utilization of intellectual capital. Such new indicators should provide a firmer basis for identifying areas for policy actions to promote employment and competitiveness. Indicators designed to measure employment impacts should take into consideration primary direct impacts, primary indirect impacts and secondary impacts, substitution effects on traditional jobs, social inclusion/exclusion effects in the labor market, etc. Special features of less developed regions need to be taken into account. In particular, understanding the nature and the driving forces behind competitiveness is crucial. Given the discussion above, some groups of indicators identified as belonging to the Economy Domain are the following: Production Indicators; Economic Performance Indicators; Foreign Trade Indicators; Foreign Investment Indicators; Internet Economy Indicators; Business Indicators; Deregulation Indicators; Information Production and Diffusion Indicators; Price and Wage Indicators.

All the effects mentioned in the earlier domains percolate, in varying degrees, to the social sphere. Thus, while the first three domains might be regarded as concerned with "inputs and throughputs", the Social Domain reflects the ultimate "outputs" of ICT, i.e. outcomes in terms of the gains and losses for people. Smart cards, smart household appliances, intelligent traffic control systems, satellite-distributed classrooms, remote and online publication of electronic multimedia materials, the electronic marketplace, 3-D virtual images, and new ways of experiencing the world all affect our daily lives. The New Economy will continue to make a difference to the way we live, work, learn, play, interact, and even think. But we lack the indicators and the data to tell us exactly the nature and magnitude of those differences. It is obvious that a vast range of social impacts needs to be measured and monitored. The list of social impacts includes living standards; lifestyles; social usage of the Internet; the environment; and social inclusion in, or exclusion from, the information society. For instance, UNDP's conceptual linkage between a country's technology readiness and its human development level is graphically represented in the figure below, taken from [100].

[Figure: UNDP's conceptual links between technology and human development. Source: [100], p. 41]

Thus, as already implied, the required indicators should unequivocally capture the consequences of the New Economy for living conditions and lifestyles, more specifically for wealth creation, income distribution, earnings inequalities, education and training, social protection and social cohesion (including the risks of the emergence of an information underclass), demographic dynamics, individual empowerment, new communities, changing cultural norms, and so on. Taking the previous discussion into account, some groups of indicators likely to fall in the Social Domain are the following: economic and social demography indicators; lifelong learning/training indicators; living standards and lifestyle indicators; cultural indicators; social inequality indicators; technology penetration indicators; Internet penetration indicators; and time use.

2. A review of existing Information Society measures

Over the last three years, a number of "e-readiness" assessment tools have been developed. On the surface, each tool gauges how ready a society or economy is to benefit from information technology and electronic commerce. On closer examination, the range of tools uses widely varying definitions of e-readiness and different methods of measurement. For example, Harvard University's model [27] looks at how information and communications technologies (ICT) are currently used in a society, while APEC's method [6, 7] focuses on government policies for e-commerce. Bridges.org's research on e-readiness indicators [18] identified two major categories in use, with different underlying goals: those that seek to measure "e-economy" metrics, and those that look at "e-society" indicators. E-economy assessment tools look at the ability of ICT to impact the economy, while e-society assessment tools look at the potential impact of ICT on the wider society.

The number of ready-to-use tools for assessing a nation's e-readiness is limited, as relatively few organizations have presented their assessment methods for use by others. However, there is a wide range of reports and other resources that can be re-worked into "assessment tools", including position papers and survey results. A first step in any approach to the digital divide problem is to consider a country's ability, or "readiness", to integrate information technology (IT) and e-commerce, in order to provide a baseline that can be used for regional comparisons and planning. It is important to understand what it means for a country or economy to be "e-ready" and to conduct an evaluation based on objective criteria to establish basic benchmarks. This means considering whether the necessary infrastructure is in place, but also looking beyond that to whether IT is accessible to the population at large and whether there is an appropriate legal and regulatory framework to support its use. If we are to narrow the digital divide, all of these issues need to be addressed simultaneously in a coherent, achievable strategy tailored to meet the needs of a particular country.

An e-readiness assessment can serve as an information-gathering mechanism for governments as they plan their national and international strategies for IT integration. It can help governments focus their efforts from within, and identify areas where external support or aid is required. Assessments have already been conducted in dozens of countries, and they are very diverse in their goals, strategies and results. The right tool depends on the user's goal: users should choose a tool that measures what they are looking for (construct validity), and does so against a standard that fits their own view of an "e-ready" society.

The need for well-justified country-level information economy and e-readiness metrics is strong. The recent proliferation of various "e-readiness" and similar indexes, and a recently announced initiative by the World Bank's Information for Development Program to fund such studies [47], underscore the strong interest of policy makers and business people alike. Researchers studying how the Internet is influencing and changing the economic, political, and social systems of various countries have been limited by the absence of measures that are more accurate, descriptive, and sophisticated than the simple number of Internet hosts in a country [63-66; 106]. Interest in national-level metrics is well founded. What a user has access to, and why, depends on the specific legal, economic, political, and social conditions that surround that user. In spite of claims that the Internet and other trends related to globalization are subverting the sovereignty of national governments and blurring national boundaries, governments still make policies that can have a dramatic effect on the diffusion and absorption of the Internet. Furthermore, users are located within a particular national system of innovation, which also strongly influences the diffusion process and the absorptive capacity of a country.

Authors who write papers that are primarily concerned with metrics always face a dilemma. If we begin by examining prior work and the theory behind the measures, we must ask readers to accept the justification without fully understanding the measures. If we put the measures first, we must ask the reader to temporarily accept that they do, in fact, have sufficiently strong justifications.

Classification systems reduce more complex phenomena to simpler representations that are easier to understand and to manipulate in the formulation and testing of hypotheses. Classification systems should be "natural", meaning that they represent real underlying properties and relationships, the way the world actually works [Ridley 1986]. They should also be practical, based on data that can be collected with reasonable accuracy, timeliness, and cost. Traditional diffusion studies typically stop at the point at which a user has chosen to adopt a single innovation, and thus have a single dependent variable [Rogers 1995]. For the Internet, this variable has often been the number of hosts or users. We argue, however, that the Internet is not a single innovation but a cluster of related technologies that must be present together to support adoption decisions by end users. The Internet cannot work unless there are servers, communication links, software, end-user devices, content to transmit, and so on. For interactive technologies such as the Internet, network externalities influence the critical mass needed for widespread adoption [Mahler and Rogers 1999]. Using a single measurement variable does not capture the richness of what is happening, and may in fact be misleading.

Mosaic, a consortium of universities, has been conducting such assessments for some years, producing case studies and self-assessments for some 25 countries. The Mosaic Group study, "Tracking the Global Diffusion of the Internet", focuses on the nation as the unit of analysis and characterizes the state of the Internet along six dimensions: pervasiveness, geographic dispersion, sectoral absorption, connectivity infrastructure, organizational infrastructure, and sophistication of use. In addition to these dimensions, the framework includes an open-ended list of determinants, i.e., factors which influence the development of the Internet, and hence the values of the dimensions, in a nation. Each dimension has five ordinal values ranging from zero (nonexistent) to four (highly developed). Table 1 shows the definition of the levels of the first dimension, pervasiveness.

Geographic dispersion was selected as the second variable. While widespread access is desirable, the payoff is in who uses the Internet in a nation. This is accounted for in the third dimension, sectoral absorption, a measure of the degree of Internet utilization in the education, commercial, health care, and public sectors. These sectors are seen as key to development, and were suggested by the measures used in the United Nations Development Program's Human Development Index [7]. Connectivity infrastructure is the fourth variable; it is a measure based on international and intranational backbone bandwidth, exchange points, and last-mile access methods. Organizational infrastructure is a measure based on the state of the ISP industry and market conditions. A highly rated nation would have many ISPs and a high degree of openness and competition in both the ISP and telecommunication industries. It would also have collaborative organizations and arrangements such as public exchanges, ISP industry associations, and emergency response teams. The final variable is sophistication of use, a measure characterizing usage from conventional to highly sophisticated and innovation-driving. A relatively conventional nation would be using the Internet as a straightforward substitute for other communication media like telephone and fax, whereas in a more advanced nation, applications may result in significant changes in existing processes and practices and may even drive the invention of new technology.

In addition to these six dimensions, Mosaic's framework considers determinants of Internet diffusion. One view of these determinants is presented in [4], which organizes them into government policies and non-governmental determinants of Internet success, as shown in Table 2.

Table 1: The five levels of the pervasiveness dimension. (Source: Mosaic website)

Level 0, Non-existent: The Internet does not exist in a viable form in this country. No computers with international IP connections are located within the country. There may be some Internet users in the country; however, they obtain a connection via an international telephone call to a foreign ISP.

Level 1, Experimental: The ratio of users per capita is on the order of magnitude of less than one in a thousand. There is limited availability, and use of the Internet is embryonic. Only one or a few networks are connected to the international IP network. The user community comprises principally networking technicians.

Level 2, Established: The ratio of Internet users per capita is on the order of magnitude of at least one in a thousand. The user community has expanded beyond networking technicians.

Level 3, Common: The ratio of Internet users per capita is on the order of magnitude of at least one in a hundred. The infrastructure of supporting and related goods and services has become well established, although it is not necessarily extensive.

Level 4, Pervasive: The Internet is pervasive. The ratio of Internet users per capita is on the order of magnitude of at least one in ten. Internet access is available as a commodity service.
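Read as an algorithm, the pervasiveness dimension is an ordinal classification keyed largely to users-per-capita thresholds. The following sketch is ours, not Mosaic's: the function name is hypothetical, and it simplifies by ignoring the qualitative conditions in each row (e.g., who makes up the user community).

```python
# Illustrative only: maps a users-per-capita ratio to a pervasiveness level
# following the thresholds in Table 1. The qualitative criteria in each row
# are deliberately omitted in this simplification.

def pervasiveness_level(users_per_capita: float, viable_internet: bool = True) -> int:
    if not viable_internet:
        return 0          # Level 0: Non-existent
    if users_per_capita >= 1 / 10:
        return 4          # Level 4: Pervasive
    if users_per_capita >= 1 / 100:
        return 3          # Level 3: Common
    if users_per_capita >= 1 / 1000:
        return 2          # Level 2: Established
    return 1              # Level 1: Experimental

print(pervasiveness_level(0.02))   # -> 3 (Common)
```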

Table 2: Factors and policies influencing Internet success within a nation (Source: Mosaic website)

Internet Success Determinants:
• Telecommunication infrastructure
• Personal computing and software
• Financial resources
• Human capital
• Sectoral demand and awareness
• Competitive environment

Government Policies:
• Markets and choice
• Investment policy
• National security
• Cultural concerns
• Social equity

The primary motivation for this work is to guide and aid policy makers, and there is increasing consensus that the Internet and related telecommunication technologies constitute strategic national infrastructure. While this is true for all nations, it is perhaps most important for less developed nations, where the marginal impact of improved telecommunication is very high, leading to improved economic productivity, education, health, and quality of life, particularly in rural areas. (Figure 1 below shows the relationship between Internet hosts per capita and the UNDP Human Development Index across nations.)

Fig. 1: Correlation between Internet hosts per 1,000 population and the UNDP HDI

Understanding how the determinants influence the dimensions in a given country can lead to prescriptive statements, and GDI studies typically include thorough analyses of both the dimensions and the determinants [cf. Wolcott and Goodman 2000]. The results are presented on a diagram with six "spokes" representing the dimensions. Values for one or more countries at one or more times can be plotted on the same diagram or compared side by side on several diagrams. Figure 2, for example, shows the status of Internet diffusion in Turkey and Pakistan in 1999, and Figure 3 shows the rapid growth of the Internet in Finland from 1994 to 1997.
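Such six-spoke diagrams are straightforward to reproduce. The sketch below shows one way to draw a GDI profile with matplotlib; the dimension names follow the Mosaic framework, but the example values are hypothetical, not figures from the Turkey, Pakistan, or Finland studies.

```python
# A minimal radar ("spoke") chart of the six Mosaic GDI dimensions.
# The profile values (0-4 ordinal scale) are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

dims = ["Pervasiveness", "Geographic dispersion", "Sectoral absorption",
        "Connectivity infrastructure", "Organizational infrastructure",
        "Sophistication of use"]
values = [3, 2, 2, 1, 3, 2]          # hypothetical country profile

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
angles += angles[:1]                 # repeat first angle to close the polygon
values = values + values[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims, fontsize=8)
ax.set_ylim(0, 4)
plt.show()
```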

Fig. 2. Dimensions for Turkey and Pakistan, 1999 (Source: Mosaic website)

Fig. 3. Dimensions for Finland, 1994-1997 (Source: Mosaic website)

The IDC/World Times Information Society Index is the fifth installment of the Information Society Index (ISI) research done at the International Data Corporation. The index measures information capacity and wealth for 55 nations, accounting for 97% of the world's GDP and more than 99% of all IT spending. Based on clustering some 23 variables into four classes (computer infrastructure, information infrastructure, Internet infrastructure, and social infrastructure), the latest figures consulted (2000) classify countries as "skaters" (ISI score above 3,500, with Sweden, Norway, Finland, and the United States occupying the first four ranks), "striders" (ISI above 2,000, with New Zealand 16th, Belgium 17th, Taiwan 18th, and Korea 19th), "sprinters" (ISI score above 1,000, with the UAE 28th, Hungary 29th, Poland 30th, and Argentina 31st), and "strollers" (ISI score below 1,000, with Saudi Arabia 44th, Brazil 45th, Colombia 46th, and Thailand 47th). The ISI variables are: computer infrastructure (PCs installed per capita; home PCs shipped per household; government and commercial PCs shipped per non-agricultural workforce; educational PCs shipped per student and faculty; percent of non-home networked PCs; software vs. hardware spending); information infrastructure (cable subscribers per capita; cellular phone ownership per capita; cost per phone call; fax ownership per capita; radio ownership per capita; telephone line error rates; telephone lines per household; TV ownership per capita); Internet infrastructure (business Internet users per non-agricultural workforce; home Internet users per household; education Internet users per student and faculty; e-commerce spending per total Internet users); and social infrastructure (civil liberties; newspaper readership per capita; press freedom; secondary school enrollment; tertiary school enrollment). No rationale appears to be given in the ISI's criteria for bundling these specific variables into the factors. Some of the variables could arguably have been merged into one, while others are split into demographic classes when it should not have been necessary to do so. An assessment of what each variable contributes to its cluster (and the percentage of variance explained) may well be needed.

The Economist Intelligence Unit/Pyramid Research e-readiness rankings tally scores across six categories, including the business environment rankings, and 19 additional indicators. Each variable in the model is scored on a scale from one to ten. The six categories that feed into the final rankings (and their weights in the model) are: connectivity (30%), business environment (20%), e-commerce consumer and business adoption (20%), the legal and regulatory environment (15%), supporting e-services (10%), and social and cultural infrastructure (5%). "Connectivity" measures the access that individuals and businesses have to basic fixed and mobile telephony services, including voice and both narrowband and broadband data. Affordability and availability of service (both a function of the level of competition in the telecoms market) also figure as determinants of connectivity. "Business environment" bundles 70 indicators covering criteria such as the strength of the economy, political stability, the regulatory environment, taxation, and openness to trade and investment. The resulting business environment rankings measure the expected attractiveness of the general business environment over the next five years. "E-commerce consumer and business adoption" evaluates the extent of credit-card ownership as well as the existence of secure, reliable and efficient electronic payment mechanisms, the ability of vendors to ensure timely and reliable delivery of goods, and the extent of website development by local firms. "Legal and regulatory environment" considers the extent of legal support for virtual transactions and digital signatures; ease of licensing and the ability of firms to operate with a minimal but effective degree of regulation are other criteria. "Supporting e-services" include portals and other online intermediaries, web-hosting firms, application service providers (ASPs), as well as website developers and e-business consultants; the rankings assess the extent to which local companies and organizations have access to these services. Finally, "social and cultural infrastructure" assesses the national proclivity to business innovation and receptiveness to web content.
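Mechanically, rankings like the EIU's reduce to a weighted sum of category scores. The sketch below uses the category weights quoted above; the identifier names and the flat example profile are hypothetical, and the real model's aggregation details may differ.

```python
# A minimal sketch of a weighted composite score in the style of the EIU
# e-readiness rankings. Weights follow the text; the example profile is
# hypothetical.
WEIGHTS = {
    "connectivity": 0.30,
    "business_environment": 0.20,
    "consumer_business_adoption": 0.20,
    "legal_regulatory": 0.15,
    "supporting_e_services": 0.10,
    "social_cultural": 0.05,
}

def composite_score(scores):
    """Weighted sum of category scores, each on the model's 1-10 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

example = {k: 7.0 for k in WEIGHTS}   # hypothetical flat profile
print(composite_score(example))       # -> 7.0
```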

Leadership; Information Security; Human Capital; E-Business Climate. A quick look at McConnell International's ratings of "Global E-Readiness" shows how complex the reality of ICT use is. For example, Mexico rated poorly on connectivity, but reasonably well on human capital, information security, government policies and the business climate. The Philippines had good human capital, but rated poorly in all other areas (McConnell 2000b). The Internet Economy Indicators, developed at the Center for Research on Electronic Commerce at the Graduate School of Business, the University of Texas at Austin, and sponsored by Cisco Systems, seek to provide a foundation for conceptualizing and measuring the various components of the Internet Economy. These indicators – the Internet Economy Revenues Indicator and the Internet Economy Jobs Indicator – are built on an analysis of four layers of the Internet Economy. The Internet Economy Indicators divide the Internet Economy into four distinct but related layers: Internet infrastructure, Internet applications, Internet intermediaries and Internet-based transactions. APEC e-readiness has released their 5th version of their annual assessment of Asia-pacific countries in terms of their readiness for the New Economy. They define readiness as the degree to which an economy or community is prepared to participate in the digital economy. Readiness is assessed by APEC by determining the relative standing of the economy in the areas that are most critical for e-commerce participation. Six broad indicators of readiness for e-commerce are developed into a series of questions that provide direction as to desirable policies that will promote e-commerce and remove barriers to electronic trade. Six categories are measured for "readiness for e-commerce:" 1. basic infrastructure and technology (speed, pricing, access, market competition, industry standards, foreign investment), 2. access to network services (bandwidth, industry diversity, export controls, credit card regulation), 3. use of the Internet (use in business, government, homes), 4. promotion and facilitation (industry led standards), 5. skills and human resources (ICT education, workforce), and 6. positioning for the digital economy (taxes and tariffs, industry self-regulation, government regulations, consumer trust). The Index of the Massachusetts Innovation Economy measures progress of three key components of the Massachusetts Innovation Economy. It is based on a dynamic conceptual framework that links resources to economic results through an innovation process. The framework measures Massachusetts’ progress in leveraging its resources through innovation to create higher levels of economic performance. The Massachusetts Innovation Economy has three interrelated and interactive components: Results (Outcomes for people and business— job growth, rising average wages, and export sales), Innovation process (Dynamic interactions that translate resources into results—idea generation, commercialization, entrepreneurship,

The City of Seattle has developed a set of indicators for measuring the community's level of access to the Information Economy. They are arranged into six major categories; each category has a set of subtopics with a description and one or more measurements, and the measurements are intended cumulatively to provide an indicator of the topic presented. The indicator categories are: Access (ownership and home use, public access and level of use); Information Technology Literacy and Fluency; Business and Economic Development (tracking workforce and industry needs and impacts on the community); Community Building (the extent to which IT is being used in community-building activities); Civic Participation; Human Relationships to Technology (including quality of life); and Partnerships and Resource Mobilization.

The Computer Systems Policy Project's (CSPP)² "Readiness for the Networked World" defines e-readiness as the degree to which a community is prepared to participate in the Networked World. The index examines 19 different categories of indicators, ranking each by levels of advancement in Stages One through Four. The guide neither offers specific advice nor suggests that the only route from Stage Two to Stage Four is through Stage Three. Nor does it provide an overall score; it seeks only to offer a starting point in an ICT planning process. The categories are linked, each driving the others, such that a community cannot concentrate solely on one area but must pay attention to each, noting where it might be able to capitalize on synergies among the categories. The categories fall within five groups: Network Access (information infrastructure, Internet availability, Internet affordability, network speed and quality, hardware and software, service and support), Networked Learning, Networked Society, Networked Economy, and Network Policy.

² CSPP is a "public policy advocacy group...comprised of the Chairman and Chief Executive Officers" of US information technology companies.

The Commonwealth of Australia (2000) has assessed e-commerce preparedness using a new indicator: the Allen E-commerce Preparedness (ECP) Index. This is based on key factors illustrating enduring structural differences between the preparedness of the States and Territories to adopt e-commerce. The composite index includes data associated with e-commerce use, arranged into two clusters:

• data indicating the capacity to use e-commerce in supply; and
• indicators of the capacity to use e-commerce in consumption.

The analysis in the report is conducted by modeling a "base case" of the economy in which e-commerce is frozen. Another scenario is then modeled with e-commerce factored into the economy, enabling the effects of e-commerce to be isolated by comparing the two scenarios. The Monash Multi-Regional Forecasting (MMRF) model simulation results indicate that Australia's economy and the economies of all of its States gain in the medium to long term: they are projected to have a higher level of output of between 0.8 per cent and 3.6 per cent by the year 2010, with national GDP rising by 2.9 per cent. The Allen ECP Index has thus been developed to identify and assess the structural regional differences in the impact of greater use of e-commerce. Scores have been normalized to facilitate comparison; the higher the Allen ECP Index, the more prepared the specific economy is with respect to the take-up of e-commerce. The Allen ECP Index is thus a composite index based on two clusters of factors and data:

• supply: the preparedness and propensity of producers to use e-commerce, and the initiatives of governments in helping to prepare their respective economies for the take-up of e-commerce (weighted at 0.60); and
• consumption: the differing propensity of consumers to use e-commerce (weighted at 0.40).
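The normalize-then-weight construction just described can be sketched in a few lines. Everything below other than the 0.60/0.40 weights is hypothetical: the region names, the raw scores, and the choice of min-max rescaling, which the Allen report may implement differently.

```python
# A hedged sketch of a two-cluster composite in the style of the Allen ECP
# Index: rescale each cluster's scores across regions, then combine with the
# 0.60 (supply) / 0.40 (consumption) weights from the text. Data values are
# hypothetical.

def minmax(values):
    """Rescale a list of raw scores onto [0, 1] for comparability."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

regions = ["NSW", "VIC", "QLD"]
supply_raw = [12.0, 9.5, 7.0]        # e.g., producer uptake measures
consumption_raw = [8.0, 8.5, 5.0]    # e.g., consumer propensity measures

supply = minmax(supply_raw)
consumption = minmax(consumption_raw)

for region, s, c in zip(regions, supply, consumption):
    ecp = 0.60 * s + 0.40 * c
    print(f"{region}: ECP = {ecp:.2f}")
```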

In conclusion, a comparative look at several widely diffused measures reveals far too few common findings. One reason may be the current lack of theory supporting the measures and, even more, the circularity of the bits of theory that underlie some of them: the variables chosen to be bundled into the different clusters already embed assumptions about what the indices are supposed to measure.

The logical question this raises is: what causes the contradictions among the different measurement models, and how are they related to the conceptual assumptions at their foundation? To answer this, we employed several statistical regression analyses of three indices: UNDP's "Technology Achievement Index", CID's "Networked Readiness Index", and Metricnet's "Global New E-Economy Index". The analyses and their results are discussed in the next section.

3. Statistical analysis of UNDP's "Technology Achievement Index"

The uneven diffusion of information and communications technology, the digital divide, has captured public discourse worldwide, almost becoming a catch phrase. There have long been huge differences among countries but, as stated in the previous sections, the differences are mapped differently by different measures. One of these measures, the "Technology Achievement Index" (TAI), was introduced in the 2001 UNDP Report [100] and presents a snapshot of each country's average achievements in creating and diffusing technology and in building human skills to master new innovations. In addition to the differences across countries, the index reveals considerable disparities within countries. India, for example, home to one of the most dynamic global hubs (Bangalore, which Wired rated 11th among the 46 hubs it surveyed), ranks 63rd in the technology achievement index, falling among the lower end of dynamic adopters. Why? Because of huge variations in technological achievement among Indian states. The country has the world's seventh largest number of scientists and engineers, some 140,000 in 1994; yet in 1999, mean years of schooling were only 5.1 and adult illiteracy was 44%. Descriptive statistics like these are a good example of the contradictory findings that arise when comparing different measures. This led to the present inquiry into what causes or explanations can be found for the contradictions and the different rankings of the same countries across different indices.

The statistical analyses reported in this paper use findings from several regressions in an attempt to predict technology achievement (based on UNDP's TAI) using both the composite Human Development Index (HDI) for the same year and specific variables from that index (educational attainment, income, etc.). The HDI measures a country's overall achievements in three basic dimensions of human development: longevity, knowledge, and a decent standard of living. It is measured by life expectancy, educational attainment (adult literacy and combined primary, secondary and tertiary enrolment), and adjusted income per capita in purchasing power parity (PPP) US dollars. The HDI is a summary, not a comprehensive measure of human development. As a result of refinements in the HDI methodology over time and changes in data series, the HDI should not be compared across editions of the Human Development Report (see [100] for detailed explanations of the index).
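For readers unfamiliar with the HDI's construction, the sketch below paraphrases the goalpost-rescaling methodology published in the Human Development Reports of this period (see [100] for the authoritative technical note); the goalpost values are our rendering of UNDP's published figures, not part of our analysis.

```python
# Hedged sketch of the HDI methodology of the 2001 HDR era: each dimension is
# rescaled between fixed "goalposts", income is log-discounted, and the HDI is
# the unweighted mean of the three dimension indices.
import math

def dimension_index(value, lo, hi):
    """Rescale a raw value onto [0, 1] between fixed goalposts."""
    return (value - lo) / (hi - lo)

def hdi(life_expectancy, adult_literacy, gross_enrolment, gdp_pc_ppp):
    life = dimension_index(life_expectancy, 25, 85)
    # Education combines adult literacy (2/3) and combined enrolment (1/3),
    # both expressed as proportions in [0, 1].
    education = (2 / 3) * adult_literacy + (1 / 3) * gross_enrolment
    # Income is logged to reflect diminishing returns to GDP per capita.
    income = dimension_index(math.log(gdp_pc_ppp),
                             math.log(100), math.log(40_000))
    return (life + education + income) / 3

print(round(hdi(70.0, 0.90, 0.65, 5_000), 3))   # hypothetical country -> ~0.74
```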

As stated in [100] (which uses data up to 2000), the TAI aims to capture "how well a country is creating and diffusing technology and building a human skill base—reflecting capacity to participate in the technological innovations of the network age". This composite index measures achievements, not potential, effort or inputs. It is constructed using indicators, not direct measures, of a country's achievements in four dimensions: "technology creation capabilities" (number of patents granted per capita, and receipts of royalty and license fees from abroad per capita); "diffusion of recent innovations" (based on Internet diffusion, and exports of high- and medium-technology products as a share of all exports); "diffusion of old innovations" (based on electricity consumption in kWh per capita, and telephones per 1,000 people); and "human skills" (mean years of schooling, and the gross enrolment ratio of tertiary students in science, mathematics and engineering). As its authors note, the TAI is not a measure of which country is leading in global technology development; it focuses on how well the country as a whole is participating in creating and using technology. For example, in the UNDP data the United States has far more inventions and Internet hosts in total than Finland, but it does not rank as highly in the index, because in Finland the Internet is more widely diffused and more is being done to develop a technological skill base throughout the population. One of the reasons for proceeding with the current analyses was to study the predictors of such variations in ranking.

Several statistical analyses were performed using the UNDP 2001 Human Development Report, which contains data through 2000 (N = 162; after removal, N = 157). A simple linear regression was performed to examine HDI level (with "high development", "medium development" and "low development" as the categories used by UNDP) as a predictor of TAI level (with "leaders", "potential leaders", "dynamic adopters", "marginalized" and "others" as the categories proposed by UNDP). Table 3 reports the statistics associated with this analysis, and shows that HDI level accounted for a significant portion of the variance in Technology Achievement Index rank.
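A regression of the kind reported in Table 3 can be reproduced along the following lines. The file name and column names are hypothetical placeholders for data keyed from the UNDP 2001 tables, and the original analysis may have been run in different software.

```python
# A minimal sketch of the simple regression behind Table 3, assuming a
# hypothetical CSV ("undp_2001.csv") with one row per country and columns
# "hdi" and "tai" taken from the 2001 Human Development Report tables.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("undp_2001.csv").dropna(subset=["hdi", "tai"])

X = sm.add_constant(df["hdi"])   # predictor: Human Development Index
y = df["tai"]                    # outcome: Technology Achievement Index

model = sm.OLS(y, X).fit()
print(model.summary())           # reports beta, F, adjusted R^2, p-value
```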

Table 3. Human development as predictor of Technology Achievement

Predictor: HDI, β = .80*

F(1, 160) = 293.07, Adjusted R² = .64, p < .001