
PROCEEDINGS OF THE IADIS INTERNATIONAL CONFERENCE on

INTERNET TECHNOLOGIES & SOCIETY (ITS 2010)

PERTH, AUSTRALIA 29 NOVEMBER – 1 DECEMBER, 2010

Organised by IADIS International Association for Development of the Information Society

Co-Organised by Curtin University, Perth, Australia


Copyright 2010 IADIS Press. All rights reserved. This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Permission for use must always be obtained from IADIS Press. Please contact [email protected]

Edited by Piet Kommers, Tomayess Issa and Pedro Isaías Associate Editors: Luís Rodrigues and Patrícia Barbosa

ISBN: 978-972-8939-31-1


TABLE OF CONTENTS

FOREWORD    xi

PROGRAM COMMITTEE    xv

KEYNOTE LECTURES    xix

FULL PAPERS

TIME, SPACE AND SITUATIONS IN AN INTERNET BULLETIN BOARD SERVICE    3
Peter Dell

ELEMENTS OF NIGERIA’S PREPARATION TOWARDS ADVANCED E-DEMOCRACY    14
Steve Nwokeocha

APPLYING THE THEORY OF PLANNED BEHAVIOUR TO EXPLAIN THE USAGE INTENTIONS OF MUSIC DOWNLOAD STORES: GENDER AND AGE DIFFERENCES    22
Markus Makkonen, Veikko Halttunen and Lauri Frank

CAN VIRTUAL WORLD PROPERTY BE CONSIDERED A DIGITAL GOOD?    33
Nicholas C. Patterson and Michael Hobbs

CONSTRUCTION OF TRUST IN ONLINE AUCTION INTERACTIONS    41
Sanna Malinen and Jarno Ojala

THE EFFECTS OF SOCIOECONOMIC CHARACTERISTICS AND CONSUMER INVOLVEMENT ON THE ADOPTION OF MUSIC DOWNLOAD STORES AND PAID MUSIC SUBSCRIPTION SERVICES    49
Markus Makkonen, Veikko Halttunen and Lauri Frank

THE ROLE OF INFORMATION TECHNOLOGY IN PHARMACEUTICAL SUPPLY CHAINS    59
Ladislav Kochman, Tomayess Issa and Paul Alexander

CONTRAIL: CONTENTS DELIVERY SYSTEM BASED ON A NETWORK TOPOLOGY AWARE PROBABILISTIC REPLICATION MECHANISM    67
Yoshiaki Sakae, Masumi Ichien, Yasuo Itabashi, Takayuki Shizuno and Toshiya Okabe

MULTIAGENT WORKGROUP COMPUTING    75
Ben Choi


INVESTIGATION OF ELEMENTS FOR LEADERSHIP BY HYBRID INTELLIGENT SYSTEMS    83
Yuya Ushida, Keiki Takadama and Minjie Zhang

STREAM PREFETCHING METHOD ON STREAMING MEDIA SERVICE FOR HIGH SPEED MOBILE USERS    91
Dongmahn Seo and Suhyun Kim

THE RATIONALES BEHIND FREE AND PROPRIETARY SOFTWARE SELECTION IN ORGANISATIONS    99
Damien J. Sticklen and Theodora Issa

SEMANTIC WEB SERVICES: STATE OF THE ART    107
Markus Lanthaler, Michael Granitzer and Christian Gütl

VIRTUAL GOODS REPURCHASE INTENTION IN SOCIAL NETWORKING SITES    115
Echo Huang

PROSUMERIZATION - CHANGING THE SHAPE OF MOBILE SERVICE PROVISION    123
Dirk Werth, Andreas Emrich and Alexandra Chapko

DELAY REDUCTION IN MANET REFLECTING CHAIN OF RESCUE COMMAND    131
Yuki Okuda, Fumiko Harada and Hiromitsu Shimakawa

IMPROVING ARABIC INFORMATION RETRIEVAL: INTRODUCING THE LOCAL STEM    139
Eiman Tamah Al-Shammari

ONTOLOGY CONCEPT ENRICHMENT VIA TEXT MINING    147
Qiang Wang, Susan Gauch and Hiep Luong

FINDING HOW-TO INFORMATION WEB PAGES AND THEIR RANKING BY READABILITY    155
Ryoji Nonaka, Takayuki Yumoto, Manabu Nii and Yutaka Takahashi

THE IMPACT OF USING WEB-BASED CURRICULA ON NINTH GRADE STUDENTS' ACHIEVEMENT IN MATHEMATICS    164
Wafa' N. Muhanna and Awatif M. Abu-Al-Sha'r

PRELIMINARY ANALYSIS OF CLOUD-BASED ASSIGNMENTS    172
Ashley M. Aitken

EIDSVOLL 1814: TEACHING HISTORY IN 3D COLLABORATIVE VIRTUAL ENVIRONMENTS    181
Ekaterina Prasolova-Førland and Ole Ørjan Hov

FORMAL MODELS FOR EXTRACTION AND VISUAL PRESENTATION OF RESEARCH INFORMATION SUITABLE FOR ACTIVITY SITUATIONS    189
Shoichi Nakamura, Yachiyo Ishikawa, Setsuo Yokoyama, Yasuhiko Morimoto and Youzou Miyadera

VALIDATING THE MEASURES FOR INTENTION TO ENROL IN AN ONLINE MBA PROGRAM    198
Lim Lay Lee and Suhaiza Zailani

SPANNING THE DIGITAL DIVIDE: A REMOTE IT LEARNING ENVIRONMENT FOR THE VISION IMPAIRED    206
Helen Armstrong, Iain Murray and Nazanin Mohamadi

FROM “PATIENT-CENTRIC” TO “CITIZEN-CENTRIC”: REVIEW OF THE TERM USAGE THROUGH E-HEALTH EVOLUTION    214
Yong Han

SHORT PAPERS

PERFORMANCE EVALUATION OF THE CONVENTIONAL SCHEDULING ALGORITHMS FOR QOS CONTROL IN HOME GATEWAYS    223
Toshinori Takabatake and Koike Akira

PASSWORD PRACTICES OF SWEDISH WEBSITES    229
Nena Lim and Önay Jakobov

OLDER PEOPLE AND COMPUTER TECHNOLOGIES    235
Isaura Ribeiro

DESIGN OF AN OMA DM-BASED REMOTE MANAGEMENT SYSTEM FOR PERSONAL HEALTHCARE DATA DEVICES    241
Ju Geon Pak and Kee Hyun Park

A MOBILE MASHUP SCHEME FOR FLEXIBLE SERVICE INTEGRATION    246
Takahiro Koita and Yosuke Imairi

PBQOS – A POLICY-BASED MANAGEMENT ARCHITECTURE FOR OPTIMIZED MULTIMEDIA CONTENT DISTRIBUTION TO CONTROL THE QOS IN AN OVERLAY NETWORK    252
Fernando Luiz de Almeida, Graça Bressan and Denis Gabos

APPLYING SOCIAL NETWORKING TOOLS TO ELEMENTARY LEARNING: TYPHOONS AS THE THEME OF AN INTEGRATED CURRICULUM UNIT    257
Hsu-Wan Chen

APPLYING DIGITAL STORYTELLING TO ELEMENTARY SCIENCE EDUCATION: THE CASE OF DING-DONG RAINWATER GARDEN IN TAIWAN    262
Hsu-Wan Chen

DISTANCE LEARNING SYSTEM AS AN ALTERNATIVE TO TRADITIONAL TRAINING: A CASE OF JORDANIAN PUBLIC SECTOR EMPLOYEES    267
Huda Ibrahim and Thamer Ahmad Mousa Alrawashdeh

SCREENCASTS: AUGMENTING LEARNING MATERIALS IN OPEN DISTANCE LEARNING    272
Janette Kruger

WEB-BASED LEARNING ENVIRONMENTS FOR THE VISION IMPAIRED    277
Ruchi Permvattana, Helen Armstrong and Iain Murray

RELATIONSHIP BETWEEN EDUCATIONAL LEVEL AND CUSTOMER ADOPTION IN INTERNET BANKING IN CURTIN MIRI COMMUNITY: CASE STUDY    282
Amin Saedi

FAST TRANSFORMATION IN E-BUSINESS ENVIRONMENTS: THIRD PARTY FULFILMENT ADAPTATION TO ONLINE RETAILER DEMANDS    287
Paul Alexander

A MODIFIED TAM FOR VERIFICATION OF E-COMMERCE ADOPTION FACTORS IN SMES    292
Mohammad Ali Sarlak, Mohammad Babaian and Ali Ghorbani

KNOWLEDGE MANAGEMENT AND IMPROVEMENTS OF PROFESSIONAL KNOWLEDGE IN CREATIVITY INSTRUCTION    297
Yu-chu Yeh, Ling-yi Huang, Yu-hua Chen, Yi-ling Yeh, Bi-ling Yeh and Di-rong Cheng

REFLECTION PAPERS

DISCREPANCY OF TERMINOLOGY IN BPM-SYSTEM IMPLEMENTATION    303
Shima Abdullah

A COMPARISON BETWEEN KEYWORD-BASED AND ONTOLOGY-BASED ADVERTISING NETWORKS    307
Lilac A. E. Al-Safadi

COMMERCIALIZATION OF NEW ICT-RELATED OPPORTUNITIES IN THE NEWS INDUSTRY: AVOIDING A BUSINESS-AS-USUAL MIND-SET    311
John Parm Ulhøi and Anna B. Holm

DEVELOPING A MOBILE LEARNING CONCEPTUAL MODEL FOR UNIVERSITIES IN PAKISTAN    316
Umera Imtinan, Vanessa Chang and Tomayess Issa

DIGITAL LITERACIES – A CRUCIAL REQUIREMENT FOR A SUSTAINABLE ECONOMIC FUTURE IN AN INFORMATION AND KNOWLEDGE-BASED SOCIETY?    321
Lydia Bauer

POSTERS

FROM DOS TO UNICODE – A LITERATURE REVIEW AND A SYRIAC (ARAMAIC) STANDPOINT    329
Theodora Issa, Tomayess Issa and Touma B. Issa

NETWORK-BASED ENSEMBLE CLASSIFIER FOR DIFFERENTIAL DIAGNOSIS OF BREAST TUMORS IN ULTRASONIC IMAGES    333
Atsushi Takemura

ASSESSING INFORMATION TECHNOLOGY STRATEGIC PLANNING PROCESS AMONG BRUNEIAN SMES    337
Afzaal H. Seyal and Hj. Awg Yussof Hj. Mohammad

DOCTORAL CONSORTIUM

A STUDY OF INSIDER THREAT BEHAVIOUR: DEVELOPING A HOLISTIC FRAMEWORK    343
Asmaa M. Munshi

COMPONENT-BASED RUNTIME ENVIRONMENT FOR INTERNET APPLICATIONS    349
Mark Wallis, Frans Henskens and Michael Hannaford

AUTHOR INDEX


FOREWORD

These proceedings contain the papers of the IADIS International Conference on Internet Technologies & Society (ITS 2010), organised by the International Association for Development of the Information Society and co-organised by Curtin University, held in Perth, Australia, 29 November – 1 December 2010. The IADIS Internet Technologies & Society 2010 conference (ITS 2010) aims to address the main issues of concern within the WWW/Internet and to assess the influence of the Internet on the Information Society. Broad areas of interest are Internet Technologies, Information Management, e-Society and Digital Divide, e-Business / e-Commerce, e-Learning, New Media and e-Society, Digital Services in e-Society, e-Government / e-Governance and e-Health. These broad areas are divided into more detailed areas (see below). However, innovative contributions that do not fit into these areas will also be considered, since they might be of benefit to conference attendees.

Internet Technologies: Electronic Data Interchange (EDI), Intelligent Agents, Intelligent Systems, IS Security Issues, Mobile Applications, Multimedia Applications, e-Payment Systems, Protocols and Standards, Semantic Web and XML, Services, Architectures and Web Development, Software Requirements and Web Architectures, Storage Issues, Strategies and Tendencies, System Architectures, Telework Technologies, Ubiquitous Computing, Virtual Reality, Web 2.0 technologies, Social Networking and Marketing and Wireless Communications.



Information Management: Computer-Mediated Communication, Content Development, Cyber law and Intellectual Property, Data Mining, e-Publishing and Digital Libraries, Human Computer Interaction and Usability, Information Search and Retrieval, Knowledge Management, Policy Issues, Privacy Issues, Social and Organizational Aspects, Virtual Communities, Internet and Disability, Internet and Aging Population, e-Society and Digital Divide, Social Integration, Social Bookmarking, Social Software, e-Democracy and Social Integration.



e-Business / e-Commerce: Business Ontologies and Models, Digital Goods and Services, e-Business Models, e-Commerce Application Fields, e-Commerce Economics, e-Commerce Services, Electronic Service Delivery, e-Marketing, Languages for Describing Goods and Services, Online Auctions and Technologies, Virtual Organisations and Teleworking.




e-Learning: Collaborative Learning, e-Mobile Learning, Curriculum Content Design & Development, Delivery Systems and Environments, Educational Systems Design, e-Citizenship and Inclusion, e-Learning Organisational Issues, Evaluation and Assessment, Political and Social Aspects, Virtual Learning Environments and Issues and Web-based Learning Communities.



New Media and e-Society: Digitization, heterogeneity and convergence, Interactivity and virtuality, Citizenship, regulation and heterarchy, Innovation, identity and the global village syndrome, Internet Cultures and new interpretations of “Space” and Polity and the Digitally Suppressed.



Digital Services in e-Society: Service Broadcasting, Political Reporting, Development of Digital Services, Freedom of Expression, e-Journalism and Open Access.



e-Government / e-Governance: Accessibility, Democracy and the Citizen, Digital Economies, Digital Regions, e-Administration, e-Government Management, e-Procurement, e-Supply Chain, Global Trends, National and International Economies and Social Inclusion.



e-Health: Data Security Issues, e-Health Policy and Practice, e-Healthcare Strategies and Provision, Legal Issues, Medical Research Ethics and Patient Privacy and Confidentiality.

The IADIS Internet Technologies & Society 2010 conference (ITS 2010) received 154 submissions from more than 25 countries. Each submission was reviewed in a double-blind review process by an average of four independent reviewers to ensure quality and maintain high standards. Of the papers submitted, 26 received referee ratings that qualified them for publication as full papers, giving an acceptance rate below 17%. Other submissions were published as short papers, reflection papers, doctoral papers and poster demonstrations. The authors of the best papers will be invited to provide revised and expanded versions of their papers for a special issue of the International Journal of Web Portals (IJWP), http://www.igi-global.com/ijwp (ISSN: 1938-0194), and for the Journal of Theoretical and Applied Electronic Commerce Research, http://www.jtaer.com/ (ISSN: 0718-1876). In addition to the presentation of full papers, short papers, reflection papers, doctoral papers and poster demonstrations, the conference also includes two keynote presentations from internationally distinguished researchers. We would therefore like to express our gratitude to Dr. Peter Dell, School of Information Systems, Curtin University, Australia, and Associate Professor Catherine McLoughlin, Australian Catholic University, Canberra, Australia.


A successful conference requires the effort of many individuals. We would like to thank the members of the Program Committee for their hard work in reviewing and selecting the papers that appear in this book. We are especially grateful to the authors who submitted their papers to this conference and to the presenters who provided the substance of the meeting. We wish to thank all members of our organizing committee. Last but not least, we hope that participants enjoyed Perth and their time with colleagues from all over the world, and we hope that you can join us for the next edition of the IADIS International Conference on Internet Technologies & Society.

Piet Kommers, University of Twente, The Netherlands
Tomayess Issa, Curtin University, Perth, Australia
Pedro Isaías, Universidade Aberta (Portuguese Open University), Portugal
Conference and Program Co-Chairs

Perth, Australia
29 November 2010


PROGRAM COMMITTEE

CONFERENCE AND PROGRAM CO-CHAIRS

Piet Kommers, University of Twente, The Netherlands
Tomayess Issa, Curtin University, Perth, Australia
Pedro Isaías, Universidade Aberta (Portuguese Open University), Portugal

COMMITTEE MEMBERS

Abdelwahab Hamou-Lhadj, Concordia University, Canada
Ananda Jeeva, Curtin University of Technology, Australia
Andrew Stranieri, University of Ballarat, Australia
Anteneh Ayanso, Brock University, Canada
Astrid Weiss, University of Salzburg, Austria
Carina de Villiers, University of Pretoria, South Africa
Charlynn Miller, University of Ballarat, Australia
Chei Sian Lee, Nanyang Technological University, Singapore
Chrisa Tsinaraki, University of Trento, Italy
Christian Zirpins, University of Karlsruhe (TH), Germany
Christopher Buckingham, Aston University, UK
Chun-Hsin Wu, National University of Kaohsiung, Taiwan
David Geerts, University of Leuven, Belgium
Dominik Zyskowski, Poznan University of Economics, Poland
Dumitru Burdescu, University of Craiova, Romania
Dumitru Roman, SINTEF, Norway
Durgesh Pant, Kumaun University, India
Eduard Babulak, University of South Pacific, Fiji Islands
Ephrem Eyob, Virginia State University, USA
Gérard Dupont, Université de Rouen, LITIS laboratory, France
Heinz Dreher, Curtin University of Technology, Australia
Helen Armstrong, Curtin University of Technology, Australia
Heng Tang, University of Macau, China
Hiroshi Mineno, Shizuoka University, Japan
Ina Fourie, University of Pretoria, South Africa
Irene Polycarpou, The Colorado School of Mines, USA
Jeton McClinton, Jackson State University, USA
Jinsong Leng, Edith Cowan University, Australia
John Venable, Curtin University of Technology, Australia


Jozef Hvorecky, High School of Management/City University of Seattle, Slovakia
Junping Sun, Nova Southeastern University, USA
Kazushi Ohya, Tsurumi University, Yokohama, Japan
Konstantin Todorov, MAS lab, Ecole Centrale Paris, France
Liam Peyton, University of Ottawa, Canada
Liana Stanescu, University of Craiova, Romania
Marianna Obrist, University of Salzburg, Austria
Marita Turpin, University of Pretoria, South Africa
Martin Drlik, University of Constantinus the Philosopher in Nitra, Slovakia
Martin West, Curtin University of Technology, Australia
Maslin Masrom, Universiti Teknologi Malaysia International Campus, Malaysia
Matthew Mitchell, Swinburne University of Technology, Australia
Morad Benyoucef, University of Ottawa, Canada
Mounir Kehal, ESC Rennes School Of Business, France
Neels Kruger, University of Pretoria, South Africa
Niccolo Capanni, The Robert Gordon University, UK
Nicolas James, MAS Lab Ecole Centrale Paris, France
Paul Alexander, Curtin University of Technology, Australia
Peixiang Liu, Nova Southeastern University, USA
Peter Dell, Curtin University of Technology, Australia
Pierre Tiako, Langston University, USA
Prabhat K. Mahanti, University of New Brunswick, Canada
Raphael Khoury, Laval University, Canada
Rastislav Zabojnik, University of St. Cyril and Methodius (UCM), Slovakia
Richard Khoury, Lakehead University, Canada
Richard Picking, Glyndwr University, UK
Robert Joseph Skovira, Robert Morris University, USA
Sally Firmin, University of Ballarat, Australia
Sally Jo Cunningham, University of Waikato, New Zealand
Sarka Kvetonova, Brno University of Technology, Czech Republic
Sitalakshmi Venkatraman, University of Ballarat, Australia
Songhua Xing, University of Southern California, Los Angeles, USA
Stuart Cunningham, Glyndwr University, UK
Suhaiza Zailani, University Sains Malaysia, Malaysia
Theodora Issa, Curtin University of Technology, Australia
Tim Harrison, University of Ballarat, Australia
Tiong-Thye Goh, Victoria University of Wellington, New Zealand
Tzung-Pei Hong, National University of Kaohsiung, Taiwan
Vandana Bhattacherjee, BIT, Ranchi, India
Vanessa Chang, Curtin University of Technology, Australia
Vibha Rani Gupta, Birla Institute of Technology, India
Vladimir Burcik, Academy of Communication, Slovakia


Vladimir Bures, University Hradec Kralove, Czech Republic
Vladimir Fomichov, National Research University "Higher School of Economics", Russia
Wei Li, Nova Southeastern University, USA
Xiao Wu, Southwest Jiaotong University, China
Xue Bai, Virginia State University, USA
Zalina Mohd Daud, Razak School of UTM in Engineering and Advanced Technology, Malaysia
Zhaohao Sun, University of Ballarat, Australia
Jaydip Sen, TCS, India
Helen Thompson, University of Ballarat, Australia
P. Balamuralidhar, TCS Innovation Labs, India


KEYNOTE LECTURES

THE INTERNET IN THE COMING YEARS

By Dr. Peter Dell, School of Information Systems, Curtin University, Australia

Abstract: The Internet is about to undergo a fundamental change. The supply of IP addresses, used by every device connected to the Internet and analogous to a phone number, is rapidly dwindling and will be completely exhausted by around January 2012. A new version of the Internet Protocol, known as IPv6, was standardised more than a decade ago but has not been widely adopted, and thus the transition to IPv6 will be a bumpy one – if indeed it happens at all. Either way, the Internet tomorrow will be different from the Internet as we know it today. This presentation will explore why the transition to IPv6 has not happened in the way the technical community had hoped, will describe potential Internet scenarios in the medium term, and will investigate some of the possible social and economic consequences.

21ST CENTURY LEARNING AND HIGHER EDUCATION PEDAGOGY: WHAT IS CHANGING AND HOW DO WE CAPITALISE ON EMERGING TECHNOLOGIES?

By Associate Professor Catherine McLoughlin, Australian Catholic University, Canberra, Australia

Abstract: Several major technology trends that are playing a role in shaping the future of higher education globally are ubiquity, mobility, personalization and virtualization. This keynote address offers a global view of changes in higher education and a conceptual framework for understanding the challenges and opportunities involved in rethinking curricula to transform schooling for the 21st century. The rationale draws on changes in the global economy, the impact of ICT on communication, learning and everyday life, and shifts in the outcomes expected of millennial learners. A contextualised conceptual framework for 21st century pedagogy and learning is proposed that includes a global, lifelong learning perspective powered by technological change and an evolving learning society.


Full Papers

IADIS International Conference on Internet Technologies & Society 2010

TIME, SPACE AND SITUATIONS IN AN INTERNET BULLETIN BOARD SERVICE

Peter Dell
Curtin University, GPO Box U1987, Perth WA 6845, Australia

ABSTRACT

The definition of situations proposed by Joshua Meyrowitz is examined in the light of computer-mediated communication (CMC), and refined for this environment. It is concluded that, for situations to be useful in analysing behaviour in public places created within CMC, a definition of situations as information access patterns is too broad. Instead, a definition that recognises such flows of information between situations is required. Goffman’s definition, based on temporal and spatial boundaries, can be used if one takes into account the creation of virtual space. This approach is demonstrated in an analysis of an Internet Bulletin Board Service.

KEYWORDS

Erving Goffman, situation analysis, Internet, BBS.

1. INTRODUCTION

Erving Goffman is one of the most important sociologists of the 20th century, and although his work has influenced many sociological schools of thought, it is difficult to pigeon-hole into any single category. His work is principally about behaviour and social interaction, and works such as The Presentation of Self in Everyday Life (1959) and Behavior in Public Places (1963) provide a detailed field guide for the student of social behaviour. It is important that any study of social behaviour should address the increasing amount of social interaction that occurs on the Internet. Despite this, research to date has largely ignored study of the relationship between ordinary practices in the virtual world and the material world (Jones, 2004).

To study the social implications of the Internet requires a framework for understanding the social interaction that occurs online. Many such theories have been proposed, including Cues-Filtered-Out (CFO) theories, the Social Information Processing Perspective, and Social Identity Deindividuation (SIDE) (Kim, 2000). All of these have shortcomings. First, both the technologically determinist “bandwidth limitation” theories that have emerged from the disciplines of Computer Science and Communications Studies and the “social deterministic” views are overly simplistic (Bargh, 2002). Mabry (2001) also concludes that theories with a narrow focus on the presumed communicability of message channels are at a disadvantage to theories based on self-interest. As well as the preoccupation with anonymity at the expense of other factors, findings from experimental CMC research have also often been inconsistent (Baltes et al., 2002). This is particularly important when one considers that CFO, SIDE and Hyperpersonal approaches are all based on short-term, experimental research (Kim, 2000), which may not be generalisable to situations outside the laboratory.
For example, one empirical study based on natural sources found that confrontational messages were more likely to contain identity information than other messages. This is completely inconsistent with CFO perspectives, and is consistent with SIDE only if one assumes that participants felt strong in-group identification (Mabry, 2001). Further, many studies do not support the view, developed by experimental research, that CMC lacks social content; this is particularly evident among younger and more experienced users (Walther, 1992; Walther, 1996; George and Sleeth, 2000). Another weakness of experimental CMC research is that it has been highly variable. Variation in the types of CMC studied, the wide range of dependent variables, the wide range of participants in experimental


studies and their degree of CMC experience, and the range of tasks and time allocated to complete them have contributed to a situation where it is difficult to compare findings from different studies (Bordia, 1997; Baltes et al., 2002). There is a need for Internet research that encompasses different cultures and that is based on methods other than experimental research. What is the purpose of such research? One option has been to develop CMC-specific theories that overcome the weaknesses already discussed; however, broader theories of social interaction exist and have been the subject of decades of analysis, refinement and critique, and rather than develop ad hoc theories of online interaction it is preferable to demonstrate the application of mature, more general theories (Dell and Marinova, 2008). Indeed, the distinction between online and offline is a false dichotomy, and activities in one influence the other, for example in Aarsand’s (2008) observation of frequent frame-switching between online and offline interaction frames. Goffman’s situational analysis approach is a prime candidate for an existing, mature theory of social interaction to be used online, and is used in this article. The purpose of this paper is to demonstrate that situational analysis can be applied to CMC as well as to face-to-face interaction, by exploring the dual aspects of time and space. An application of this approach is demonstrated in a case study of an Internet Bulletin Board Service (BBS).

2. TIME

Social interaction requires the communication of social information. Goffman distinguishes between embodied and disembodied information – embodied is that which is the result of a current bodily activity, while disembodied is that which requires “something that traps and holds information long after the organism has stopped transmitting” (Goffman, 1963: 14). In other words, embodied information is communicated immediately – in real time – while disembodied information is preserved by some other means after the transmission has finished, and not communicated in real time. The distinction between embodied and disembodied is increasingly difficult to sustain today. Huge volumes of communication are disembodied – indeed, disembodiment is the very nature of CMC. While Goffman emphasised embodied communication, it is possible to apply his approach to CMC by acknowledging that it is real-time social interaction with which he is primarily concerned. On the Internet, real-time applications are often referred to as synchronous applications. By considering synchronicity alongside interactivity, Dell and Marinova (2002) arrange different forms of CMC on a matrix as shown in Figure 1.

Interactive (communication between individuals):
    Synchronous: Online chat, ICQ, IRC, Videoconference, MUD, MMORPG
    Asynchronous: Email, Usenet

Non-interactive (communication between person and machine):
    Synchronous: WWW, FTP, Gopher, (Single-user) Virtual Reality
    Asynchronous: Automatic notification services

Figure 1. CMC applications categorised by synchronicity and interactivity

In studying online social behaviour, we are concerned only with interactive services. While Goffman’s ideas on performance have been applied to non-interactive, typically web-based applications (see Bortree, 2005; Chen, 2010; Cheung, 2000; Karlsson, 1998; Miller, 1995; Miller and Mather, 1998; Walker, 2000), they have been less explored in interactive applications. It is not the purpose of this article to apply Goffmanian theory to the WWW. Rather, it is concerned with its application to interactive services.


3. SPACE

As well as time, one must also address space when analysing CMC. In particular, one question that has piqued the interest of CMC researchers has been whether a virtual space is created within these environments. Steve Jones (1995) asserts that CMC is “socially produced space”, and intuitively it makes sense to consider Multi User Dungeons (MUDs) as virtual spaces as they are generally based on spatial metaphors. IRC channels are also often described in a similar manner – a search reveals many such channels, for example “AllNiteCafe”, “cybercafe”, “flirtbar”, and “cyberpub” to name but a few. Barbatsis et al. (1999) do not even attempt to distinguish between different CMC forms when asserting that space is created in chat rooms, Internet fora, online virtual communities and web sites, in the same way that “televisual” or “filmic” space is created in other media.

On the other hand, Quentin Jones (1997) believes that simply sending and receiving messages – “where postings go directly from one individual to another with no common virtual-place” – does not constitute the creation of virtual space. Similarly, Soukup (2006) concludes that in order for viable spaces to be created, the conditions of localisation, accessibility and presence must be met. We cannot consider all forms of CMC as equivalent, as different applications have different characteristics and different communication processes (Liu, 1999). Certainly, email has no dominant spatial metaphor and is perhaps more akin to posting a letter or sending a telegram than creating a virtual space. Analysis of one application cannot necessarily be extrapolated to explain others. Problems are also caused by differences in what people mean by space. Samarajiva and Shields (1997) distinguish between environment and space, suggesting that environment is only a prerequisite for the creation of space, while space itself is a “context or resource for action” and can be either proximate or virtual.
In order to determine whether a CMC application supports a virtual space, we are really asking whether an application provides a suitable context for social action. In this sense the application is analogous to the environment. Different environments will create different types of space. December (1995) distinguishes between communication space, interaction space, and information space. Communication spaces are those in which discussion is the dominant activity, typically in an asynchronous manner. Interaction spaces are those in which social interaction occurs, and information spaces are those in which the “dissemination and retrieval of network-based information” is key. Re-examining the matrix in Figure 1, we can map these different types of space onto the same four quadrants, as shown in Figure 2. This approach allows us to avoid any technologically determinist “technology x causes effect y” conclusions. In assessing whether a technology can sustain a space for social interaction, we must consider the intentions of the users – interactive or non-interactive, synchronous or asynchronous.

                    Interactive             Non-interactive
    Synchronous     Interaction space       Information space
    Asynchronous    Communication space     Information space

Figure 2. Types of space in CMC
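The two-axis mapping in Figures 1 and 2 can be expressed as a small lookup over the interactivity and synchronicity of an application. The sketch below is illustrative only: the function name, variable names and the choice of example applications are mine, not the paper's, though the quadrant assignments follow the figures.

```python
def space_type(interactive: bool, synchronous: bool) -> str:
    """Map a CMC application's quadrant (Figure 1) to December's space types (Figure 2)."""
    if not interactive:
        # Person-machine communication yields an information space on either axis.
        return "information space"
    # Interactive applications split by synchronicity.
    return "interaction space" if synchronous else "communication space"

# Example applications drawn from Figure 1.
apps = {
    "IRC": (True, True),        # interactive, synchronous
    "Email": (True, False),     # interactive, asynchronous
    "WWW": (False, True),       # non-interactive, synchronous
}

for name, (interactive, synchronous) in apps.items():
    print(f"{name}: {space_type(interactive, synchronous)}")
```

As the paper stresses, the classification is a property of how users employ an application, not of the technology alone, so the booleans describe a use of an application rather than the application itself.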

The type of space then affects the action that takes place within it. The environment in CMC – the hardware and software – is affected not only by whether applications are interactive or non-interactive, synchronous or asynchronous, but also by design decisions taken by those creating the application. On the BBS discussed at length later in this article, discussion fora are referred to using the spatial metaphor of “rooms”, and it is likely this influences users’ perceptions of the type of space that is created. The effect of such design decisions is in itself a rich topic for investigation; however, it is outside the scope of this article. Users’ intentions are difficult to gauge. A useful rule of thumb is to consider whether the user feels as if they are going somewhere when they use a particular application. Virtual space is intangible and exists within the perceptions of the user. Thus, if the user does not feel as if they have visited a virtual place, no socially produced space has been created. Blanchard (2004) makes a similar point, and suggests that a sense of place in virtual behavior settings is derived from mental maps used by participants to understand what is happening when they use CMC. In


other words, the thought processes of the user – whether they feel like they are going anywhere – are central to whether a virtual space is created. A clue is the use of “verb emoting” to overcome the lack of physical contact (Chenault, 1998). Doing so creates at least some allusion to being in the same space, and is perhaps a sign that an effort is being made to compensate for the lack of proximate space by creating virtual space. If a user treats email in the same way as they might treat an instant messaging product or IRC, it is reasonable to suggest that a virtual space is socially created, as the application is used both interactively and synchronously. On the other hand, if the user considers their actions to be analogous to sending a letter or a telegram, that is, asynchronously, it is unlikely that a virtual space has been created. Likewise, sending an email message to a document delivery service or similar automatic reply service does not constitute a virtual space, because it is not being used to support social interaction.

4. SITUATIONS

Goffman defines situations as bounded by space and time, limited both temporally and physically. However, Meyrowitz (1985, 1990) defines situations as patterns of access to social information. For Meyrowitz, situations are bounded not by physical and chronological limits, but by the ability to access information about others. The rationale for this definition is that information from and about other people is what guides our behaviour in our interactions with them. Goffman (1959: 106) himself briefly alluded to the “barriers to perception” which bound regions, but considered only physical barriers such as walls and doors. While this observation of Goffman’s tends to “get lost” in most discussions (Meyrowitz, 1985: 36), its importance should not be underestimated. If, as in Goffman’s example, a performance is given in a room bounded aurally by thick glass, the audience outside the glass wall can still see what is happening in the room. Thus, “mutual monitoring” (Goffman, 1963: 18) can occur visually between those on either side of the glass wall. Likewise, that mutual monitoring is possible aurally among workers in partitioned offices is painfully obvious to those who work in such environments, in which they can hear but cannot see each other. In both these cases, the situational boundary depends not on the physical structures in place but on the availability of social information. Clearly then, information access is an important aspect of situations and thus of behaviour. However, it is possible to obtain information about others via non-situational means, and there are many ways to obtain social information. Information can pass through indirect channels, through mutual friends and acquaintances, even through the mass media. However, when analysing situations we must limit ourselves to that which is synchronous and interactive.
Information obtained asynchronously, that is, via delayed mechanisms, may even have been obtained directly from those in the present situation, but this does not extend the situational boundary to include the occasion in which the information was transmitted. Rather, it must be recognised that information can be carried from one situation to another – our experiences in past situations can affect future situations. Otherwise, it would be impossible to avoid the conclusion that life is one giant, all-encompassing situation. Thus, the patterns of access to information in which we find ourselves extend beyond the boundaries of the situations within which we interact. If situations cannot be defined by physical space, and if their boundaries are not defined by information patterns, we are left with the dilemma of having no definition of their boundaries. This dilemma can be solved by taking virtual space into account, which allows us to think about situations in Goffman’s terms: bounded by space, albeit virtual space. Parks and Floyd (1996) assert that “cyberspace is simply another place to meet”. Situations can occur within mediated environments, as long as virtual space is created. Situations are also bounded by time: they begin when mutual monitoring starts, and end when mutual monitoring is ended by the second-last person leaving. This implies that situations occur in synchronous CMC. Referring to Figure 2, this places situations in the interaction space that is created by synchronous, interactive applications. Goffman makes a distinction between situations and the encompassing concept of the “social occasion” (Goffman, 1963: 18), which provides the context for the situation. The IRC party documented by Danet et al. (1997), in which a number


IADIS International Conference on Internet Technologies & Society 2010

of IRC participants simulated a party that culminated in the simulated smoking of marijuana, is an example of an online social occasion. In order to explore online situations further, an analysis of one such synchronous, interactive chat service – an Internet BBS – is conducted in the next section.

4.1 Online Situations, Social Occasions, and Behaviour Settings

There are three methods by which BBS users can send messages to each other. The first is to send private “eXpress” messages, which are transmitted directly to the recipient. These messages can only be sent to one recipient at a time, and appear on the recipient’s screen immediately after having been sent. Thus, express messages can only be sent to users who are currently logged in. The second method is to post messages in the public discussion fora, or “rooms”. A large number of fora are available, including light-hearted topics such as “Babble” and “Humor”, technical topics such as “Intel PCs and Clones” and “Network Design and Administration”, and serious discussions such as “Gender Issues” and “Political Theory and Ideology”. There are currently 197 fora, a number that fluctuates as fora are periodically created and removed according to demand. Messages posted in these areas can generally be identified by the poster’s username, although a small number of fora allow an anonymous option, usually if the forum topic is a sensitive one. Some areas can also be accessed only by application and are not publicly available. The third method is to send private mail messages that can only be read by the recipient. Such messages are stored semi-permanently and, in contrast to express messages, can be sent whether or not the recipient is logged in. The following analysis is based on a log of messages posted to the “Babble” forum on the BBS. According to the forum description, Babble’s purpose is for “light conversation, inane chatting, socializing, unstructured role-playing, and just about everything else”. Although messages posted to the forum are saved for later viewing, the forum is analogous to ephemeral chat services like IRC (Internet Relay Chat). A maximum of 150 messages are stored, so that the 151st message posted causes the deletion of the 1st message.
In this way only the 150 most recent messages are stored by the BBS. The forum also tends to flow rapidly; the transcript analysed here contains approximately 120 messages, all posted during a 50-minute period. Thus, messages do not persist for very long the way they do in, for example, web-based discussion fora or Usenet groups. During the time logged, 19 different users posted messages. The highest number of messages posted by any single user was 23, and the lowest was 1. The median number of messages posted was 4, and the mean was 6. Five users tended to dominate the discussion, posting 10 or more messages each, while 13 users posted five or fewer messages. While the messages analysed here were posted pseudonymously, the names have been changed in this article. It is often possible to trace the real-world identities of the users, either through personal profiles they themselves make available, or perhaps through other fora. It is necessary to determine exactly what constitutes a situation in this example. The BBS environment, including the numerous discussion fora and the express message facility, is not a situation. It is impossible for users who are simply logged in to the BBS, and not participating in any particular exchange with other users, to monitor or to be monitored by others. The term behaviour setting, as Goffman used it, is appropriate to describe the BBS as a whole. Within such a setting, it is inevitable that smaller social occasions will occur, just as they would in a shopping mall containing hundreds of people. Goffman (1963: 18) uses the term “gathering” to refer to two or more people who are in each other’s immediate presence. When a gathering comes together, the space within which the gathering collects is the situation. Situations do not exist in and of themselves, but are created when a gathering forms – or when “mutual monitoring” (Goffman, 1963: 18) of each other begins.
Likewise, situations cease to exist when the gathering disbands, that is, when mutual monitoring stops. Thus, merely being logged in to the BBS might place you in a behaviour setting but does not involve you in any particular gathering. When users log in they are placed in the Lobby, a message area used for general announcements from the BBS staff to all users. Normal users cannot post to the Lobby, and it is not used for social interaction. Thus, immediately after logging in a user is not being monitored by others, and is not monitoring anybody else, and is therefore not present in any particular situation.
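The fixed-size message store described earlier in this section – 150 messages, with the 151st post causing deletion of the 1st – behaves like a simple ring buffer. The following sketch is purely illustrative (the BBS's actual implementation is not known; `ForumBuffer` and its method names are hypothetical):

```python
from collections import deque

# Hypothetical model of the Babble forum's message store: a fixed
# window of 150 messages in which posting the 151st deletes the 1st.
FORUM_CAPACITY = 150

class ForumBuffer:
    def __init__(self, capacity=FORUM_CAPACITY):
        # A deque with maxlen silently evicts the oldest entry on overflow.
        self.messages = deque(maxlen=capacity)

    def post(self, author, text):
        self.messages.append((author, text))

    def transcript(self):
        # Only the `capacity` most recent messages are ever available.
        return list(self.messages)

forum = ForumBuffer()
for i in range(1, 152):  # post 151 messages
    forum.post("SomeUser", "message %d" % i)

print(len(forum.transcript()))   # 150
print(forum.transcript()[0][1])  # message 2 (message 1 was evicted)
```

This eviction behaviour is why the transcript analysed below covers only a rolling window of recent conversation rather than the forum's full history.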


Similar circumstances may occur in other environments, such as MUDs. If anything, a MUD is a richer environment than the BBS, and is perhaps the most comprehensive virtual space created by textual CMC. Such a space typically consists of a richly described virtual world, complete with buildings, roads, and implements that can be manipulated by the characters who populate it. While the BBS uses spatial metaphors, it does not engage the imagination to anywhere near the degree that MUDs do. It is not until the user enters a virtual space populated by at least one other user that they become present in a situation. When a user posts an introductory message to the BBS’s Babble room, they join the gathering, if one is already present. A critical difference of this virtual space is that it is not possible to tell whether somebody is present if they do not announce their presence. It is somewhat like entering a pitch-black room – nobody knows you’re there unless you make a sound, and you don’t know anybody else is there until they make a sound, either. Thus, it is customary to announce your presence, as in Figure 3.

Feb 16, 20:17 from Archer
who's bumping uglies in here?
[Babble> msg #23158290 (114 remaining)] Read cmd -> Next
...
Feb 16, 20:17 from Johnny
*bumps Archie's uglies*
[Babble> msg #23158293 (111 remaining)] Read cmd -> Next

Figure 3. Transcript excerpt showing presence announcement

The first message serves to announce Archer’s presence in the room, and also declares that he has no knowledge of others present. Johnny responds, albeit in a playful manner, both acknowledging Archer and declaring his own presence. In the second example (Figure 4), Daydreamer “wanders back in” to the room. This illustrates a key point: that online gatherings, and thus the situations they occur in, can persist after individuals leave. Just as it is possible to leave and return to an offline situation – for example, temporarily moving to the kitchen during a family gathering – the same can be achieved online.

Feb 16, 2004 20:44 from Daydreamer
*wanders back in*
[Babble> msg #23158356 (48 remaining)] Read cmd -> Next

Figure 4. Transcript excerpt showing situation persistence

Likewise, it is typical for users to announce their intention to leave the room, as in Figure 5.

Feb 16, 20:53 from Chiz
buh bye babble
[Babble> msg #23158370 (34 remaining)] Read cmd -> Next

Figure 5. Transcript excerpt showing leaving announcement

There is no way of telling whether other users are lurking in the forum and not posting. It is difficult on the BBS, as with most forms of CMC, to know exactly who is in the audience. Other research has suggested that identity play is likely to be much more prevalent in personal or recreational Internet use, and this difficulty in identifying the audience leads to pseudonymous or anonymous Internet use (Dell and Marinova, 2002). This is certainly demonstrated in the BBS, where nearly all participants use pseudonyms. Occasionally, when the spontaneity of interaction breaks down, users resort to “verb emoting” (Chenault, 1998) to break the silence. Ostrow (1996) describes difficulties in maintaining spontaneous involvement, once one becomes conscious of the difference between “where we are experientially and where we feel situational pressure to be”. Further, if we are conscious of an insufficient involvement in a situation, our


ability to act spontaneously is hindered. Behaviour becomes contrived as the individual attempts to demonstrate appropriate involvement, as in Figure 6.

Feb 16, 20:21 from Thoriem
*listens to di.fm* this is rockin'
[Babble> msg #23158307 (97 remaining)] Read cmd -> Next

Feb 16, 20:22 from Confused
*watches Spike TV*
[Babble> msg #23158308 (96 remaining)] Read cmd -> Next

Figure 6. Transcript excerpt showing demonstrations of appropriate involvement

By describing offline activities, such as listening to the radio or watching TV, users demonstrate that the BBS is not the only involvement in which they are engaged. In other words, they are demonstrating that they are suitably engaged in activity, albeit outside the BBS. Such behaviour also confirms one’s presence in the situation. Messages such as these are sometimes sent when the user is not playing a central part in the interaction. This is necessary because the user has no physical presence in the situation at all – one is present only as long as one is posting messages. Thoriem is a good example of this type of action. She returns to the situation after a brief absence at 20:13, continuing a previous conversation. Her involvement dwindles, however, and by 20:22 all threads in which she is involved have ended. After a period of five minutes, during which she is not addressed by any of the other participants, she posts the message illustrated in Figure 7, simulating the polite cough one might utter when trying to attract attention.

Feb 16, 20:27 from Thoriem
*cough*
[Babble> msg #23158327 (77 remaining)] Read cmd -> Next

Figure 7. Transcript excerpt showing a “polite cough”

A further period of 10 minutes passes, at which point she posts another verb emote message, asserting that she is still present. This time, the message is more provocative (Figure 8).

Feb 16, 20:37 from Thoriem
*runs amok*
[Babble> msg #23158345 (59 remaining)] Read cmd -> Next

Figure 8. Transcript excerpt showing unsuccessful attention grabbing

Another 17 minutes pass, during which Thoriem is not addressed. She tries a third time to catch the attention of other users. By handing out cookies, she simulates an action that directly involves others, and a response is received from QTip (Figure 9). A user’s having to consciously remind others of their presence could serve to heighten their awareness of their peripherality in a situation. In an online environment, a user’s involvement is indicated through the messages they send and receive. Thoriem’s behaviour can be explained by her feeling insufficiently involved – what is sometimes referred to as “being at a loose end”. Thoriem is able to appeal to others to involve her in the situation because people have an obligation to be accessible in Babble, just as people have an obligation to be accessible for “face engagements” (Goffman, 1963: 104) in offline situations. No infractions of this requirement occurred during the logged period. In itself this cannot be taken as confirmation of the rule, due to the relatively brief period involved. However, it is noted from other observations that infractions occur very rarely, and that users generally do observe the requirement to be accessible. Whether this obligation extends beyond Babble is unclear. Users who are not participating in any particular forum can still be contacted by eXpress messages, and not responding to such messages can cause offence. The system allows users to prevent eXpress messages being sent, perhaps reflecting that many users


do not perceive the wider BBS as a situation, and that choosing not to receive such messages is acceptable. The “disable eXpress” option provides a convenient mechanism by which the awkward task of declining an unwanted engagement is avoided, analogous to the offline act of wearing dark glasses to avoid making eye contact with others.

Feb 16, 20:54 from Thoriem
*passes out sugar cookies*
[Babble> msg #23158371 (33 remaining)] Read cmd -> Next
...
Feb 16, 20:57 from QTip
OH MAN No more damn sugar cookies Wife made about 6 dozen and took about 2, not 2 dozen, TWO! on her overnight casino trip.
[Babble> msg #23158374 (30 remaining)] Read cmd -> Next

Figure 9. Transcript excerpt showing successful attention grabbing

Goffman’s term for this kind of act is “civil inattention” (Goffman, 1963: 83). This is the manner in which people, when unacquainted but in the proximity of each other, convey by their actions that the other “does not constitute a target of special curiosity or design” (Goffman, 1963: 84). Civil inattention avoids the aggressive, threatening act of staring at somebody (hence the dark glasses, referred to above). It seems that such elements of face engagements are felt so strongly that they translate into a virtual space, where physical eye contact cannot be made, as in Figure 10.

Feb 16, 20:17 from Confused
u looked in the mirror?
[Babble> msg #23158282 (122 remaining)] Read cmd -> Next

Feb 16, 20:17 from Thoriem
No mistake. I just said...never mind.
[Babble> msg #23158283 (121 remaining)] Read cmd -> Next

Feb 16, 20:17 from Chiz
*stare* are you calling me ugly?
[Babble> msg #23158284 (120 remaining)] Read cmd -> Next

Figure 10. Transcript excerpt showing face engagement

Feb 16, 20:27 from Johnny
BL me, ASAP.
[Babble> msg #23158328 (76 remaining)] Read cmd -> ...

Feb 16, 20:27 from Black Leather
*humps JY into a fine paste*
[Babble> msg #23158330 (74 remaining)] Read cmd -> Next

Figure 11. Transcript excerpt showing exaggerations for comic effect

Goffman remarks that face engagements are often conversational; this applies to Babble, as the main focus of activity is conversation. In fact, “[m]any face engagements seem to be made up largely of the


exchange of verbal statements, so that conversational encounters can in fact be used as the model” (Goffman, 1963: 89-90). While verbal exchanges are important, Goffman does not discount non-verbal gestures within the face engagement; however, these have much less relevance in Babble. While non-verbal actions are often communicated using verb emoting, such as Louise’s laugh illustrated below, they are often exaggerated for comic effect and are not intended to be taken literally, as shown in Figure 11. In the final message we see Black Leather “humping [Johnny] into a fine paste”, in response to Johnny’s earlier entreaty, “BL me, ASAP”. Black Leather does not intend this to be taken literally, and it was not interpreted as such by Johnny. It should be noted, however, that each individual CMC service has its own culture and norms, and that in other services such an act might be taken literally. Users of MUDs, for example, often tend to interpret verb emoting far more literally.

4.2 Situational Start and End-Points

Identifying the start and end-points of the situation that occurs within Babble may seem problematic because of its longevity. While situations are defined as continuing until the second-last person leaves, this may not happen in an online environment. A similar dilemma surely faces researchers studying other chat services such as IRC – a single chat session might last for days, if not indefinitely, as new users join to replace users who have left. For an offline analogy, consider a doctor’s waiting room. Patients are seen and leave; new patients arrive in the interim. Eventually, all of the people in the room have been replaced, but at what point has the situation changed, if at all? In such cases, it is clear that the situational definition may change over time. However, the situational spatial boundaries remain fixed. In the case of Babble, the forum is the space within which the social occasion exists. The social occasion may evolve, in the same way that offline social occasions may also evolve. In fact, it is a requirement of many occasions that such evolution take place. Consider a hypothetical example of a small wedding ceremony. The groom, groomsmen and celebrant may be present at first. The nature of the occasion changes as guests arrive. Further changes occur when the bride arrives. Finally, when the bridal party leave, followed by the celebrant, the guests are the only people remaining. As with offline situations, information can be carried from one situation to another, whether those situations occur within the BBS or outside of it. Participants known to each other offline can exchange information that then influences their behaviour in online situations, as in Figure 12.

Feb 16, 20:14 from Chiz
Lou you've seen me in a hotel...
[Babble> msg #23158270 (134 remaining)] Read cmd -> Next
...
Feb 16, 20:14 from Louise
Chiz, i know what a night heh
[Babble> msg #23158273 (131 remaining)] Read cmd -> Next

Figure 12. Transcript excerpt showing the influence of offline information

Here, interaction between Chiz and Louise is affected by information about each other from a previous, offline encounter. This example also serves to illustrate how, even though a number of users are present in the situation, smaller groups of users tend to interact together. While Chiz and Louise are involved in conversation with each other, Thoriem and Confused are involved in a separate thread of their own. Such interaction is unavoidably visible to all others present in the situation, however – a similarity Babble shares with most, if not all, multi-user CMC systems. This is a key difference between situations in CMC and offline situations. As communication within channels is transmitted to each user present in the channel,


it is difficult for distinct and sustained group engagements among a subset of channel members to form. Throughout the whole of the logged session, at no point do separate groups form for very long. Rather than forming separate groups within the situation, if a group of users wish to have a private conversation excluding others in the forum, private eXpress messages can be exchanged. A similar approach can be witnessed in IRC, where private messages can be exchanged, or a new channel can be formed – the equivalent of moving to another room as people might do offline. Note that in both the BBS studied here and IRC, other users are not notified about the existence of private messages – only the sender and recipient know the message was ever transmitted. While in face-to-face interaction an encounter within a situation communicates information to the situation as a whole (Goffman, 1963: 103), within IRC such an encounter can be carried out entirely in secret. Further, eXpress messages on the BBS can be sent to any user, not just those in the same forum or channel. Thus, they can create entirely new situations. This is also true in IRC, where users can participate in several IRC situations simultaneously, either by having multiple copies of their IRC software running, each in a separate window, or by being present in a number of situations created by the exchange of private messages. Being present in multiple situations face-to-face would be much more difficult; there is truth in the cliché that you can’t be in two places at once. There is thus a dimension of multiplicity to virtual situations that does not exist in face-to-face interaction.

5. CONCLUSION

Goffman considered situations as bounded in both space and time. The analysis above demonstrates that, by considering online social interactions in terms of virtual space, it is possible to apply his theoretical framework to understand social interaction in synchronous, interactive CMC. The advantage of this approach is that Goffman’s theoretical legacy is thorough, mature and widely used; thus, it can provide a sound theoretical foundation for the analysis of online social interaction. The Internet is not disconnected from physical reality. Individuals interact socially with the purpose of reducing uncertainty and developing affinity; they will do this regardless of the medium, be it the Internet, the telephone or otherwise. The use of mature, established social theory allows the analyst to recognise this fact. Situation analysis provides a rich field for the analysis of online social interaction, and its application is a step towards a comprehensive approach to understanding online behaviour.

REFERENCES

Aarsand P.A., 2008. Frame switches and identity performances: Alternating between online and offline. Text and Talk, Vol. 28, No. 2, pp. 147-165.
Baltes B.B. et al., 2002. Computer-Mediated Communication and Group Decision Making: A Meta-Analysis. Organizational Behavior and Human Decision Processes, Vol. 87, No. 1, pp. 156-179.
Barbatsis G. et al., 1999. The Performance of Cyberspace: An Exploration Into Computer-Mediated Reality. Journal of Computer-Mediated Communication, Vol. 5, No. 1.
Bargh J.A., 2002. Beyond Simple Truths: The Human-Internet Interaction. Journal of Social Issues, Vol. 58, No. 1, pp. 1-8.
Blanchard A., 2004. Virtual Behavior Settings: An Application of Behavior Setting Theories to Virtual Communities. Journal of Computer-Mediated Communication, Vol. 9, No. 2.
Bordia P., 1997. Face-to-face versus computer-mediated communication: A synthesis of the experimental literature. The Journal of Business Communication, Vol. 34, No. 1, pp. 99-120.
Bortree D.S., 2005. Presentation of Self on the Web: an ethnographic study of teenage girls’ weblogs. Education, Communication and Information, Vol. 5, No. 1, pp. 25-39.
Chen Y.N.K., 2010. Examining the presentation of self in popular blogs: a cultural perspective. Chinese Journal of Communication, Vol. 3, No. 1, pp. 28-41.
Cheung C., 2000. A Home on the Web: Presentations of Self on Personal Homepages. In: D. Gauntlett (Ed), Web.Studies: Rewiring Media Studies for the Digital Age. Arnold, London, UK, pp. 43-51.


Danet B. et al., 1997. “Hmmm... Where’s That Smoke Coming From?”: Writing, Play and Performance on Internet Relay Chat. Journal of Computer-Mediated Communication, Vol. 2, No. 4.
December J., 1995. Transitions in Studying Computer-Mediated Communication. Computer-Mediated Communication Magazine, Vol. 2, No. 1.
Dell P. & Marinova D., 2002. Erving Goffman and the Internet. Theory of Science, Vol. 24, No. 4, pp. 85-98.
Dell P. & Marinova D., 2008. Acting Your Age: Online Social Interaction and the Elderly. In: M. Rzadkowolska (Ed), Ku Przyszłości. Wydawnictwa Akademickie i Profesjonalne, Warsaw, Poland, pp. 207-226.
George G. & Sleeth R.G., 2000. Leadership in Computer-Mediated Communication: Implications and Research Directions. Journal of Business and Psychology, Vol. 15, No. 2, pp. 287-310.
Goffman E., 1959. The Presentation of Self in Everyday Life. Anchor Books, New York, USA.
Goffman E., 1963. Behavior in Public Places. The Free Press, New York, USA.
Jones Q., 1997. Virtual-Communities, Virtual Settlements & Cyber-Archaeology: A Theoretical Outline. Journal of Computer-Mediated Communication, Vol. 3, No. 3.
Jones R.H., 2004. The Problem of Context in Computer Mediated Communication. In: P. LeVine & R. Scollon (Eds), Discourse and Technology. Georgetown University Press, Washington D.C., USA, pp. 20-33.
Jones S.G., 1995. Understanding Community in the Information Age. In: S.G. Jones (Ed), Cybersociety: Computer-mediated communication and community. Sage, Thousand Oaks, California, USA, pp. 10-35.
Karlsson A.M., 1998. Selves, Frames and Functions of Two Swedish Teenagers’ Personal Homepages. 6th International Pragmatics Conference, Reims, France.
Kim J.Y., 2000. Social Interaction in Computer-Mediated Communication. Bulletin of the American Society for Information Science, February/March, p. 15.
Liu G.Z., 1999. Virtual Community Presence in Internet Relay Chat. Journal of Computer-Mediated Communication, Vol. 5, No. 1.
Mabry E.A., 2001. Ambiguous Self-Identification and Sincere Communication in CMC. In: L. Anolli, R. Ciceri & G. Riva (Eds), Say not to say: New perspectives on miscommunication. IOS Press, Amsterdam, The Netherlands, pp. 247-264.
Meyrowitz J., 1985. No Sense of Place. Oxford University Press, New York, USA.
Meyrowitz J., 1990. Redefining the situation: Extending dramaturgy into a theory of social change and media effects. In: S.H. Riggins (Ed), Beyond Goffman: Studies on Communication, Institution and Social Interaction. Mouton de Gruyter, Berlin, Germany, pp. 65-98.
Miller H., 1995. The Presentation of Self in Electronic Life: Goffman on the Internet. Embodied Knowledge and Virtual Space Conference, London, UK.
Miller H. & Mather R., 1998. The Presentation of Self in WWW Home Pages. Internet Research and Information for Social Scientists ‘98, Bristol, UK.
Ostrow J.M., 1996. Spontaneous involvement and social life. Sociological Perspectives, Vol. 39, No. 3.
Parks M.R. & Floyd K., 1996. Making Friends in Cyberspace. Journal of Computer-Mediated Communication, Vol. 1, No. 4.
Samarajiva R. & Shields P., 1997. Telecommunications networks as social space: implications for research and policy and an exemplar. Media, Culture and Society, Vol. 19, pp. 535-555.
Soukup C., 2006. Computer-mediated communication as a virtual third place: building Oldenburg’s great good places on the world wide web. New Media and Society, Vol. 8, No. 3, pp. 421-440.
Walker K., 2000. “It’s Difficult to Hide It”: The Presentation of Self on Internet Home Pages. Qualitative Sociology, Vol. 23, No. 1, pp. 99-120.
Walther J.B., 1992. Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, Vol. 19, No. 1, pp. 52-90.
Walther J.B., 1996. Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, Vol. 23, No. 1, pp. 3-43.


ELEMENTS OF NIGERIA’S PREPARATION TOWARDS ADVANCED E-DEMOCRACY

Steve Nwokeocha, PhD
Director of Professional Operations, Teachers Registration Council of Nigeria (Federal Ministry of Education)
Plot 567 Aminu Kano Crescent, Wuse 2, Abuja
+234-7064480579

ABSTRACT

Nigeria is an emerging force in the application of electronic systems in governance, education, commerce and other aspects of life, perhaps second only to South Africa in Sub-Saharan Africa. However, global rankings place the country among the least developed in terms of electronic governance, and there is a preponderance of literature articulating the reasons for this low ranking and what the country must do to advance. This paper does not question the ranking, but provides material that gives hope that the country is serious about the idea of electronic governance, and may indeed transition faster than estimated into the ranks of countries that are advanced in electronic governance. With specific reference to electronic democracy – the participation of citizens in governance aided by information and communication technology (ICT) – “the clouds are gathering and the storm is not far away”: Nigeria will soon launch fully into the use of Electronic Voting Systems and other applications that have been used as criteria for the assessment of countries’ e-political participation. This paper therefore looks at some of the foundations currently being laid in Nigeria which promise rapid transformation of the country into an advanced electronic society. These foundations include the development of an appropriate legal framework for ICT; electoral system reforms; readiness to pass the freedom of information bill; full integration of ICT in the nation’s education curriculum and systems; up-scaling of institutional capacity through the establishment of functional national ICT agencies and vanguards; the existence of innovative electronic practices, some of which have earned world recognition; and the spectacular emergence of GSM in the country and the exploitation of this tool by citizens, among others.
The paper further reviews how, in March 2010, citizens effectively used a combination of the "limited electronic systems" in Nigeria to peacefully overcome the worst threat to the corporate existence of the country since the return of civil rule in 1999. The paper then points out the lessons that could be drawn from the Nigerian case and makes recommendations for sustaining the march towards e-political participation.

KEYWORDS

Electronic Governance, Democracy, Nigeria.

1. INTRODUCTION

For most of the year 2010, the headline in virtually all print and electronic media in Nigeria was what is now commonly called the historic amendment of Nigeria's 1999 Constitution (the first since the return of civil rule in Nigeria in 1999) and the use of an electronic voting machine by the National Assembly in the amendment of the Constitution (Alechenu, 2010; This Day, 2010; Awowole-Brown, 2010). It was widely reported that the electronic voting machine had been procured as far back as 1999 for use by the National Assembly but was abandoned because politicians then felt that it could be used to rig elections (Omowa, 2006; Umonbong, 2006; Albert, 2009; INEC, 2010; Nigerian Muse, 2010). However, the same National Assembly that rejected the Electronic Voting System (EVS) in 2006 has become the first to experiment with the idea, and has come out with a unanimous call for sustained use of the EVS and its extension to the 2011 general elections (Nigerian Television Authority, 2010). This paper discusses Nigeria's conscientious strides towards e-political action, covering the following issues: (a) the development of appropriate legal frameworks for ICT, Freedom of Information (FOI) and electoral reform; (b) integration of ICT in the curriculum at all levels of the education system; (c) the laying of strong institutional capacity through the creation of vibrant national ICT agencies and vanguards; (d) encouragement of innovation in ICT, especially online services, some of which have earned international recognition; (e) the GSM revolution; and (f) the role of the mass media. The paper further presents a case study of Nigeria's handling of the worst political developments in recent times, which threatened the corporate existence of the country, and concludes by stating the strategic lessons learned from the Nigerian case and recommendations on how the tempo towards e-political action could be accelerated.

2. CONCEPTUAL ANALYSIS

Electronic Democracy, or e-Democracy for short, refers to the participation of citizens in the governance of their country with the aid of information and communication tools such as the internet, mobile and fixed telephones, digital cameras, radio, television, etc. This participation covers the acquisition, utilization and dissemination of information as well as the exercise of the freedom of choice or opinion on political issues as part of fundamental human rights. In this sense, e-Democracy is an integral part of the concept of Electronic Governance (e-Governance), which could be defined as "the use of any and all forms of information and communication technology (ICT) by governments and their agents to enhance operations, the delivery of public information and services, citizen engagement and public participation, and the very process of governance" (Curtin, 2003).

3. NIGERIA'S e-READINESS RANKING

Some scholars and international organisations have ranked countries (including Nigeria) with respect to their level of e-Governance (World Economic Forum, 2003; United Nations, 2003; Economist Intelligence Unit, 2006; United Nations, 2007; World Bank, 2007; Ngulube, 2007; LeBlanc et al, 2008; Albert, 2009). This section of the paper presents a brief review of the rankings. In his review of the literature on the e-Governance status of countries in Africa, Albert (2009) reported rankings by Tankoano and Docktor which were based on certain aspects of e-Governance. For instance, Tankoano, in a 2001 ranking, presented statistics showing that the African continent generally has the least number of telephone lines, televisions and computers per 200 hectares in the world. Docktor, in another 2001 study, classified countries based on connectivity access, e-vision and planning, government implementation of e-governance, and information technology (IT) students in tertiary education. The reports showed that Nigeria ranked "Low" in level of computer penetration, "Low" in terms of e-leadership, "Medium-High" for government web pages, and "High" for number of IT students in tertiary education. Another very important study, titled "TeleDemocracy in Developing Countries: A Focus on Sub-Saharan Africa", was undertaken by a team of scholars based in the United States of America (LeBlanc et al, 2008). The study underscored the importance of a free press in the evolution of e-governance.
The scholars defined TeleDemocracy as "the use of ICT to increase citizen participation in democratic processes such as voting, polling and education that serve the public interest." They observed that, for decades, poor governance and governance systems militated against the development of viable healthcare, telecommunications, transportation, education and politics in Sub-Saharan Africa, but cited reports by other authorities, such as Mbarika et al, the International Telecommunications Union (ITU) and the World Bank, attesting that internet connectivity, the use of computers, the diffusion of wireless communications, and the networking of households, schools, workplaces and libraries have witnessed rapid expansion and have broadened citizens' participation in governance in the past few years. Ngulube (2007) also devoted his work to what he called the "Nature and Accessibility of E-Government in Sub-Saharan Africa". A great deal of this study discussed the models of e-government found in the literature. He summarized these models into three types, namely the "definitional, evolutionary and stakeholder-oriented". He observed that the evolutionary approach has dominated the theoretical frameworks and focuses on the "stages of growth" of e-government. He further indicated that although the approach seems mechanistic, it is useful in evaluating progress on e-government in a given context. According to his findings, Nigeria belongs to the second stage of e-government, which embodies dynamic online information although the communication is still one-way. Also, a report by Awoleye et al (2008), who conducted research to assess what they called "e-Governance Resource Use in South-Western Nigeria", depicted Nigeria as steadily advancing in the use of computers and the internet.


4. NIGERIA'S PREPARATION FOR ADVANCED e-DEMOCRACY

The studies cited above show clearly that e-governance does not exist in isolation. It requires an environment made conducive through legislation, education, political reform, infrastructure, etc. in order to thrive. This section attempts to summarize how the nation is transforming in these various sectors and thereby pushing steadily and promisingly towards an advanced form of e-governance.

4.1 The Legal Frameworks: Freedom of Information Bill, Electoral Reforms and IT Policy

Nigeria is making historic attempts to put certain legislation in place which will impact directly on the level of e-governance in the country. Three such instruments illustrated here are the Freedom of Information (FOI) Bill, the electoral reforms and the National IT Policy. The FOI Bill represents the most sustained struggle by human rights activists, a conglomeration of interest groups and the general public in Nigeria since the civilian administration came into power in May 1999 (Federal Republic of Nigeria, 1999; Abati, 2007; Freedom of Information Coalition, 2010; Media Rights Agenda, 2000, 2003). The struggle gave rise to the formation of a coalition of over 133 national and international non-governmental organisations led by the Media Rights Agenda (MRA). The MRA, the Nigerian Union of Journalists and other members of the coalition in 1999 ignited the resistance to state monopoly of information and caused the National Assembly to begin hearings on the FOI Bill. The National Assembly has now given very strong assurances that it will soon pass the Bill. In the area of electoral reform, the National Assembly in March 2010 amended the nation's 1999 Constitution for the very first time. The amendments have given the Independent National Electoral Commission (INEC) and Nigeria's judiciary financial and administrative autonomy, among others (Idonor, 2008; Akintunde, 2009; Golu, 2009; Iriekpen, 2009; Ogunmade, 2010). Nigeria's interest in e-governance could, however, best be seen in the fact that it articulated, published and began implementing a National IT Policy as far back as 1992 (Federal Ministry of Science and Technology, 2001).

4.2 Up-scaling of Institutional Capacity: the Case of NITDA, NeGSt, NCC, NIMC, CPN, etc.

Nigeria's ICT landscape is dotted with several institutions, either established recently or existing previously but now strengthened by the Federal Government, to act as catalysts in the quest for advanced e-governance. Such institutions, most of which are wholly owned by the government while others are public-private partnerships, have attracted tremendous budgetary allocations from public funds. Each institution has specific mandates under law and covers critical aspects of the march towards full e-governance. The National Information Technology Development Agency (NITDA) is the key regulatory agency for ICT in Nigeria and is accountable to the Federal Government through the Federal Ministry of Science and Technology (NITDA, 2010). The NITDA went a step further to set up a public-private partnership, the National e-Government Strategies (NeGSt), through which it puts into practice most of the nation's ICT ambitions. The NITDA is currently networked with all notable e-governance bodies across the globe and is thereby fast-tracking the development of e-governance in Nigeria. The other agencies helping to boost electronic governance in Nigeria include the Nigerian Communications Commission (NCC, 2010), the National Identity Management Commission (NIMC, 2010), and the Computer Professionals Registration Council of Nigeria.

4.3 Integration of ICT in the National Education Curriculum at All Levels

Worldwide, education is regarded as the most important agent of change and holds the key to national development. This truism is taken seriously by Nigeria and informed the integration of ICT into the curricula from primary to university level as a core course. The status of a core course means that a student is unlikely to progress in the education system without satisfactory performance in ICT. The Minister of Education in 2009 went a step further and directed the Governing Boards of all tertiary educational institutions in Nigeria to lay off academic staff who failed to show proficiency in the appreciation and use of ICT by the end of the year (Egwu, 2009). In the same year, the National Council on Education (NCE), the highest policy-making body on education in Nigeria, approved a National ICT in Education Policy (Federal Ministry of Education, 2009). The NCE indicated that the Policy was made to use education effectively to achieve the various purposes of e-governance in Nigeria. Among these purposes are: (a) the Seven Point Agenda of the current Federal Government; (b) the National Economic Empowerment and Development Strategy (NEEDS); (c) the National Policy on Education; (d) the Roadmap for the Education Sector; (e) the National Information Technology Policy; and (f) the National Information Technology Education Framework.

4.4 E-Governance Innovations on the Rise in Nigeria

Rapid transformations from traditional to online services are on the rise in Nigeria as the message of e-democracy and the wider e-governance concept keeps spreading. The transformations have permeated so much of the fabric of national life that it is not possible to give a sufficient account in this sub-section. From the banking sector to health, immigration, education and commerce, the application of electronic tools, particularly interactive web-based services, has become a common experience. In the banking sector, the Central Bank of Nigeria (CBN) and other regulatory bodies have made electronic transactions indispensable. The CBN in 2010 issued an order that no individual or organisation in Nigeria may any longer issue a bank cheque exceeding ten million Naira (about 50,000 Euros). The Federal Government in 2009 also issued a directive nullifying the payment of any public money by cheque rather than through e-payment. This directive affects all payments of salaries and allowances to staff and all payments for contract jobs done in the public sector in Nigeria. After initial difficulties, all these orders and directives of Government are now operating smoothly. As mentioned earlier, in 2006 the Independent National Electoral Commission (INEC) introduced what it called the "Electronic Voting System" (EVS). According to INEC, the components of the EVS are the "Electronic Voters Register which had been in operation since 2002 and was used for compiling the voters register used in 2003 elections. The second component is Electronic Authentication, while the third is the Speedy Transmission of election results" (Umonbong, 2006; INEC, 2010).
Nigeria's National Assembly now uses electronic voting to conduct its business, and members of the Assembly described the EVS which they used in amending the Constitution of Nigeria in March 2010 as very transparent, accountable, speedy and convenient, with all members voting without leaving their seats for the very first time. In a guest lecture in London on Nigeria's electronic voting system, Umonbong (2006), on behalf of INEC, stated that Nigeria at that time had an estimated population of 120 million people (now over 150 million in 2010) and 60 million registered and eligible voters spread across 120,000 polling centres. The required electronic manning and supervision personnel were put at 500,000 officials, most of whom were temporary and ad-hoc staff recruited and trained often on the eve of elections. The country had 33 political parties and over 4,000 candidates vying for 1,458 seats in the National and State Houses of Assembly alone. All of these were challenges which INEC had to surmount or simplify with the use of the EVS. Other innovations pioneered by government agencies are reported by Wokocha and Nwokeocha (2009) and Nwokeocha (2010).
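As a back-of-the-envelope check, the logistics figures quoted from Umonbong (2006) imply an average of about 500 voters and just over four election officials per polling centre. The sketch below is illustrative only; the variable names and derived averages are this editor's, while the input figures are those quoted above.

```python
# Arithmetic sanity check of the INEC logistics figures quoted above
# (input figures from Umonbong, 2006; the averages are illustrative).
registered_voters = 60_000_000
polling_centres = 120_000
officials = 500_000

voters_per_centre = registered_voters / polling_centres
officials_per_centre = officials / polling_centres

print(voters_per_centre)               # 500.0 voters per centre
print(round(officials_per_centre, 2))  # 4.17 officials per centre
```

The averages make the scale of the EVS challenge concrete: each polling centre had to authenticate roughly 500 voters with only a handful of mostly ad-hoc staff.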

4.5 The Mobile Phone "Revolution" in Nigeria

Most commentators argue that no achievement or policy of the last civilian administration of Olusegun Obasanjo revolutionized life and connected Nigeria to the outside world more than the GSM did. Before 1999, it was unimaginable in Nigeria that over 70 million Nigerians would become proud owners of mobile telephones within a few years. Events snowballed to the present point where, for a sum worth less than a British pound, individuals pick up GSM SIM packs and start calling. The Nigerian Communications Commission (NCC, 2010) provided statistics on Nigeria's mobile phone subscribers (1999-2009), infrastructure deployment (2006-2008) and the companies' shares of telecommunications services (June 2009). The NCC statistics showed that Nigeria started with only a few thousand telephone lines in 1999, and by 2009 there were over seventy million lines. Teledensity was 0.45 in 1999 and 53.00 in 2009. Also, in June 2009, GSM's share of telecommunications services was 87.24%, while CDMA's was 10.65% and fixed lines' was 2.12%. Still in 2009, the three key telephone companies were private organisations, namely MTN, Globacom and Zain, with the following market shares: MTN 46.19%, Globacom 26.87% and Zain 24.74%. The other two were EMTS (a private company), with 1.76%, and the government-owned Mtel, with 0.44%. In terms of infrastructure deployment, in 2006 there were 39,234 kilometers of microwave radio coverage and 3,774 kilometers of fiber optics coverage; in 2007, microwave radio coverage was 57,454 kilometers and fiber optics coverage was 1,176 kilometers; in 2008, microwave radio coverage reached 103,632 kilometers and fiber optics 11,203 kilometers, with 12,857 base stations deployed.
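For readers relating the quoted indicators to one another: teledensity is conventionally expressed as active lines per 100 inhabitants, so the 2009 figure of 53.00 at a population of roughly 150 million implies about 79.5 million lines, consistent with the text's "over seventy million lines". A minimal sketch follows; the helper name and the 150 million population estimate are this editor's assumptions, not the NCC's.

```python
# Illustrative check of the NCC figures quoted above. Teledensity is
# taken as active telephone lines per 100 inhabitants (the usual
# ITU convention); the population figure is an assumption.

def implied_lines(teledensity: float, population: int) -> int:
    """Number of active lines implied by a teledensity figure."""
    return round(teledensity / 100 * population)

# Teledensity of 53.00 at an assumed population of ~150 million.
print(implied_lines(53.00, 150_000_000))  # 79500000

# June 2009 operator market shares (percent) should sum to 100.
shares = {"MTN": 46.19, "Globacom": 26.87, "Zain": 24.74,
          "EMTS": 1.76, "Mtel": 0.44}
print(round(sum(shares.values()), 2))  # 100.0
```

The operator shares quoted by the NCC do indeed sum to 100%, which is a useful internal-consistency check on the reported statistics.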

4.6 Nigeria's Mass Media: Getting More Interactive and Permeating the Nooks and Crannies of Nigeria

Radio and television in Nigeria have played a foremost role in pushing the frontiers of electronic communication. The power of the Nigerian press has historically been immense (Olutokun and Seteolu, 2001). Even during the era of the worst military dictatorships in Nigeria, many journalists fearlessly did their jobs and had no regrets about ending up in prison. Currently, radio and television stations have multiplied at such a rate that there is hardly any part of the country that is not sufficiently covered. The giant electronic media broadcast nationally and internationally via satellite and are accessible on the internet. In remote villages and towns where the absence of electricity has limited the use of television, the radio, which comes in different shapes and sizes including pocket and GSM-handset devices, has provided unmatched connectivity to national and international news. Most of Nigeria's electronic media have equally perfected the art of interactive programmes featuring live phone-in debates on critical issues. Many have hotline telephone numbers, so that in emergencies such as road accidents or natural disasters, individuals can instantly report live to the nation through the radio or television with their mobile phones about what they are seeing. All the leading print media in Nigeria have online versions and websites which instantly make their news available to Nigerians at home and abroad. Today, many Nigerians in the United States of America, Canada, Europe, Asia and other places are abreast of developments in Nigeria as the news unfolds. In turn, these Nigerians react effectively using the online tools and services made available by the media and the ones they have been inspired to create.

5. THE CASE OF HOW NIGERIANS UTILISED ICT TOOLS TO OVERCOME A THREAT TO CORPORATE EXISTENCE, JANUARY-MARCH 2010

Between November 2009 and March 2010, three key events shook Nigeria to its roots, and many feared that the nation could be heading towards collapse. First, Nigeria's late President, Umaru Yar'Adua, battled serious ill-health from November 2009 till March 2010. In November 2009, he left the country for Saudi Arabia for medical treatment. He did so without notice to any member of his Government, including the Vice President and the National Assembly. Each day that passed was like living in hell for Nigerians, because it was felt that a dangerous vacuum existed in the governance of the country. Second, as Nigeria was reeling in pain over the absence of the President, the international media broke the news that a young Nigerian student based in London had attempted to bomb an American airliner on December 25, 2009. The news jolted the country, and tempers rose in condemnation of the attempt, considering that Nigeria has no history or culture of international terrorism. The United States of America quickly followed up the incident by classifying Nigeria as a "country of interest" on terrorist matters, which placed Nigeria among countries that attract stricter immigration checks. Nigerians felt extremely disenchanted with this development and blamed it on the fact that the country did not have a President who could stand up and talk one-on-one with President Barack Obama to assure the Americans that Nigeria is in the fight against terrorism. Third, a bloody clash erupted in Jos, a very important and historic city in the central part of Nigeria. The clash led to a loss of lives which by March 2010 was put at over one thousand people. Many local and international commentators called it genocide and postulated about the causes of the violence.
While some argued that it was part of the usual clash between Christians and Muslims in Nigeria, others pointed out that the clash arose out of communal land disputes, poverty, and ignorance.


Arising from this chain of events, which created a grave political situation in Nigeria, the media once again played its historic role in galvanizing public opinion through its broadcasts and the online versions of its news and news analysis. Spontaneously, Nigerians came together under different umbrellas to mount pressure on all arms of government to appoint the Vice President of Nigeria as Acting President, deal decisively with the carnage in Jos, and launch high-level diplomacy to reverse the classification of Nigeria as a country of interest by the United States of America. Through the power of ICT tools, Nigerians were able to communicate, exchange ideas and quickly come under one of two umbrellas: either the Save Nigeria Group (SNG), which wanted the country to move on without the ailing President, or the "Pro-Yar'Adua Group" (PYG), which maintained support for the President even though no one had seen or heard from him for over two months. The SNG was led by the quintessential Nobel Laureate, Professor Wole Soyinka, and the vocal pastor of the Latter Rain Assembly, Tunde Bakare. In just a few weeks the SNG, owing to its immense influence on the polity, drowned out the voice of the PYG and consequently got most of what it asked for, including the appointment of the Vice President as Acting President by the National Assembly and other notable developments (Federal Republic of Nigeria, 1999; Edirin, 2010; Oluwole, 2010; Okoroma, 2010; Save Nigeria Group, 2010; Daily Trust, 2010; Chiedozie, 2010; Punch, 2010). The SNG achieved this feat through a combination of the power of mobile phones in Nigeria and the internet. During the crisis period, several websites emerged to articulate the situation and advocate radical action. The most prominent was the SNG website, which became a rallying point where most Nigerians got the latest news about the unfolding crisis and the next line of action.
People massively used Twitter to exchange ideas and posted photographs and comments on Facebook; others used their mobile phones to map centres where protest activities were unfolding, using phone cameras, webcams, Bluetooth and other communication technologies to distribute pictures and stories to individuals and groups and to post them to the internet. The feat accomplished by Nigerians using ICT tools was so remarkable that Cable News Network (CNN), on Saturday March 27, 2010 at about 2.00 GMT, in its "iReport" (a programme that gives updates on the latest online exploits around the world), broadcast the details of how Nigerians used ICT tools to effect major political change in the country between January and March 2010. After the death of the former President, the transition of the Vice President to the position of President of Nigeria was smooth and orderly.

6. STRATEGIC LESSONS FROM NIGERIA'S CASE

The lessons from the issues about e-governance, and particularly e-democracy, raised in this paper include the following: (a) Ranking countries' levels of e-governance based solely on "stages of development" is useful, but only to a limited extent. The preparations for, and manifestations of, e-governance may be more diffused and complex than can readily be ascertained by looking only at the number of computers, websites, online registration systems, etc.; the socio-cultural, political and environmental contexts ought to be taken into consideration as well. (b) Notwithstanding the limited availability of ICT infrastructure and other socio-economic and political constraints, a country like Nigeria has demonstrated strong commitment and political will towards enthroning electronic governance, and e-democracy in particular, as seen in the way credible e-governance initiatives are springing up in most spheres of national life. (c) The saying that "necessity is the mother of invention" is true of Nigerians: in the trying moments of January-March 2010 described above, Nigerians became ingenious and creative, which enabled them to achieve effective and successful results that far outweighed forecasts. (d) The existence of press freedom is critical for the advancement of e-democracy, because the checks and balances imposed on government by the press are a great stimulus for e-political action. In the Nigerian case, the press consistently set the agenda and released the most sensitive and factual information about the political situation in the country. This armed the citizens for the resistance. Even while the citizens slept, the press kept watch around the corridors of power and reported live most of the important stories as they developed.
(e) A country's quest for e-governance should be holistic, and priority must be accorded to the integration of ICT as a core course in educational curricula at all levels, considering the role of education as a primary foundation for nation-building.


7. CONCLUSION AND RECOMMENDATIONS

This paper has attempted to give an account of Nigeria's interest and participation in e-governance, particularly e-democracy. It is clear from the paper that Nigeria is transforming and transitioning towards the use of electronic systems to a greater extent than is often accounted for in the sweeping classification of countries according to "stages of e-governance". This fact calls for urgent and continuing updates of most of the reports on Nigeria and other countries, given that the ICT transformation is actually progressing at an awesome pace. Again, while the "stages of e-governance" model is useful for classifying countries with respect to certain criteria, a more pervasive and in-depth study and analysis of the trends in legal frameworks, educational policies and practices, institutional capacity, and case studies of innovation in a country, among others, is recommended.

REFERENCES Abati, Reuben, 2007. Obasanjo, National Assembly and the Freedom of Information Bill. The Nigerian Village Square, 06 May. http://www.nigeriavillagesquare.com/articles/reuben-abati/ (Accessed 19/03/2010). Akintunde, Akinwale, 2009. Osinbajo Challenges FG on Electoral Reform. This Day newspapers, 1 May. Albert, Oluwole Isaac, 2009. Whose E-Governance? A Critique of Online Citizen Engagement in Africa. African Journal of Political Science and International Relations Vol. 3, No. 4, pp. 133-141, April. Alechenu, John, 2010. Senate Passes Final Amendments to 1999 Constitution. Punch Newspapes, 26 March. Awoleye, Michael; Adeniran Oluwanti; Willie Siyanbola and Rotimi Adagunodo, 2008. Assessment of E-Governance Resource Use in South-Western Nigeria. Proceedings of the 2nd International Conference on Theory and Practice of Electronic Governance, Cairo Egypt, ACM International Conference Proceedings Series, Vol. 351. Awowole-Brown, Francis, 2010. Constitution Amendment: Senate Voting Inconclusive. Daily Sun Newspaper. 25 March. Curtin, Gregoy. G.; Michael H. Sommer and Veronika VisSommer, 2003. Introduction. Curtin, Gregory et al (Eds). The world of E-Government. The Haworth Political Press New York, USA, pp 1-16. Daily Trust Newspaper, 2010. Editorial: The Presidential Advisory Council. 10 March. Economist Intelligent Unit. The 2006 E-Readiness Rankings: A White Paper from the Economist Intelligence Unit. http://graphics.eiucom/files/ad_pdfs/2006Ereadiness_Ranking_WP.pdf (Accessed 01/05/2007) Edirin, Etaghene, 2010. Yar’Adua’s Return Widens Nation’s Divides. Daily Champion Newspaper, 28 February. Egwu, S.O., 2009. Address of the Honourable Minister of Education at the Retreat for Chairmen and Members of Governing Boards of Federal Polytechnics and Colleges of Education in Nigeria at Kaduna. Federal Ministry of Education of Nigeria, 2009. National Information and Communication Technology (ICT) in Education. 
Federal Ministry of Science and Technology of Nigeria, 2001. National Policy for Information Technology [IT]. Total Dominion Ltd., Lagos. Federal Republic of Nigeria, 1999. A Bill for an Act to make Public Records and Information more freely available, Provide for Public Access to Public Records and Information, Protect Public Records and Information to the Extent consistent with the Public Interest and the Protection of Personal Privacy, Protect Serving Public Officers from Adverse consequences for Disclosing certain kinds of Official Information without Authorization and Establish Procedures for the Achievement of the those purposes; and Related Purposes hereof. Federal Government Press, Lagos. Federal Republic of Nigeria, 1999. Constitution of the Federal Republic of Nigeria. Federal Government Press, Lagos. Federal Republic of Nigeria, 2008. National Policy on Education. NERDC Press, Abuja. Freedom of Information Coalition, 2010. Activities Carried out by Freedom of Information Coalition. http://www.foicoalition.org/activities/index.htm?56,12. (Accessed 19/03/2010). Freedom of Information Coalition, 2010. Member Organisations of the Freedom of Information Coalition. http://www.foicoalition.org/members/index.htm?47,11. (Accessed 19/03/2010). Golu, Timothy, 2009. Electoral Reform – Group Adopts Uwais Report. Leadership Newspaper, 15 September. Idonor, Daniel, 2008. Nigeria: Electoral Reform – UWAIS Panel Recommends Independent Candidates. Daily Champion Newspaper, 12 December.

20

IADIS International Conference on Internet Technologies & Society 2010



ISBN: 978-972-8939-31-1 © 2010 IADIS

APPLYING THE THEORY OF PLANNED BEHAVIOUR TO EXPLAIN THE USAGE INTENTIONS OF MUSIC DOWNLOAD STORES: GENDER AND AGE DIFFERENCES

Markus Makkonen, Veikko Halttunen and Lauri Frank
University of Jyväskylä, Department of Computer Science and Information Systems, P.O. Box 35, FI-40014 University of Jyväskylä, Finland

ABSTRACT

This paper examines the applicability of the theory of planned behaviour (TPB) in explaining the usage intentions of music download stores as well as the gender and age differences in the core constructs of TPB and their interrelationships. The examination is based on the analysis of an online survey sample of 1 418 Finnish consumers through structural equation modelling (SEM) and multiple group analysis. The results of the analysis suggest that TPB can successfully be applied to explain about half of the total variance in the usage intentions, and that attitude towards using the stores is by far the most important explanatory factor, followed by subjective norm towards their usage. In contrast, the effect of perceived behavioural control over their usage was found to be only marginal. There are also some significant differences in the core constructs of TPB and their interrelationships between men and women as well as across age groups. Based on these findings, implications for the business models of digital music retailing are provided.

KEYWORDS

Music download stores, the theory of planned behaviour, usage intentions, gender and age differences

1. INTRODUCTION

During the past decade, the Internet has slowly but steadily emerged as one of the main channels for purchasing and selling recorded music. By 2009, a quarter of the recorded music industry's global revenues already came from digital channels, constituting a $4.2 billion market (IFPI, 2010). However, despite its popularity, digital music retailing still seems to remain a rather uncharted area in terms of consumer behaviour (Makkonen et al., 2010). For example, apart from a few notable exceptions (e.g., Chu & Lu, 2007; Kunze & Mai, 2007; Kwong & Park, 2008; Bounagui & Nel, 2009), very few academic studies have attempted to explain and predict consumer behaviour in the context of digital music retailing by applying the theories and models traditionally used in consumer research. This is a critical concern for the future of digital music retailing because an understanding of the fundamental needs, wants and expectations of individual consumers is one of the core requirements for the systematic design and development of tomorrow's business models and success stories in this topical area (Amberg & Schröder, 2007). To address this problem, the present paper examines the applicability of one of the best-known theories for explaining and predicting consumer behaviour, the theory of planned behaviour (TPB), in explaining the usage intentions of music download stores. In this paper, music download stores are defined as online stores selling music as downloadable files on a pay-per-download basis (e.g., iTunes Store). In addition, the paper provides an examination of the gender and age differences in the core constructs of TPB and their interrelationships. Both of the examinations are based on the analysis of an online survey sample of 1 418 Finnish consumers through structural equation modelling (SEM) and multiple group analysis.
The paper begins by providing a brief introduction to TPB in Section 2 and proceeds with a description of the employed data gathering, measurement and data analysis methods in Section 3. Section 4 reports the main results of the study, and these results are discussed further in Section 5, which also outlines some important topics for future research. Finally, the main limitations of the study are briefly described in Section 6.


2. THE THEORY OF PLANNED BEHAVIOUR

The theory of planned behaviour (TPB – Ajzen, 1985, 1991) is an extension of the well-known theory of reasoned action (TRA – Fishbein & Ajzen, 1975; Ajzen & Fishbein, 1980) and one of the most widely used theories for explaining and predicting human behaviour. During the past 30 years, TPB has been successfully applied to examine human behaviour in numerous areas (Ajzen, 2010). One of the most popular application areas has been consumer behaviour. However, only a few studies (e.g., Kwong & Park, 2008) have applied TPB to examine consumer behaviour in the context of digital music retailing, although a number of applications can be found in the context of digital music piracy and illegal peer-to-peer (P2P) file sharing (e.g., Al-Rafee & Cronan, 2006; Cronan & Al-Rafee, 2008). The core constructs of TPB and their hypothesised interrelationships are illustrated in Figure 1 (Ajzen, 1991). The two most central constructs of TPB are behaviour and intention, the latter of which captures the motivational factors that influence the performance of a behaviour. In other words, intention indicates how hard individuals are willing to try and how much effort they are willing to exert in order to perform a behaviour. The core hypothesis of TPB is that the stronger the intention to perform the behaviour, the more probable is its performance. Another core hypothesis of TPB is that intention, in turn, is determined by three antecedent factors: attitude towards the behaviour, subjective norm towards it and perceived behavioural control over it. Attitude captures individuals' positive and negative evaluations of performing the behaviour, whereas subjective norm captures individuals' perceptions of the social pressure to perform or to not perform it. Perceived behavioural control, in turn, refers to individuals' sense of self-efficacy or ability to perform the behaviour.
The more positive the attitude and subjective norm towards the behaviour and the more perceived behavioural control there is over it, the stronger is the intention to perform the behaviour and, consequently, the more probable is also its performance. Of course, the relative importance of the three antecedent factors varies from individual to individual and also depends on the situation under investigation. For some individuals and situations, attitudinal evaluations may be more important than normative and control ones, whereas for others, normative or control evaluations may dominate.

[Figure 1 is a path diagram in which attitude, subjective norm and perceived behavioural control each point to intention, which in turn points to behaviour.]

Figure 1. The theory of planned behaviour (Ajzen, 1991)

In addition to influencing behaviour indirectly through intention, perceived behavioural control is hypothesised to influence behaviour directly by acting as a proxy for actual control (Ajzen, 1991). However, in the present paper, this relationship and the relationship between intention and behaviour are not examined further because the primary focus is on the usage intentions of music download stores as well as on explaining them with attitudinal, normative and control evaluations.
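To make the structural logic of TPB concrete, the intention equation can be sketched as a linear model of the three antecedents. The following snippet is our own illustration, not part of the study: all coefficients, sample values and variable names are invented (the "true" weights only loosely echo the estimates reported later in this paper), and the simulated predictors are uncorrelated, unlike real attitudinal, normative and control evaluations.

```python
import numpy as np

# Illustrative sketch only: the structural part of TPB as a linear model.
# The coefficients below are invented for illustration.
rng = np.random.default_rng(42)
n = 1418  # sample size chosen to match the study, for flavour

attitude = rng.normal(size=n)
subjective_norm = rng.normal(size=n)
pbc = rng.normal(size=n)  # perceived behavioural control

# intention = b_att * attitude + b_sn * subjective_norm + b_pbc * pbc + noise
true_b = np.array([0.54, 0.22, 0.09])
X = np.column_stack([attitude, subjective_norm, pbc])
intention = X @ true_b + rng.normal(scale=0.7, size=n)

# Ordinary least squares recovers the structural coefficients
b_hat, *_ = np.linalg.lstsq(X, intention, rcond=None)
r2 = 1 - np.var(intention - X @ b_hat) / np.var(intention)
print(b_hat.round(2), round(r2, 2))
```

In the actual study the constructs are latent variables estimated with SEM rather than observed scores, so this regression is only a conceptual stand-in for the structural part of the model.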

3. METHODOLOGY

3.1 Data Gathering

To examine the applicability of TPB in explaining the usage intentions of music download stores as well as the gender and age differences in the core constructs of TPB and their interrelationships, a self-administered online survey was conducted among Finnish consumers. A self-administered online survey was selected as


the data gathering method because of its cost-effectiveness in gathering the large amount of quantitative data that was required for the study. The survey questionnaire was composed using the LimeSurvey 1.87+ software, and before the actual survey, it was pre-tested with several postgraduate students and industry experts. Based on their comments, some minor improvements were made. The actual survey was launched in June 2010, and it was online for three weeks. During this time, the survey link was promoted by sending multiple invitation e-mails through the internal communication channels of our own university as well as through an electronic mailing list provided by a Finnish retail chain, which contained 5 000 e-mail addresses of their randomly sampled regular customers. In addition, the survey link was posted to two websites promoting online competitions and surveys, as well as to two music-related discussion forums. To raise the response rate, all of the respondents who completed the survey were also offered an opportunity to take part in a prize drawing, in which 41 gift cards with a total worth of 1 500 € were raffled among them. During the three weeks, 1 418 complete and valid responses were received (a further 29 responses were excluded from the analysis due to missing data in all of the items used to measure the TPB constructs). The mean response time for the survey was about 17 minutes, suggesting that the questionnaire was rather long for a self-administered online survey. This was also indicated by the relatively high drop-off rate of 25.9 %. However, we do not consider the response time or the drop-off rate high enough to suggest severe respondent fatigue. The descriptive statistics of the survey sample are presented in Table 1. As can be seen, the sample can be characterised as very heterogeneous in terms of the gender, age, income and socioeconomic group of the respondents.
It also contained a relatively large number of respondents who had previously purchased music from a download store. The mean age of the respondents was 36.4 years (SD = 12.6 years), and the overall gender, age and income distributions of the sample corresponded quite well with the gender and age distributions of the Finnish Internet population in 2007 as well as the income distribution of all Finnish income recipients in 2008 (Statistics Finland, 2010). Women and the youngest age group were slightly overrepresented, whereas men and the two oldest age groups were underrepresented. However, there were no indications of severe nonresponse bias in terms of the three variables.

Table 1. Descriptive statistics of the sample

Variable                        Group                Number   Percentage
Gender                          Male                    596       42.0 %
                                Female                  822       58.0 %
Age                             –30 years               522       36.8 %
                                30–44 years             497       35.0 %
                                45– years               399       28.1 %
Annual gross income per person  –15 000 €               480       33.9 %
                                15 000–29 999 €         381       26.9 %
                                30 000– €               380       26.8 %
                                Missing                 177       12.5 %
Socioeconomic group             Student                 338       23.8 %
                                Employed                782       55.1 %
                                Unemployed              123        8.7 %
                                Pensioner                81        5.7 %
                                Other                    83        5.9 %
                                Missing                  11        0.8 %
Has purchased music from        Yes                     350       24.7 %
a download store?               No                     1018       71.8 %
                                Missing                  50        3.5 %

3.2 Measurement

Altogether, the survey questionnaire consisted of 108–112 items (depending on responses). However, only 13 of these items were used for the purpose of this paper. Two of the items measured the gender and age of the respondents, and the remaining 11 items measured the intention, attitude, subjective norm and perceived behavioural control constructs of TPB described in Section 2. The design of the measurement items followed the suggestions given by Ajzen (2006). For example, before the actual survey, two preliminary surveys were conducted in April and May 2010, and based on the responses from 66 and 56 university students and staff


members, the most suitable items from the preliminary sets were selected to the final set. The items in the final set (translated from Finnish to English) and their sample means are listed in Table 2. Attitude was measured by three items, in which the respondents were asked to rate their attitudes towards purchasing music from a download store using a five-point semantic differential scale consisting of bipolar adjective pairs. As suggested by Ajzen (2006), the items were designed to capture both the experiential (ATT2) and the instrumental (ATT3) dimensions of attitudinal evaluations as well as overall attitude (ATT1). Subjective norm and perceived behavioural control were each measured by three items, in which the respondents rated statements concerning purchasing music from a download store using a five-point Likert scale ranging from strong disagreement to strong agreement. As suggested by Ajzen (2006), the normative items were designed to capture both the descriptive (SN1 and SN2) and the injunctive (SN3) dimensions of normative evaluations, whereas the control items were designed to capture both the capability (PBC1 and PBC2) and the control (PBC2 and PBC3) dimensions of control evaluations. Intention was measured similar to subjective norm and perceived behavioural control, but by two items only. Table 2. Measurement items of the constructs Item

Description

Mean

INT1 INT2 ATT1

I plan to purchase music from a download store in the next three months. I intend to purchase music from a download store in the next three months. The idea of me purchasing music from a download store in the next three months sounds good – bad. The idea of me purchasing music from a download store in the next three months sounds unpleasant – pleasant. The idea of me purchasing music from a download store in the next three months sounds foolish – wise. Many people close to me purchase music from download stores. Purchasing music from download stores is common among people close to me. Many people close to me think that purchasing music from download stores is a good idea. If I wanted to, I could purchase music from a download store in the next three months. I possess the necessary knowledge, skills and other resources to purchase music from a download store in the next three months. Excluding my own unwillingness, there is nothing that would prevent me from purchasing music from a download store in the next three months.

1.813 1.808 2.692

ATT2 ATT3 SN1 SN2 SN3 PBC1

PBC2 PBC3

2.737 2.647 2.548 2.326 2.710 3.891 3.919 4.036
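As a simple reading aid (our own back-of-the-envelope arithmetic, not an analysis performed in the paper), averaging the item means in Table 2 within each construct highlights the contrast between respondents' perceived ability to use the stores and their willingness to do so:

```python
# Unweighted construct-level averages of the item means in Table 2.
# A rough reading aid only; the paper itself models latent constructs,
# not item averages.
item_means = {
    "intention": [1.813, 1.808],
    "attitude": [2.692, 2.737, 2.647],
    "subjective_norm": [2.548, 2.326, 2.710],
    "pbc": [3.891, 3.919, 4.036],
}
construct_means = {name: sum(v) / len(v) for name, v in item_means.items()}
for name, mean in construct_means.items():
    print(f"{name}: {mean:.2f}")
# On the five-point scales used, intention sits well below the scale
# midpoint (3), while perceived behavioural control sits clearly above it.
```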

3.3 Data Analysis

The analysis of the gathered data was based on structural equation modelling (SEM) and multiple group analysis conducted using the Mplus Version 6 software (Muthén & Muthén, 2010). First, the applicability of TPB in explaining the usage intentions of music download stores was examined by estimating the TPB model for the whole sample and studying its fit to the data, parameter estimates and explanatory power. Model fit was evaluated using the χ2 test of model fit and four alternative fit indices: the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA) and the standardised root mean square residual (SRMR). The reason for using multiple fit indices stemmed from the recommendations given by several scholars, who urge that model fit should not be evaluated solely on the basis of the χ2 test or any other single fit index – rather, a combination of several fit indices should be used (Byrne et al., 1989). For example, the χ2 test has been found to be sensitive to sample size and model complexity, and thus it tends to underestimate model fit in the case of large samples or complex models (Bentler & Bonett, 1980). On the other hand, the weakness of the four alternative fit indices is that there are no unambiguous lower or upper limits for determining sufficient or good model fit. However, it has commonly been suggested (e.g., Hooper et al., 2008) that in the case of CFI and TLI, values greater than or equal to 0.90 indicate sufficient model fit and values greater than or equal to 0.95 indicate good model fit. In the case of RMSEA, values less than or equal to 0.08 indicate sufficient model fit and values less than or equal to 0.05 indicate good model fit. The value of SRMR should typically be less than or equal to 0.05.
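The cutoff rules quoted above can be collected into a small helper function. This is our own sketch of the decision rules cited from Hooper et al. (2008), not part of the authors' toolchain:

```python
# Sketch of the fit-evaluation rules quoted above: CFI/TLI >= 0.95 good,
# >= 0.90 sufficient; RMSEA <= 0.05 good, <= 0.08 sufficient; SRMR <= 0.05.
def judge_fit(cfi, tli, rmsea, srmr):
    def level(value, good, sufficient, higher_is_better):
        if higher_is_better:
            return ("good" if value >= good
                    else "sufficient" if value >= sufficient else "poor")
        return ("good" if value <= good
                else "sufficient" if value <= sufficient else "poor")

    return {
        "CFI": level(cfi, 0.95, 0.90, True),
        "TLI": level(tli, 0.95, 0.90, True),
        "RMSEA": level(rmsea, 0.05, 0.08, False),
        "SRMR": "good" if srmr <= 0.05 else "poor",
    }

# Applied to the final TPB model reported in Section 4.2
# (CFI = 0.987, TLI = 0.980, RMSEA = 0.042, SRMR = 0.030):
print(judge_fit(0.987, 0.980, 0.042, 0.030))
```

By these rules, every index of the final model reported in Section 4.2 falls in the "good" range.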
Next, the gender and age differences in the core constructs of TPB and their interrelationships were examined by estimating the TPB model separately for each group and comparing the construct means and regression coefficients across the groups. However, before these comparisons could be meaningfully conducted, measurement invariance had to be established across the groups. At the minimum, the comparison of the regression coefficients requires configural and metric invariance, whereas the comparison of the construct means requires configural, metric and scalar invariance (Steenkamp & Baumgartner, 1998). The testing of these three types of measurement invariance was done using the testing procedure formalised by Steenkamp and Baumgartner (1998), in which increasingly strict constraints on parameter equality are added


across the groups and the fit of the resulting nested models is compared. In the case of configural invariance, the constraints concern only the simple structure (pattern of non-null regressions) of the constructs, which must be equal across the groups. Metric invariance builds on configural invariance by also constraining the factor loadings to be equal across the groups, whereas scalar invariance builds on metric invariance by also constraining the item intercepts to be equal across the groups. If the addition of these constraints results in no significant deterioration in the model fit, the hypothesis of full measurement invariance is accepted; otherwise, it is rejected. In the latter case, the hypothesis of partial measurement invariance may be tested by relaxing the added constraints one by one based on the modification indices of the model until the deterioration in the model fit becomes insignificant. In this study, the significance of the deterioration in the model fit was evaluated based on the changes in the χ2 values and the χ2 test of difference using Satorra-Bentler (2001) scaling (Satorra-Bentler scaling had to be used because the models were estimated with the MLR estimator). However, because the χ2 test of difference suffers from the same sensitivity to sample size and model complexity as the χ2 test of model fit, the changes in the four alternative fit indices were also considered, as suggested by Steenkamp and Baumgartner (1998).
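For reference, the Satorra-Bentler scaled difference statistic used here can be sketched as follows. The function implements the standard Satorra and Bentler (2001) formula for robust (MLR-scaled) χ2 values; note that plugging in the rounded values later published in Table 5 will not exactly reproduce the reported ∆χ2 figures, because the statistic is sensitive to rounding in the scaling correction factors.

```python
from scipy.stats import chi2

# Satorra-Bentler (2001) scaled chi-square difference test for nested
# models estimated with a robust (e.g., MLR) estimator.
# Model 0 is the more restricted (nested) model, model 1 the less
# restricted one. t0, t1 are the reported (scaled) chi-square values and
# c0, c1 the scaling correction factors.
def sb_scaled_diff(t0, df0, c0, t1, df1, c1):
    df_d = df0 - df1
    cd = (df0 * c0 - df1 * c1) / df_d   # scaling factor of the difference
    trd = (t0 * c0 - t1 * c1) / cd      # scaled difference statistic
    return trd, df_d, chi2.sf(trd, df_d)

# Full metric vs. full configural invariance (gender groups), using the
# rounded values from Table 5 of this paper:
trd, df_d, p = sb_scaled_diff(187.975, 81, 1.092, 180.401, 74, 1.100)
print(round(trd, 3), df_d, round(p, 3))
```

With these rounded inputs the statistic comes out near 6.8 rather than the 7.518 reported, illustrating how sensitive the computation is to the precision of the published correction factors; the substantive conclusion (no significant deterioration) is unchanged.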

4. RESULTS

4.1 Reliability and Validity

Before examining the TPB model as well as the gender and age differences in its core constructs and their interrelationships more closely, the reliability and the convergent and discriminant validity of the constructs and their measurement items were first evaluated. Reliability was evaluated using the Cronbach's alphas and composite reliabilities of the constructs, which are listed on the left side of Table 3. As can be seen, the Cronbach's alpha and composite reliability of each construct was well above the commonly suggested lower limit of 0.7 (e.g., Gefen et al., 2000), thus indicating good reliability.

Table 3. Cronbach's alphas, composite reliabilities, AVEs, square roots of AVEs and correlations of the constructs (square roots of the AVEs on the diagonal)

Construct                            Cronbach's alpha   Composite reliability   AVE     INT     ATT     SN      PBC
Intention (INT)                      0.948              0.947                   0.899   0.948
Attitude (ATT)                       0.921              0.922                   0.798   0.655   0.893
Subjective norm (SN)                 0.938              0.938                   0.834   0.459   0.369   0.913
Perceived behavioural control (PBC)  0.893              0.895                   0.740   0.286   0.301   0.098   0.860
Convergent validity was evaluated using the criterion suggested by Fornell and Larcker (1981), which states that the average variance extracted (AVE) of each construct should be greater than 0.5. The AVEs of the constructs are listed on the left side of Table 3. As can be seen, all of the constructs fulfilled the criterion, thus indicating good convergent validity. Discriminant validity was evaluated using another criterion suggested by Fornell and Larcker (1981), which states that for each construct, the square root of its AVE should be greater than its correlation with the other constructs. The right side of Table 3 lists the correlations between the constructs, with the square roots of the AVEs on the diagonal. As can be seen, all of the constructs fulfilled this criterion as well, thus also indicating good discriminant validity. In addition, convergent and discriminant validity were evaluated by conducting an exploratory factor analysis (EFA) for the constructs and their measurement items using the PASW Statistics 18 software. The results of the EFA are listed in Table 4. As can be seen, each of the measurement items loaded highly on one construct only, and this construct was the one that the item was designed to measure. This provides further support for the good convergent and discriminant validity of the constructs and their measurement items.


Table 4. EFA of the measurement items using Promax rotation (κ = 4)

Item    Intention   Attitude   Subjective norm   Perceived behavioural control
INT1     0.904       0.054      0.008             -0.001
INT2     0.934       0.032      0.003              0.005
ATT1     0.164       0.762      0.031              0.016
ATT2    -0.081       0.994      0.008              0.024
ATT3     0.005       0.910     -0.020             -0.044
SN1     -0.007      -0.033      0.968             -0.006
SN2      0.035      -0.035      0.963             -0.038
SN3     -0.028       0.092      0.811              0.051
PBC1     0.009       0.046     -0.008              0.819
PBC2     0.024      -0.006     -0.038              0.902
PBC3    -0.029      -0.045      0.046              0.894
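The convergent and discriminant validity checks described above can be verified mechanically from the reported figures. The sketch below (our own code, not part of the original analysis) applies the two Fornell and Larcker (1981) criteria to the AVEs and latent correlations of Table 3:

```python
import math

# Fornell-Larcker checks using the AVEs and latent correlations of Table 3.
ave = {"INT": 0.899, "ATT": 0.798, "SN": 0.834, "PBC": 0.740}
corr = {("INT", "ATT"): 0.655, ("INT", "SN"): 0.459, ("INT", "PBC"): 0.286,
        ("ATT", "SN"): 0.369, ("ATT", "PBC"): 0.301, ("SN", "PBC"): 0.098}

# Convergent validity: AVE > 0.5 for every construct
ave_ok = all(v > 0.5 for v in ave.values())

# Discriminant validity: sqrt(AVE) of each construct exceeds its
# correlation with every other construct
fl_ok = all(math.sqrt(ave[a]) > r and math.sqrt(ave[b]) > r
            for (a, b), r in corr.items())

print(ave_ok, fl_ok)
```

Both checks pass on the reported figures, consistent with the conclusions drawn in the text.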

4.2 Model Estimation

The estimation of the TPB model was done using the MLR (robust maximum likelihood) estimator due to the non-normal distributions of nearly all of the measurement items. The initial TPB model (in which intention was measured by two items and the other three constructs by three items each, with no correlations allowed between the measurement errors) fitted the data fairly well. The χ2 test rejected the model (χ2(38) = 199.011, p < 0.001), but as discussed in Section 3.3, this was probably caused more by the sample size and model complexity than by actual problems in the model fit. The four alternative fit indices suggested good or at least sufficient fit to the data (CFI = 0.977, TLI = 0.967, RMSEA = 0.055, SRMR = 0.033). However, two exceptionally high modification indices indicated that the model fit could still be significantly improved by either allowing intention to be measured also by the item ATT1 (MI = 73.712) or allowing the measurement errors of the items ATT2 and ATT3 to correlate (MI = 70.384). Because the latter modification can also be justified on theoretical grounds (the two items measured the same construct in a very similar manner and were positioned side by side in the survey questionnaire), a decision was made to implement it.

[Figure 2 is a path diagram of the estimated model, showing the factor loadings of the eleven measurement items, the error correlation between ATT2 and ATT3, the regressions of intention on attitude (0.540***), subjective norm (0.218***) and perceived behavioural control (0.087***), and R2 = 52.4 % for intention.]

Figure 2. Estimated TPB model (a = fixed to 1, * = p < 0.05, ** = p < 0.01, *** = p < 0.001)

The fit indices of the final model suggested an even better fit to the data. The χ2 test still rejected the model (χ2(37) = 131.317, p < 0.001), but the other fit indices indicated an excellent fit (CFI = 0.987, TLI = 0.980, RMSEA = 0.042, SRMR = 0.030). In addition, none of the modification indices of the model stood out any longer as exceptionally high. The parameter estimates of the final model are presented in Figure 2. As can be seen, the regressions of intention on attitude, subjective norm and perceived behavioural control were all highly significant (p < 0.001) and positive, as hypothesised by TPB. The regression of intention on perceived behavioural control was very weak (β = 0.087), whereas the regressions of intention on subjective norm (β = 0.218) and attitude (β = 0.540) were relatively strong. Together, attitude,


subjective norm and perceived behavioural control explained about half (52.4 %) of the total variance in intentions.

4.3 Gender Differences

The examination of the differences in the construct means and regression coefficients between men and women followed the phased testing procedure described in Section 3.3, and its results (in terms of changes in model fit) are summarised in Table 5. The procedure began by estimating the TPB model illustrated in Figure 2 separately for men and women without any additional constraints between the groups. The resulting full configural invariance model fitted the data very well. Only the χ2 test rejected the model (χ2(74) = 180.401, p < 0.001), whereas the other fit indices suggested a good fit (CFI = 0.985, TLI = 0.978, RMSEA = 0.045, SRMR = 0.033). Thus, the hypothesis of configural invariance between the groups was accepted. Next, metric invariance was tested by constraining the factor loadings to be equal between the groups and comparing the fit of the resulting full metric invariance model to the fit of the full configural invariance model. The χ2 test suggested no significant deterioration in the model fit (∆χ2(7) = 7.518, p > 0.05), and this was supported by the other fit indices as well. Thus, the hypothesis of full metric invariance between the groups was also accepted. Next, scalar invariance was tested by also constraining the item intercepts to be equal between the groups and comparing the fit of the resulting full scalar invariance model to the fit of the full metric invariance model. This time the χ2 test suggested significant deterioration in the model fit (∆χ2(7) = 48.247, p < 0.001), and this was supported by the other fit indices as well, although the deterioration seemed not to be as severe as the χ2 test had indicated. By far the highest modification index (MI = 42.805) was associated with the intercept of the item PBC2, suggesting its non-invariance between the groups. Thus, the hypothesis of full scalar invariance was rejected and the testing proceeded with partial scalar invariance.
This was tested by relaxing the constraint concerning the intercept of the item PBC2 and re-comparing the fit of the resulting partial scalar invariance model to the fit of the full metric invariance model. The χ2 test no longer suggested significant deterioration in the model fit (∆χ2(6) = 4.714, p > 0.05), and this was supported by the other fit indices as well. Thus, the hypothesis of partial scalar invariance (in which the intercept of the item PBC2 is non-invariant between the groups) was accepted. Finally, the invariance of the regression coefficients was tested by also constraining them to be equal between the groups and comparing the fit of the resulting model to the fit of the partial scalar invariance model. The χ2 test suggested no significant deterioration in the model fit (∆χ2(3) = 3.887, p > 0.05), and this was supported by the other fit indices as well. Thus, the hypothesis of the full invariance of the regression coefficients between the groups was accepted.

Table 5. Tests of measurement invariance between men and women

Model                        CFI     TLI     RMSEA   SRMR    χ2        df   Scaling correction   ∆χ2      ∆df   p
Full configural invariance   0.985   0.978   0.045   0.033   180.401   74   1.100                –        –     –
Full metric invariance       0.985   0.980   0.043   0.034   187.975   81   1.092                7.518    7     0.377
Full scalar invariance       0.979   0.974   0.049   0.038   235.202   88   1.083                48.247   7     < 0.001
Partial scalar invariance    0.985   0.981   0.041   0.035   192.644   87   1.085                4.714    6     0.581
Full regression invariance   0.985   0.981   0.041   0.036   197.445   90   1.090                3.887    3     0.274

Although full scalar invariance could not be established between the groups, the construct means can still be meaningfully compared because each construct was measured by at least two items that had invariant factor loadings and item intercepts between the groups (Steenkamp & Baumgartner, 1998). A summary of this comparison is presented in Table 6, which also lists the values of the regression coefficients, the coefficients of determination and the non-invariant intercepts of the item PBC2 estimated for the final full regression invariance model. Note that the construct means of men had to be fixed to zero due to requirements related to model identification, meaning that men acted as a reference group for women. As can be seen, men and women did not differ significantly in terms of attitude, but women had a significantly stronger subjective norm and weaker perceived behavioural control compared to men. Additionally, the actual intention of women to use music download stores was slightly weaker compared to men. Overall, the TPB model explained 53.6 % of the total variance in intentions among men and 51.6 % among women.


Table 6. Construct means (α), regression coefficients (β), non-invariant intercepts (ν) and coefficients of determination (R2) for men and women (a = fixed to 0, * = p < 0.05, ** = p < 0.01, *** = p < 0.001)

Group   αINT       αATT     αSN        αPBC        νPBC2      R2
Men     0.000a     0.000a   0.000a     0.000a      4.379***   53.6 %
Women   -0.107*    0.096    0.330***   -0.477***   4.064***   51.6 %

Regression coefficients (invariant across the groups): βINT, ATT = 0.544***, βINT, SN = 0.224***, βINT, PBC = 0.070**

4.4 Age Differences

As in the case of gender, the examination of the differences in the construct means and regression coefficients across age groups followed the phased testing procedure described in Section 3.3, and its results (in terms of changes in model fit) are summarised in Table 7. The procedure began by estimating the TPB model illustrated in Figure 2 separately for the age groups of under 30 years, 30–44 years and 45 years or over without any additional constraints across the groups. The resulting full configural invariance model fitted the data very well. Only the χ2 test rejected the model (χ2(111) = 211.925, p < 0.001), whereas the other fit indices suggested a good fit (CFI = 0.986, TLI = 0.979, RMSEA = 0.044, SRMR = 0.033). Thus, the hypothesis of configural invariance across the groups was accepted. Next, metric invariance was tested by constraining the factor loadings to be equal across the groups and comparing the fit of the resulting full metric invariance model to the fit of the full configural invariance model. The χ2 test suggested no significant deterioration in the model fit (∆χ2(14) = 7.941, p > 0.05), and this was supported by the other fit indices as well. Thus, the hypothesis of full metric invariance across the groups was also accepted. Next, scalar invariance was tested by also constraining the item intercepts to be equal across the groups and comparing the fit of the resulting full scalar invariance model to the fit of the full metric invariance model. This time the χ2 test suggested significant deterioration in the model fit (∆χ2(14) = 37.987, p < 0.001), and this was supported by the other fit indices as well, although the deterioration once again seemed not to be as severe as the χ2 test had indicated. The highest modification index (MI = 12.612) was associated with the intercept of the item SN3 in the age group of under 30 years, suggesting its non-invariance between this group and the other groups.
Thus, the hypothesis on full scalar invariance was rejected and the testing proceeded with partial scalar invariance. This was tested by relaxing the constraints concerning the intercept of the item SN3 in the age group of under 30 years and re-comparing the fit of the resulting partial scalar invariance model to the fit of the full metric invariance model. The χ2 test still suggested significant deterioration in the model fit (∆χ2(13) = 25.558, p < 0.05), but because the ∆χ2 value was already very close to the acceptance limit (∆χ20.05(13) = 22.362) and the suggestion was also no longer supported by the other fit indices, the decision was made to accept the hypothesis on partial scalar invariance (in which the intercept of the item SN3 is non-invariant between the age groups of under 30 years and the other age groups). This decision also received support from the modification indices of the model, none of which anymore stood out as being exceptionally high. The highest ones (MI = 6.203) were associated with the intercepts of the items SN1 and SN2 in the age group of under 30 years. Finally, the invariance of the regression coefficients was tested by constraining them to be equal across the groups and comparing the fit of the resulting model to the fit of the partial scalar invariance model. The χ2 test suggested significant deterioration in the model fit (∆χ2(6) = 19.629, p < 0.01), and this suggestion was supported by the other fit indices as well, especially SRMR. Thus, the hypothesis on the full invariance of the regression coefficients was rejected and the testing proceeded with the partial invariance of the regression coefficients. The highest modification index (MI = 19.730) was associated with the regression of intention on subjective norm in the age group of 45 years or over, suggesting its non-invariance between this group and the other groups. 
Thus, the constraints concerning it were relaxed and the fit of the resulting model was recompared to the fit of the partial scalar invariance model. The χ2 test no longer suggested significant deterioration in the model fit (∆χ2(5) = 4.343, p > 0.05), and this was supported by the other fit indices as well. Thus, the hypothesis on the partial invariance of the regression coefficients (in which the regression of intention on subjective norm is non-invariant between the age group of 45 years or over and the other age groups) was accepted.
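Because the reported χ2 statistics are Satorra-Bentler scaled values (note the scaling correction factors in Table 7), the ∆χ2 values above are not simple differences of the χ2 statistics; they follow the scaled difference test of Satorra and Bentler (2001). The sketch below (the function name is ours) recomputes the configural-vs-metric comparison from the reported statistics; the result differs slightly from the reported ∆χ2(14) = 7.941, most likely because the scaling correction factors are reported to only three decimals.

```python
from scipy.stats import chi2

def scaled_chi2_diff(t1, df1, c1, t0, df0, c0):
    """Satorra-Bentler (2001) scaled chi-square difference test.
    (t1, df1, c1): scaled chi-square, degrees of freedom and scaling
    correction factor of the less restrictive model; (t0, df0, c0):
    the same for the more restrictive (nested) model, with df0 > df1."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # difference-test scaling correction
    trd = (t0 * c0 - t1 * c1) / cd             # scaled difference statistic
    ddf = df0 - df1
    return trd, ddf, chi2.sf(trd, ddf)         # statistic, df, p-value

# Full configural vs. full metric invariance, using the Table 7 values.
trd, ddf, p = scaled_chi2_diff(211.925, 111, 1.086, 220.194, 125, 1.081)
print(f"delta-chi2({ddf}) = {trd:.3f}, p = {p:.3f}")

# The 5 % critical value used in the partial scalar invariance decision.
print(f"chi2_0.05(13) = {chi2.ppf(0.95, 13):.3f}")  # 22.362
```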


ISBN: 978-972-8939-31-1 © 2010 IADIS

Table 7. Tests of measurement invariance across age groups

Model                           CFI    TLI    RMSEA  SRMR   χ2       df   Scaling      ∆χ2     ∆df  p
                                                                          correction
                                                                          factor
Full configural invariance      0.986  0.979  0.044  0.033  211.925  111  1.086        –       –    –
Full metric invariance          0.987  0.983  0.040  0.034  220.194  125  1.081        7.941   14   0.892
Full scalar invariance          0.984  0.981  0.043  0.038  257.864  139  1.072        37.987  14   < 0.001
Partial scalar invariance       0.985  0.982  0.041  0.036  245.109  138  1.071        25.558  13   0.019
Full regression invariance      0.983  0.980  0.043  0.049  269.900  144  1.079        19.629  6    0.003
Partial regression invariance   0.985  0.983  0.040  0.039  250.381  143  1.076        4.343   5    0.501

Although full scalar invariance could not be established across the groups, the construct means can still be meaningfully compared also in this case because each construct was measured by at least two items that had invariant factor loadings and item intercepts across the groups (Steenkamp & Baumgartner, 1998). A summary of this comparison is presented in Table 8, which also lists the values of the regression coefficients, the coefficients of determination and the non-invariant intercepts of the item SN3 estimated for the final partial regression invariance model. Note that the construct means of the age group of under 30 years had to be fixed to zero due to requirements related to model identification, meaning that this age group acted as a reference group for the other age groups. As can be seen, attitude was significantly more positive in the age group of 30–44 years compared to the other age groups, whereas subjective norm was significantly weaker in the age group of under 30 years and perceived behavioural control was significantly weaker in the age group of 45 years or over. All of the other differences in the construct means were statistically insignificant, as was the regression of intention on subjective norm in the age group of 45 years or over. Additionally, the actual intention to use music download stores did not differ significantly across the age groups. Overall, the TPB model explained 54.0 % of the total variance in intentions in the age group of under 30 years, 51.9 % in the age group of 30–44 years and 50.0 % in the age group of 45 years or over.

Table 8. Construct means (α), regression coefficients (β), non-invariant intercepts (ν) and coefficients of determination (R2) for different age groups (a = fixed to 0, * = p < 0.05, ** = p < 0.01, *** = p < 0.001)

Group         αINT     αATT     αSN       αPBC       βINT,ATT  βINT,SN   βINT,PBC  νSN3      R2
–29 years     0.000 a  0.000 a  0.000 a   0.000 a    0.522***  0.291***  0.084***  2.471***  54.0 %
30–44 years   -0.015   0.248**  0.635***  -0.075     0.522***  0.291***  0.084***  2.276***  51.9 %
45– years     0.017    -0.107   0.682***  -0.847***  0.522***  0.066     0.084***  2.276***  50.0 %

(In the final model, βINT,ATT and βINT,PBC were constrained equal across all groups, βINT,SN across the two youngest groups, and νSN3 across the two oldest groups; shared estimates are repeated in each row.)

5. DISCUSSION AND FUTURE RESEARCH

This paper examined the applicability of the theory of planned behaviour (TPB) in explaining the usage intentions of music download stores as well as the gender and age differences in the core constructs of TPB and their interrelationships. Its results indicate that TPB can indeed be successfully applied to explain the usage intentions of music download stores. As hypothesised by TPB, the usage intentions regressed positively on both attitude and subjective norm towards using the stores as well as on perceived behavioural control over their usage. Attitude was by far the most important explanatory factor for the usage intentions, followed by subjective norm. In contrast, the effect of perceived behavioural control on the usage intentions was found to be only marginal. Together, attitude, subjective norm and perceived behavioural control explained about half of the total variance in the usage intentions. Although this explanatory power can be considered satisfactory when compared to several other TPB studies (Ajzen, 1991), it still raises a question regarding the existence of other factors that might explain the remaining half. For example, could some other existing theory or model provide an even better explanatory power, or should entirely new models and theories be crafted specifically for the purpose of digital music retailing? These questions obviously cannot be answered in the context of the present paper but provide an interesting topic for future research.

The results also revealed some significant gender and age differences in the core constructs of TPB and their interrelationships. For example, when comparing men and women, women seemed to have a stronger subjective norm towards using music download stores but weaker perceived behavioural control over their usage. Additionally, their actual usage intentions were slightly weaker. In contrast, when comparing different age groups, the age group of 30–44 years seemed to have a slightly more positive attitude towards using


IADIS International Conference on Internet Technologies & Society 2010

music download stores, whereas perceived behavioural control over their usage was significantly weaker in the age group of 45 years or over. Both of these age groups also seemed to have a stronger subjective norm compared to the age group of under 30 years, although in the age group of 45 years or over, subjective norm was not found to have any influence on the usage intentions. However, no differences in the actual usage intentions were found across the age groups. Of these findings, especially the relatively negative attitude and weak subjective norm in the age group of under 30 years can be considered surprising, and they can perhaps be best explained by the popularity of alternative music acquisition channels, such as digital music piracy and illegal P2P file sharing, among this consumer segment (Bhattacharjee et al., 2003). Another interesting finding was the insignificant influence of subjective norm on the usage intentions in the age group of 45 years or over, which contradicts the prior findings by Venkatesh et al. (2003) suggesting that subjective norm should be more salient among elderly individuals, especially elderly women. A further interesting finding was that the explanatory power of TPB did not seem to differ significantly across the groups, although it was slightly stronger for men than for women and also seemed to weaken with age.

All in all, the results provide some interesting implications for the business models of music download stores. First and foremost, the business models should concentrate on improving the attitudinal evaluations towards using the stores, because attitude was found to be by far the most important explanatory factor for the usage intentions. Attitudinal improvement seems to be especially important in the age groups of under 30 years and 45 years or over, in which attitudes were found to be slightly more negative compared to the age group of 30–44 years.

Second, the business models should also concentrate on improving the normative evaluations towards using the stores, because subjective norm was also found to be an important explanatory factor for the usage intentions, although only in the two youngest age groups. Normative improvement seems to be especially important among men and in the age group of under 30 years. Third, although perceived behavioural control over using the stores was found to have only a marginal effect on the usage intentions, its importance should not be overlooked either. This is because, in addition to indirect effects on behaviour through intention, perceived behavioural control may potentially exert significant direct effects on behaviour, as discussed in Section 2. Therefore, improvements in the control evaluations over using the stores are also important. In this respect, women and the age group of 45 years or over seem to be the most critical consumer segments.

Unfortunately, changes in attitudinal, normative and control evaluations are typically not easy to accomplish. This is especially true in the case of attitudinal evaluations, although various theories of attitudinal change (e.g., learning theories, attribution theories, cognitive consistency theories, and high- and low-involvement information processing) have been proposed in prior literature (Sheth & Mittal, 2004). Typically, the systematic manipulation of the evaluations requires the elicitation of the belief composites underlying the aggregate constructs so that cognitive, affective and conative appeals for changing them can be designed and implemented (Ajzen, 1991; Sheth & Mittal, 2004). For example, what kind of beliefs do people possess (1) on the outcomes of using music download stores, (2) on the social pressures to either use or not use them and (3) on the factors that either facilitate or impede their usage? Therefore, the elicitation of these belief composites should be one of the main focuses of future research on digital music retailing.

6. LIMITATIONS

We consider this paper to have three main limitations. First, because a self-administered online survey was employed as the data gathering method, the results cannot be directly generalised to the whole Finnish population, but only to the Finnish Internet population. Second, the paper focused only on the main effects of gender and age on the core constructs of TPB and their interrelationships, and did not investigate the potential interactions of the two variables. Third, as discussed above, the paper also did not perform any further examination of the belief composites underlying the aggregate constructs, which obviously poses some limitations on the practical applicability of the results.


REFERENCES

Ajzen, I., 1985. From intentions to actions: A theory of planned behavior. In J. Kuhl and J. Beckman (Eds.), Action-control: From cognition to behavior (pp. 11–39). Springer, Heidelberg, Germany.
Ajzen, I., 1991. The theory of planned behavior. Organizational Behavior and Human Decision Processes, Vol. 50, No. 2, pp. 179–211.
Ajzen, I., 2006. Constructing a TPB Questionnaire: Conceptual and Methodological Considerations. Retrieved from http://www.people.umass.edu/aizen/pdf/tpb.measurement.pdf
Ajzen, I., 2010. The Theory of Planned Behavior: A Bibliography. Retrieved from http://www.people.umass.edu/aizen/tpbrefs.html
Ajzen, I. and Fishbein, M., 1980. Understanding Attitudes and Predicting Social Behavior. Prentice-Hall, Englewood Cliffs, NJ, USA.
Al-Rafee, S. and Cronan, T. P., 2006. Digital Piracy: Factors that Influence Attitude Toward Behavior. Journal of Business Ethics, Vol. 63, No. 3, pp. 237–259.
Amberg, M. and Schröder, M., 2007. E-business models and consumer expectations for digital audio distribution. Journal of Enterprise Information Management, Vol. 20, No. 3, pp. 291–303.
Bentler, P. M. and Bonett, D. G., 1980. Significance tests and goodness-of-fit in the analysis of covariance structures. Psychological Bulletin, Vol. 88, No. 3, pp. 588–606.
Bhattacharjee, S. et al., 2003. Digital Music and Online Sharing: Software Piracy 2.0? Communications of the ACM, Vol. 46, No. 7, pp. 107–111.
Bounagui, M. and Nel, J., 2009. Towards understanding intention to purchase online music downloads. Management Dynamics, Vol. 18, No. 1, pp. 15–26.
Byrne, B. M. et al., 1989. Testing for the Equivalence of Factor Covariance and Mean Structures: The Issue of Partial Measurement Invariance. Psychological Bulletin, Vol. 105, No. 3, pp. 456–466.
Chu, C.-W. and Lu, H.-P., 2007. Factors influencing online music purchase intention in Taiwan: An empirical study based on the value-intention framework. Internet Research, Vol. 17, No. 2, pp. 139–155.
Cronan, T. P. and Al-Rafee, S., 2008. Factors that Influence the Intention to Pirate Software and Media. Journal of Business Ethics, Vol. 78, No. 4, pp. 527–545.
Fishbein, M. and Ajzen, I., 1975. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Addison-Wesley, Reading, MA, USA.
Fornell, C. and Larcker, D. F., 1981. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, Vol. 18, No. 1, pp. 39–50.
Gefen, D. et al., 2000. Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, Vol. 4, Article 7.
Hooper, D. et al., 2008. Structural Equation Modelling: Guidelines for Determining Model Fit. Electronic Journal of Business Research Methods, Vol. 6, No. 1, pp. 53–60.
IFPI, 2010. Digital Music Report 2009 (IFPI Digital Music Reports). Retrieved from http://www.ifpi.org/content/library/DMR2010.pdf
Kunze, O. and Mai, L.-W., 2007. Consumer adoption of online music services: The influence of perceived risks and risk-relief strategies. International Journal of Retail & Distribution Management, Vol. 35, No. 11, pp. 862–877.
Kwong, S. W. and Park, J., 2008. Digital music services: consumer intention and adoption. Service Industries Journal, Vol. 28, No. 10, pp. 1463–1481.
Makkonen, M. et al., 2010. The Acquisition and Consumption Behaviour of Modern Recorded Music Consumers: Results from an Interview Study. In S. Krishnamurthy (Ed.), Proceedings of the IADIS International Conference eCommerce 2010, pp. 36–44.
Muthén, B. and Muthén, L., 2010. Mplus. Retrieved from http://www.statmodel.com
Satorra, A. and Bentler, P. M., 2001. A Scaled Difference Chi-square Test Statistic for Moment Structure Analysis. Psychometrika, Vol. 66, No. 4, pp. 507–514.
Sheth, J. N. and Mittal, B., 2004. Customer Behavior: A Managerial Perspective (2nd ed.). Thomson South-Western, Mason, OH, USA.
Statistics Finland, 2010. Statistics Finland. Retrieved from http://www.stat.fi
Steenkamp, J.-B. E. M. and Baumgartner, H., 1998. Assessing Measurement Invariance in Cross-National Consumer Research. Journal of Consumer Research, Vol. 25, No. 1, pp. 78–90.
Venkatesh, V. et al., 2003. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, Vol. 27, No. 3, pp. 425–478.


CAN VIRTUAL WORLD PROPERTY BE CONSIDERED A DIGITAL GOOD?

Nicholas C. Patterson and Michael Hobbs
Deakin University - Waurn Ponds, Victoria, Australia 3217

ABSTRACT

What types of goods should be considered digital goods? This paper discusses the question of whether virtual property, such as the items available in virtual world environments like Linden Lab's Second Life and Blizzard's World of Warcraft, should be considered a valid digital good. The makeup of virtual property items is explored in this paper, and their key features are compared and contrasted with those of digital goods. Common examples of digital goods include electronic books, software, digital music and digital movies. These goods are considered an intangible commodity: they have an unlimited supply, and they exist in a digital/binary form (a sequence of 1s and 0s). When considering why virtual property items should be included in the category of 'digital goods', it is important to consider how items in a virtual world come to exist and how the availability of these items is often controlled by publishers and developers. The aim of this paper is to show that digital goods should not be limited to the traditional examples such as electronic books, software, music and movies; the term 'digital good' should also include the active market of virtual property items.

KEYWORDS

Digital goods, virtual property, virtual world environments, real money trading, virtual property theft, piracy

1. INTRODUCTION

Virtual worlds fall into two categories: virtual world environments (VWEs) and virtual world games (VWGs). The distinct difference between the two is that VWEs are essentially persistent online social spaces which aim to represent the real world, whereas VWGs revolve around gaming activities, the accumulation of points and increasing levels – although VWGs can still be used for online social interaction. Within virtual worlds each user is represented by an avatar, a digital representation of themselves that can be used to collect a form of personal goods within the virtual world, aptly named virtual property. These virtual property items can be gathered from many sources within virtual worlds; they might come from performing missions or quests, creating items within the environment or purchasing them with virtual currency.

Virtual property can also be purchased with real world money, as there are both legitimate and illegitimate markets for virtual property items. Users can visit websites such as ItemBay (ItemBay 2005), a legitimate virtual property auction site, and purchase virtual property items of their choice; once the purchase is made, the virtual property is sent to the user's designated avatar in a virtual world. Most of the cyber crime in virtual worlds comes from the theft of these virtual property items, which are then sold for real money, a process known as Real Money Trading (RMT). Virtual worlds are becoming increasingly popular, and with this popularity so are the occurrences of cyber crime, so the security of these online worlds is becoming increasingly important.

Since the advent of high speed internet, digital goods have become prevalent in the online world. Users can purchase physical goods online, as well as purchase and download a myriad of different digital goods such as music, movies, video games and electronic books, just to name a few. The digital goods market operates by a user purchasing a product with real money (usually through a credit card transaction), after which they are permitted to download and use a digital version of that product on their home computer or mobile phone. It is important to note that the digital goods market is also prone to crime, in the form of piracy, where illegitimate copies of the digital goods are made available to other users. These alternate


sources can be downloaded without having to pay the purchase price, commonly through peer-to-peer file sharing technologies such as BitTorrent.

The goal of this paper is to present evidence that many similarities exist between traditional digital goods and the relatively new form of goods named virtual property. These two types of electronic item share many of the same benefits, and they also suffer from similar problems. The advantages of both digital goods and virtual property include: no physical inventory, no shipping, instant consumer gratification, instantaneous transactions, and prices that are often cheaper than those of physical items. The issues that exist for both digital goods and virtual property items include: inadequate legal support, vulnerability to crime, availability manipulation, and difficulty in providing customers with a demonstration of the item before purchase. Due to these distinct similarities, virtual property should now be included in the family of digital goods.

Virtual property and virtual worlds suffer from the crime of virtual property theft, a crime that is not yet fully recognized from a legal point of view. The limited capabilities of the law in this area often result in cyber thieves avoiding punishment. If virtual property is considered part of the family of digital goods, owners of these items could benefit from greater legal recognition in national and international courts of law.

This paper is structured as follows. Section 2 presents our definition and description of digital goods and highlights their features against those of virtual property items. Section 3 discusses the value of both digital goods and virtual property, as identified by merchants and users, respectively. This section also details the piracy and theft issues which are common to both of these goods. Section 4 ends the paper with concluding remarks that reinforce the overall argument.

2. DIGITAL GOODS AND VIRTUAL PROPERTY

2.1 What is a Digital Good?

So what exactly is a digital good? Quah (2003, p. 7) states that economists have traditionally viewed digital goods essentially as ideas, such as scientific knowledge, engineering blueprints and technological innovation. Rayna (2007, p. 19), on the other hand, argues that a digital good is an item that is distributed in a digital format, which can be anything that is encoded in binary form as a continuous stream of 1s and 0s. Haltiwanger and Jarmin (2000, p. 6) reinforce this view by stating that digital goods such as books, movies and music are items that can be delivered to consumers in a digital form over the internet, and that it is therefore possible for these goods to bypass traditional distribution channels. Rayna (2007, p. 19) further expands on the discussion, explaining that these digital goods are generally items that are used for entertainment purposes, such as movies, music, video games and books.

Digital goods have a high initial production cost and a very low, if not zero, reproduction cost. Digital goods have characteristics of a public good in that they can be shared with others and this sharing does not reduce the consumer's usefulness of the product (Bhattacharjee, Gopal et al. 2003, p. 108). These characteristics inadvertently allow for widespread and quite often illegal distribution worldwide (Bhattacharjee, Gopal et al. 2003, p. 108). Haltiwanger and Jarmin (2000, p. 10) reinforce this statement by saying that digital products are characterized by high fixed costs (for example, writing a book) and low marginal costs (say, emailing an electronic book file).

Rayna (2007, p. 25) identifies four key points that he believes are common to all digital goods:
• They are accessible to the public
• They contain information which has a value (yet their value can be subjective)
• They do not suffer from erosion
• They can be duplicated

In terms of being accessible to the public, digital goods can be accessed by consumers with simply an internet connection, a web browser and, in most cases, a form of payment option such as a credit card. These items contain a value which can be subjective: they are generally priced based upon factors such as popularity, release date and rarity, and these prices are often determined purely on an individual merchant basis. For example, in the music industry, if a popular artist and a less popular artist each release a new album on the same day, the merchant price of the popular artist's album is often higher. In terms of not suffering from erosion, these digital goods, being electronic, are made up of a series of


binary data which does not suffer from 'wear and tear' as physical items do. The last point, that they can be duplicated, means that because these items are digital and often stored on personal computers, they can quite easily be cloned or 'pirated' without any extra cost (this will be discussed further in Section 3.4).
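The cost structure described above – a high one-off production cost and a near-zero per-copy cost – can be made concrete with a small calculation. The figures below (a $200,000 production cost and a $0.01 per-copy delivery cost) are purely hypothetical illustrations, not values from the cited sources.

```python
def average_cost(fixed_cost: float, marginal_cost: float, copies: int) -> float:
    """Per-copy cost of a good with a one-off production cost
    (e.g. writing a book) and a per-copy cost (e.g. delivering the file)."""
    return (fixed_cost + marginal_cost * copies) / copies

# As the number of copies grows, the per-copy cost collapses
# toward the (near-zero) marginal cost of duplication.
for copies in (1_000, 100_000, 10_000_000):
    print(f"{copies:>10,} copies -> ${average_cost(200_000, 0.01, copies):,.2f} per copy")
```

Once the fixed cost is sunk, each additional copy costs almost nothing to produce, which is also why unauthorised duplication of digital goods is so cheap for pirates.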

2.2 What is a Virtual Property Item?

As discussed in the introduction, virtual worlds fall into two categories: virtual world environments (VWEs) and virtual world games (VWGs). Virtual worlds can have great numbers of users, amounting to millions; Blizzard Entertainment operates the award-winning fantasy VWG World of Warcraft, which is played by more than 11.5 million subscribers (Blizzard.Entertainment 2008). These virtual worlds allow people with internet connections to live a portion of their life in a digital form, letting them form virtual friendships, build and acquire virtual property and form social organizations (Keighley (cited in Lastowka and Hunter 2003)). In these virtual worlds people are now trading real money for 'property' existent purely in these virtual world environments, which then creates value in a virtual world (Kennedy 2008, p. 96). This process of real money trading (RMT) has grown rapidly over the past few years. It was initially viewed with curiosity as an example of the strange and unpredictable nature of the internet, but it has now become a profitable market in its own right and has created several fortunes for various people (Castronova (cited in Kennedy 2008, p. 96)).

Virtual world operators are beginning to introduce a method called 'microtransactions', where users pay a small amount of money for a virtual property item introduced by the virtual world operator, which usually conveys no competitive advantage within the world. In 2009 World of Warcraft started offering virtual pets to its user base for a small price; these were the first virtual world items sold for real world money in this VWG (Mastrapa 2009). Mastrapa (2009) goes on to say that it is not uncommon for users to pay real world money for virtual property items in 'free to play' VWGs, as the operators make their money from item sales. In subscription-based VWGs, by contrast, virtual property is mostly gathered through gameplay mechanics, as well as through time and effort on the user's part in questing or killing non player characters (NPCs).

When explaining what virtual property is, it can be beneficial to detail its characteristics. Virtual property has the same characteristics as digital goods, as outlined by Rayna (2007) in Section 2.1. Is virtual property available to the public? Yes, for anyone who uses one of the many virtual world environments available. Does virtual property contain information which has value? Yes, virtual property contains information that is valuable both in the real and the virtual world. Does virtual property erode quickly? No; being a digital item, it cannot physically wear out. Can the value of virtual property be subjective? Yes, the value of virtual property items is often determined by the users of these virtual worlds and is often based on the rarity or usefulness of the item. Can virtual property items be stolen or pirated? Virtual property items can be transferred or traded, and if this process is done without the consent of the owner it is considered virtual property theft.

2.3 Comparison of Digital Goods and Virtual Property

This section compares and contrasts digital goods and virtual property items. Rayna (2007, p. 18) explains that digital goods exist and are available to the public in very large quantities, and in some cases their value is not known to the consumer or the producer. This is similar to virtual property in virtual worlds, where items are abundant in quantity; in one specific VWG such as World of Warcraft there could be many thousands of clones/copies of the exact same virtual property item, which can be acquired in many different ways. It is possible for each avatar to have its own copy of a virtual item; for example, two avatars in the VWG World of Warcraft (Blizzard.Entertainment 2004) can each have a 'barbarian axe' (which could be gained either through purchase or through gameplay mechanics). The value of these items is often not known, and is therefore determined by virtual world operators and varies between different realms (regions of the same world).

Digital goods can come from various sources, for example official distributors, pirate peer-to-peer networks and social networks. The quality and reliability of these particular sources is often uncertain at best (Rayna 2007, p. 18). This directly correlates with virtual property; it can also come from many different


sources. These can be acquired through doing virtual world quests or missions, created by users, gained through direct trades for virtual currency or virtual world auction house trades, or purchased on World Wide Web (WWW) markets such as ItemBay for real money through RMT. These sources are uncertain at best, because sellers (in both virtual world markets and WWW markets) can often be deceitful and can in fact be cyber criminals peddling stolen goods.

Virtual property items exist in an environment which essentially has very limited legal avenues for prosecuting criminals who commit virtual world crimes such as virtual property theft. In many cases where a crime such as virtual property theft has occurred, it is difficult to apply real world laws. As an example, to protect the rights to a virtual world avatar, the law of unfair competition can be adapted. This law essentially protects unregistered trademarks and also trade dress, which could allow a user's virtual world avatar to qualify as an unregistered trademark (a mark that is unregistered but can still receive protection if prior rights can be claimed) or to receive trade dress protection covering the packaging or appearance of a product (Stephens 2002, p. 7). Even though digital goods exist in an online environment and are electronic rather than physical items, they do not have the same difficulty in the realm of law as virtual property: they can benefit from areas of law such as intellectual property rights and copyright.

Digital goods are essentially electronic forms of information, entertainment or culture; they are a type of good whose quality cannot be determined by the user prior to purchase (Nelson 1970; Nelson 1974). Consumers of these digital goods can sometimes experience the product before purchase through free samples (Wright and Lynch 1995); there are various sampling techniques, and in the case of music the merchant might let the user listen to 30 seconds of a song before the purchase is made. The distinct difference here with virtual property items is that the user cannot trial the item before purchase and has no idea how an item will perform prior to purchase or collection; only once the user has gained control of the item will they know how effective or ineffective it is. Sampling is a method that cannot work for virtual property, because one needs the entirety of a virtual property item to gain a perspective on how it looks and functions. Because these virtual property items often have distinct uses in VWGs, sampling could also raise the issue of cheating, where one user could continually utilize samples of high-powered items to achieve a competitive advantage over other users.

To summarize, virtual property items and digital goods are similar in many ways: both are available in very large quantities and often their value is not known to the consumer. Both can come from a variety of sources and are not limited to one particular supply merchant, but these streams can often be uncertain at best in terms of factors such as security and safety.

3. VALUE, TRADING AND SECURITY OF DIGITAL GOODS AND VIRTUAL PROPERTY

Digital goods and virtual property both have value associated with them, either explicitly, as the marked price on a digital good, or subjectively, as the value of a virtual property item obtained through many long hours of playing a virtual world game. In this section we discuss the issue of value further in regard to digital goods and virtual property, in addition to the mechanisms by which both these items are traded and the security concerns users experience when trading.

3.1 The Value of Digital Goods

The question of how the value of a digital good is defined is complex. Rayna (2007, p. 25) states that digital goods such as music, software or video games actually need to be experienced multiple times before their true value can be known to the consumer. Some current examples of digital goods pricing: the top-selling video game is valued at $89.95 AU (Games.Warehouse 2010), the top-selling music album at $24.99 AU (Sanity.Entertainment 2010), and the top-selling BluRay version of a movie at $34.99 AU (Sanity.Entertainment 2010). Digital goods are often valued through consumer experience; in other words, the value of these goods cannot be determined without the consumer first trying them (Hill 2007, p. 18). So when legal samples are not available, users may actually turn to a pirated copy of the product they desire in order to put a value on the digital good, and then decide whether or not to make a legal purchase of that product (Hill 2007, p. 18).


IADIS International Conference on Internet Technologies & Society 2010

For example, with digital music the user might download a particular band's album and decide they like it, so that when the band's new album becomes available they will make a legal purchase of it. For digital goods in the form of software, network effects are quite important in defining value. By network effects is meant that the value of owning a software product is an important function of the number of other people who own the same product (Hill 2007, p. 18). Using the example of the Microsoft Office product, the ability to exchange Office files with other users of Office creates value for owners of Office; this value is therefore an increasing function of the number of other consumers in the network (Hill 2007, p. 18). On the issue of the availability of digital goods in the real world, the publisher can essentially create an unlimited number of copies of a good; the number produced is often determined by judging potential popularity and estimated sales, and can easily be increased when products exceed expectations. The cost of digital goods is initially set by the publisher, and secondary markets or retailers can then choose to increase or decrease the cost to the consumer depending on factors such as ongoing sales and popularity.
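The network effect described above can be sketched as a simple value function. This is a hypothetical illustration: the linear form and both constants are assumptions chosen for clarity, not figures from the paper or from Hill (2007).

```python
# Hypothetical illustration of a network effect: the value of owning a
# product such as an office suite grows with the number of other owners
# one can exchange files with. The linear form and both constants are
# assumptions for illustration only.
def product_value_to_owner(network_size, standalone_value=10.0, value_per_peer=0.05):
    """Value to one owner: a fixed standalone component plus a component
    that increases with every other consumer in the network."""
    peers = max(network_size - 1, 0)  # the owner's potential file-exchange partners
    return standalone_value + value_per_peer * peers

# The identical product is worth more to each owner in a larger network.
assert product_value_to_owner(1) == 10.0
assert product_value_to_owner(1_000_000) > product_value_to_owner(100)
```

Any increasing function of `network_size` would make the same point; the linear form is merely the simplest choice.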

3.2 The Value of Virtual Property

Virtual property items are arbitrary items which only have use within the virtual world to which they are related. For example, trying to use a pair of virtual shoes from the VWE Second Life (Linden.Research.Inc 1999) in the VWG World of Warcraft will not work, whereas digital goods have multiple uses and generally widespread compatibility: watching a movie or reading a book can both be done from the same personal computer. The value of virtual property initially comes from the virtual world operator: the operator gives each virtual property item an initial value, determines its availability to users through techniques such as a 'percentage drop rate' (where users in VWGs attack enemies and, once these are beaten, items become available on the corpse), and gives each item a distinct use. Once an owner comes into possession of a virtual property item, they can influence the economy by setting a price which suits them. The supply of these virtual property items is handled by the virtual world operator through manipulation of the economy. Most virtual property items could be created in infinite quantities, but their availability is limited to what the virtual world operators decide is a good amount to have within the world. Within a VWG the operator wants a suitable amount of every grade of virtual property among the population in order to stabilize the economy: for example, many commonly acquired items, fewer rare-quality items, and fewer still epic-quality items. If there were more epic items than common-grade items, the economy would be thrown out of control, as these items are worth more and often have greater uses; if every user were reaping the rewards of such hard-to-find items, this would ultimately lead to imbalance. Virtual property items often have a specific 'grade' (e.g. common, rare or epic) which determines worth.
These 'rarer' items often perform vastly better or contain special abilities for the user and are therefore more sought after. In the case of a virtual axe in the VWG Guild Wars (NCSoft 2009), the item has distinct attributes and a 'grade' which give it a specific use. Because of this, virtual property items can often have an implicit value set by estimating how good or bad they are based on potential performance, but the user has to experience the use of the item multiple times before a value can truly be attached. To outline what virtual property is worth in various markets, Ku & Gupta (2008, p. 428) provide some examples of what virtual property sells for in real world markets: a high-level virtual character in the world of the popular game Lineage (NCSoft 2009) can fetch upwards of $800 USD, and in a Yahoo (Yahoo.Inc 1994) auction a Mahjong account with 3 million in Mahjong currency fetches around the same price of $800 USD. In the first week of January 2010 a virtual space station in the virtual world game Project Entropia sold for $330,000 United States dollars (Thier 2010). Project Entropia thrives with an 820,000-strong user base inhabiting a planet called Calypso; users utilize a form of currency called Project Entropia Dollars (PEDs) which can be exchanged for real U.S. dollars at a 10-to-1 ratio (Thier 2010). User-to-user transactions in Project Entropia in 2008 exceeded $420 million (Thier 2010).
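The 'percentage drop rate' mechanism and the common/rare/epic supply profile described above can be sketched as a small simulation. The grade names follow the text, but the drop probabilities are invented for illustration and are not taken from any actual game.

```python
import random

# Hypothetical sketch of a 'percentage drop rate': each time a defeated
# enemy is looted, every item grade has some chance of appearing on the
# corpse. The probabilities below are assumptions for illustration.
DROP_RATES = {"common": 0.40, "rare": 0.05, "epic": 0.005}

def loot_corpse(rng):
    """Roll each grade independently; rarer grades drop far less often,
    which is how the operator keeps valuable items scarce."""
    return [grade for grade, rate in DROP_RATES.items() if rng.random() < rate]

rng = random.Random(42)  # seeded so the simulation is reproducible
drops = [item for _ in range(10_000) for item in loot_corpse(rng)]

# The resulting supply profile matches the stable economy the text
# describes: many common items, fewer rare items, fewer still epic items.
assert drops.count("common") > drops.count("rare") > drops.count("epic")
```

With these assumed rates, roughly 4,000 common, 500 rare, and 50 epic items appear per 10,000 corpses, which illustrates why epic-grade items command much higher prices on player markets.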


ISBN: 978-972-8939-31-1 © 2010 IADIS

3.3 Real Money Trading of Virtual Property

This section discusses the process called Real Money Trading, or RMT, in which real world money is paid to obtain virtual property; this property can then in turn be traded for virtual money within the virtual world and converted back into real world money.

Figure 1. A model of virtual property trading in a virtual world game

The model presented in Figure 1 (Patterson and Hobbs 2010, p. 163) is a representation of how virtual property trading can occur, in VWGs in particular. When analyzing this model, it is important to point out the key features, which highlight that virtual property has both a digital value and a real world value: the virtual world and the entities that represent game players within it (referred to as Game Avatars in the figure); the possessions of the game players (Virtual Property); the mechanisms to transfer virtual property between players (Trading); and finally, the interface between the virtual and real worlds (Input and Output). The model outlines that some form of value is first input into the virtual world, most often real world money used to purchase virtual property items (it could also take the form of time and effort spent creating, collecting, or completing items). This virtual property can then be traded between avatars in the virtual world. Once the new owner (avatar) gains the virtual property, they can trade it on legitimate markets for real world money. This form of trading is essential to the virtual world marketplace due to the inability to clone these items. Virtual property generally maintains its value, unlike digital goods, which can be pirated or cloned illegally (piracy being the purchase of counterfeit products at a discount to the actual price of the copyrighted product, or the illegal file sharing of copyrighted material, usually over peer-to-peer networks for free (Hill 2007, p. 9)). Section 3.4 will expand and elaborate on the crimes that affect both digital goods and virtual property: piracy for digital goods and deliberate theft for virtual property.

Figure 2. A model of digital goods trading

The model presented in Figure 2 is a representation of how digital goods trading can occur. When analyzing this model, it is important to point out the key features, which highlight that digital goods are essentially information, a bit stream of 1s and 0s, traded in an online marketplace: the entity that represents the person purchasing a digital good (referred to as 'Consumer' in the figure); the item being purchased (Digital Goods); the mechanisms to transfer digital goods between consumer and merchant (Delivery); and finally, the connection between the merchant and consumer (Internet).
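The contrast between the two trading models can be sketched as a minimal data model. All class and field names below are illustrative assumptions, not an API defined by the paper; the point is that a virtual property item changes owner when traded and cannot be cloned, whereas a digital good is information that is copied to the consumer on delivery.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical sketch of the entities in the two trading models
# (Figures 1 and 2). Names are illustrative assumptions only.

@dataclass
class VirtualItem:
    name: str
    grade: str   # e.g. "common", "rare", "epic"
    owner: str   # name of the avatar currently holding the item

@dataclass
class Avatar:
    name: str
    inventory: list = field(default_factory=list)

def trade(item, seller, buyer):
    """Virtual property trading (Figure 1): ownership transfers; no copy is made."""
    seller.inventory.remove(item)
    buyer.inventory.append(item)
    item.owner = buyer.name

def deliver(good_content, consumer_library):
    """Digital goods trading (Figure 2): the consumer receives the content
    while the merchant's master copy is untouched, so supply is unlimited."""
    consumer_library.append(good_content)

sword = VirtualItem("virtual sword", "epic", owner="alice")
alice, bob = Avatar("alice", [sword]), Avatar("bob")
trade(sword, alice, bob)
assert sword.owner == "bob" and sword not in alice.inventory

library = []
deliver(b"movie-bits", library)
assert library == [b"movie-bits"]
```

The asymmetry in the two functions mirrors the paper's argument: `trade` depletes the seller, which is why virtual property retains value, while `deliver` depletes no one, which is why digital goods are exposed to piracy.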


3.4 Piracy and Theft

A very important aspect of virtual goods is that they are now subject to the crime of theft. This has a direct correlation with digital goods, which, regardless of their attributes or function, are being illegally reproduced (pirated) (Rayna 2007, p. 26). Digital piracy is the purchase of counterfeit products at a discount to the actual price of the copyrighted product, and also the illegal file sharing of copyrighted material, usually over peer-to-peer networks for free (Hill 2007, p. 9). Essentially, digital piracy is equivalent to the theft of intellectual property (Hill 2007, p. 10). Rayna (2007, p. 19) explains that a large amount of the distribution and exchange of digital goods occurs in secretive environments where the monetary value of these products essentially ceases to exist and every individual can freely gain 'clones' of these goods without permission from the creator or original merchant. This also applies to virtual property items, as VWEs are now a target for criminals looking for virtual property (such as virtual currency) because it carries real world value. These criminals deliberately break into users' accounts, steal virtual property and virtual avatars in the game, and then sell them, usually on the black market, for anywhere from hundreds to thousands of dollars (Spring 2006). One plausible reason why computer criminals have turned to virtual world crime is that it brings less risk than traditional forms of crime: there is little chance that police will be able to prosecute them for stealing, say, a magic potion item in a virtual world (Spring 2006), even if they are caught.
To emphasize this point, in the United States in particular, among other countries, these cyber criminals benefit greatly from very loose or nonexistent virtual property laws, and it is believed that game designers are reluctant to push for legal recognition of virtual property for fear of being held liable for theft (Spring 2006).

4. CONCLUSION

This paper has discussed why virtual property should be considered a digital good. It highlights that digital goods and virtual property are both entertainment items distributed in a digital format. The actual worth of these items can often be undetermined, but an approximate value can be applied by examining the information that is embedded within them. A virtual property item is likewise an item used for entertainment purposes, but it can also be considered a tradable commodity: it can be sold for real money in the same way a digital good can. It is important to note that digital goods and virtual property are both digital in nature and are not physical items that can be handled by the user. Both are purchased in a digital format accessible purely through some kind of technological device such as a personal computer. To give an example that describes this scenario better: when an individual purchases a digital movie over the internet or a virtual property item such as a 'virtual sword', they do not own a real world item but merely a binary stream of 1s and 0s. This has several implications for the user: the items will not erode over time; they are often cheaper; purchases can be made with instant gratification; in the case of digital goods, they can be cloned; and they raise various legal issues. Digital items are commonly sold through online vendors; electronic books, for example, are sold through websites such as Amazon (Amazon.com 1995). Virtual property items can be sold in a variety of ways, both through virtual world trading between avatars and through WWW trading between individuals on websites such as ItemBay. The key factor in the trade supply chain of virtual property items is that they are essentially limited in quantity, a factor manipulated by the virtual world operators themselves.
Digital goods, by contrast, are ultimately unlimited in supply, with quantities often based on predicted market sales estimates. Crime is a factor for both digital goods and virtual property items, and the common crime between the two is ultimately classed as theft: digital goods suffer from piracy, or unlicensed cloning, and virtual property items from deliberate theft by cyber criminals who compromise users' accounts and steal all the valuable items. To summarize, this paper has set out the facts about both digital goods and virtual property. It is clear that virtual property, although it exists purely within virtual worlds, benefits and suffers from the same characteristics as digital goods, and therefore should be considered a digital good by nature. Digital goods are more widely recognized as a form of 'property', and with that recognition certain benefits arise, especially in the


field of law. If an individual steals or commits piracy, they can be held liable in a court of law; if an individual steals a virtual property item, it is still very hard to prosecute the criminal. For virtual world crime to be reduced, virtual property needs to be considered a digital good; this can bring many benefits, such as the greater legal recognition which is currently lacking given the abundance of virtual property theft and real money trading.

REFERENCES

Amazon.com, 1995. Amazon.com Incorporated. [online] Available at: [Accessed 1 November 2010].
Bhattacharjee, S., Gopal, R. D., et al., 2003. Digital music and online sharing: software piracy 2.0? Commun. ACM, Vol. 46, No. 7, pp. 107-111.
Blizzard.Entertainment, 2004. World of Warcraft Community Site. [online] Available at: [Accessed 1 November 2010].
Blizzard.Entertainment, 2008. Press Releases. [online] Available at: [Accessed 1 November 2010].
Games.Warehouse, 2010. Games Warehouse. [online] Available at: [Accessed 1 November 2010].
Haltiwanger, J. and Jarmin, R. S., 2000. Measuring the Digital Economy. MIT Press, Cambridge, MA.
Hill, C., 2007. Digital piracy: Causes, consequences, and strategic responses. Asia Pacific Journal of Management, Vol. 24, No. 1, pp. 9-25.
ItemBay, 2005. Professional online game virtual currency supplier and buyer and seller of game accounts. [online] Available at: [Accessed 1 November 2010].
Kennedy, R., 2008. Virtual rights? Property in online game objects and characters. Information & Communications Technology Law, Vol. 17, No. 2, pp. 95-106.
Ku, Y. and Gupta, S., 2008. Online Gaming Perpetrators Model. In Intelligence and Security Informatics. Springer-Verlag, Berlin, Heidelberg, pp. 428-433.
Lastowka, G. and Hunter, D., 2003. The Laws of the Virtual Worlds. Public Law and Legal Theory Research Paper Series, University of Pennsylvania Law School, Pennsylvania, Vol. 26, No. 03-10.
Linden.Research.Inc, 1999. Second Life Official Site. [online] Available at: [Accessed 1 November 2010].
Mastrapa, G., 2009. Uh-Oh: World of Warcraft Introduces Microtransactions. [online] Available at: [Accessed 1 November 2010].
NCSoft, 2009. Lineage - Popular fantasy MMORPG from NCSoft. [online] Available at: [Accessed 1 November 2010].
Nelson, P., 1970. Information and Consumer Behavior. The Journal of Political Economy, Vol. 78, No. 2, pp. 311-329.
Nelson, P., 1974. Advertising as Information. The Journal of Political Economy, Vol. 82, No. 4, pp. 729-754.
Patterson, N. and Hobbs, M., 2010. A Multidiscipline Approach to Governing Virtual Property Theft in Virtual Worlds. In What Kind of Information Society? Governance, Virtuality, Surveillance, Sustainability, Resilience. Brisbane, Australia, pp. 161-171.
Quah, D., 2003. Digital Goods and the New Economy. CEP Discussion Papers, Vol. 563. London School of Economics, London.
Rayna, T., 2007. Digital goods as public durable goods. Ph.D. University of Aix-Marseille.
Sanity.Entertainment, 2010. Sanity Entertainment. [online] Available at: [Accessed 1 November 2010].
Spring, S., 2006. Virtual Thievery. Newsweek (Atlantic Edition), Vol. 148, No. 24, p. 10.
Stephens, M., 2002. Sales of in-game assets: An illustration of the continuing failure of intellectual property law to protect digital-content creators. Texas Law Review, Vol. 80, p. 1513.
Thier, D., 2010. Virtual Space Station Sells for Price of an Actual House. [online] Available at: [Accessed 1 November 2010].
Wright, A. A. and Lynch, J. G., Jr., 1995. Communication Effects of Advertising Versus Direct Experience When Both Search and Experience Attributes are Present. The Journal of Consumer Research, Vol. 21, No. 4, pp. 708-718.
Yahoo.Inc, 1994. Yahoo!. [online] Available at: [Accessed 1 November 2010].


CONSTRUCTION OF TRUST IN ONLINE AUCTION INTERACTIONS

Sanna Malinen and Jarno Ojala
Tampere University of Technology, Human-centered Technology, P.O. Box 589, 33101 Tampere, Finland

ABSTRACT

Trust experienced between partners is a precondition for business transactions. As trust is known to emerge especially in face-to-face interaction, it takes a longer time and more effort for it to evolve in an online context. This study describes users' viewpoints on how trust is experienced and constructed in interactions between users of an online auction site. The results are based on 24 qualitative interviews with active users who both sell and buy items on a popular Finnish online auction site, 'Huuto.net'. The results show that a reputation system based on user feedback is essential for the evaluation of other users and their reliability. However, the interviewees found some flaws in the reputation system, and therefore felt a need to look for cues of trustworthiness and reliability in other ways, such as asking additional questions, looking for additional information on the internet, and reading the advertisements carefully. We suggest that these more advanced ways of evaluating the reliability of other users evolve with time and experience. As a practical result, we aim to point out which kinds of design elements of the service support the experience of trust, and present design guidelines for facilitating trust in online auctions.

KEYWORDS

Trust, reputation system, online auction, e-commerce, C2C, social interaction.

1. INTRODUCTION

Trust is the basis of interpersonal relationships, and as a social phenomenon, it emerges from and maintains itself within the interactions of people (Weber & Carter, 2003). Trust is known to emerge especially in personal face-to-face interactions, and therefore the online context has been considered challenging for the evolution of trust between people (Friedman et al, 2000; Toma, 2010). In computer-mediated communication the nature and amount of information available from others are altered (Hancock & Dunham, 2001). Trust plays an important role in business transactions between people as well. Especially for consumer-to-consumer (C2C) e-commerce and online trading sites to be successful, trust in both the system and other people is essential, as a climate of trust eases and facilitates cooperation between people and adoption of the service (e.g. Shneiderman, 2000). The more trustworthy people perceive the system to be, the more willingly they will transact. Even though trust is built between two persons, technology has an important role in the formation of trust, as it can either assist or hinder the process (Friedman et al, 2000). Many different systems for facilitating trust on e-commerce sites have been developed in order to compensate for the lack of face-to-face presence in an online context. For example, systems that are based on feedback and ratings from others have been found useful for communicating users' reputations, and thus facilitating trust between the users of the system. As trust is an important prerequisite for successful transactions, it is essential to understand how people interpret each other's trustworthiness, and how technology shapes these impressions. To understand how trust is experienced and formed between people on an online auction site, we conducted a qualitative study with 24 users of a popular Finnish online auction site, Huuto.net (http://www.huuto.net/fi/).
In the present study we aim to find out how experienced trust or mistrust affects the user experience, and how the reliability of business partners can be evaluated on an online auction site. On the basis of the user interviews, we describe what kind of information about others is important for the formation of online reputation. As a practical result, we draw conclusions from the user interviews by presenting how a climate of trust can be supported through successful design and policy of online trading sites.


2. RELATED WORK

Trust is about expectations of the future (Shneiderman, 2000). The term 'trust' refers to a belief in other people's good intentions; that a person will behave reasonably and do what he or she says (Preece, 2004). However, there is an element of risk in the definition of trust: one can never be sure about the actions of another. As Luhmann (1979) says, if one could, there would be no need for trust. Because online environments are used by people from different backgrounds, and often anonymously, social interactions can sometimes be risky and unpredictable (Jensen et al, 2002). In online interaction it is more difficult to assess the potential for harm and the goodwill of other people, and cues that can be drawn from the environment are essential for the establishment of trust in other people (Friedman et al, 2000). In previous studies of computer-mediated communication, it has been noted that the evolution of trust takes more time in an online context, as facelessness and anonymity make it difficult to identify others and create enduring relationships with them (e.g. Hancock & Dunham, 2001). In face-to-face interaction people are able to draw inferences from non-verbal elements and cues which are absent online, and can therefore evaluate and respond to each other's emotions and thoughts more accurately (Feng et al, 2003). To overcome the challenges that anonymity, facelessness, and dependency on a technological system impose on e-commerce, many popular C2C transaction sites, such as eBay (http://www.ebay.com), have developed their own reputation systems in order to facilitate trust and minimize abuse. There are several types of reputation systems; some of them, e.g. giving ratings, require explicit activity from users, while others collect data about users' previous activities and provide information on what kinds of patterns the users follow (Jensen et al, 2002).
In particular, peer-based reputation systems, which are based on recommendations from others, play a significant role in decision-making, because the experiences of other people are known to be the most valuable information in the evaluation of trustworthiness (Jensen et al, 2002). People can also experience trust in technological systems. The experienced safety and reliability of the system affects trust, as does awareness of what people tend to do in the service. Friedman et al (2000) present the characteristics that support and facilitate trust in technological systems, and claim that in online commercial transactions people are vulnerable to trust violations in at least two ways: loss of money and loss of privacy. In order to avoid financial harm, mechanisms that minimize such violations are needed; e.g., an insurance system with a promise to compensate for possible financial harm can create a sense of responsibility. As for privacy violations, users need to be aware of what kind of data is collected on their actions and how the service provider is using it (Friedman et al, 2000). In his list of guidelines for designing trust in web services, Shneiderman (2000) emphasizes that users are invited to participate by ensuring trust, and that their actions can be accelerated by clarifying responsibilities. Users are more likely to participate in transactions when they are assured that they are engaging in a trusting relationship, and their engagement will continue if the policies of transactions, including each participant's responsibilities, are clearly explained (Shneiderman, 2000). Unlike the traditional face-to-face market, in which the buyer is able to see and examine the product before the purchase is made, in an online market the buyer needs to pay for the product before receiving it. In e-commerce the risk is usually on the buyer's side, and this is known to be one of the major obstacles to the development of e-commerce (Jarvenpaa & Tractinsky, 1999).
Perception of trustworthiness is an important factor when a seller is chosen by potential buyers. Previous research (Strader & Ramaswami, 2002; Ye et al, 2009) has investigated factors that contribute to seller trustworthiness and affect the selection of the seller; the results indicate that the most important factors contributing to the trustworthiness of a seller are reputation and the quality of previous transactions, from which honesty and good intentions can be directly evaluated. By contrast, of minimal importance for trustworthiness were the type of seller (whether an individual or a business) and how long the seller has been in the business, which indicates that an individual can be perceived as being as reliable as a business (Strader & Ramaswami, 2002). The study by Strader & Ramaswami (2002) also demonstrates that trustworthiness can act as an incentive for a seller, since buyers are even willing to pay more when buying from a trusted seller.


3. METHOD AND DATA

3.1 Service Studied

Huuto.net, the leading online auction site in Finland, was founded in 1999. In September 2010, the service had over 1.4 million registered users and over 990,000 items on sale. The service is owned and administrated by the European media group Sanoma, and it is intended for both private and professional sellers. Registered users are able to sell and make bids for items. Selling is organized as an auction, but users can also define a fixed price for their items. Putting items up for sale is completely free of charge, but the service also offers additional marketing features that cost extra. Huuto.net follows the protocol of real-life auctions, so that users can see the highest offer and the history of the bids made during the auction. Users are encouraged, but not obliged, to give feedback to their business partners after each transaction. The current feedback system consists of the sum of all the negative, neutral, and positive feedback given by others, and ratings on the three-point scale can be complemented with a free-form textual description. As a result, each registered user has a publicly visible history of past transactions on the service, which serves as their online reputation. Each user also has a personal profile page to which they can add a brief written description of themselves; along with the score received through peer reputation, this forms the online presentation of the user.
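A feedback system of the kind described above can be sketched as follows. The numeric weights (-1, 0, +1) are an assumption made for illustration; Huuto.net's actual aggregation may differ.

```python
# Hedged sketch of a three-point reputation score: each completed
# transaction can receive a negative, neutral, or positive rating plus an
# optional free-form comment; the public reputation is the sum over all
# ratings. The weights below are assumptions, not Huuto.net's real scheme.
SCORE = {"negative": -1, "neutral": 0, "positive": 1}

def reputation(feedback):
    """Sum the three-point ratings; free-form comments are displayed
    alongside the score but do not affect the number itself."""
    return sum(SCORE[rating] for rating, _comment in feedback)

history = [
    ("positive", "Fast delivery, item as described."),
    ("positive", "Great seller."),
    ("neutral",  "Slow to answer questions."),
    ("negative", "Item never arrived."),
]
assert reputation(history) == 1  # 1 + 1 + 0 - 1
```

A design consequence visible even in this sketch: a newcomer with an empty history scores 0, which is indistinguishable from a mixed record that also sums to 0, one reason the interviewees in this study looked beyond the aggregate number.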

3.2 Method and Analysis

In order to understand the user experiences and social interaction practices of online auction users, a qualitative interview study was conducted. Through face-to-face interviews, we wanted to understand the overall transaction process in an online auction and, particularly, what kind of information users are interested in when conducting business transactions. The themes of the semi-structured interviews covered the participants' online shopping and browsing practices, the history of their transactions, experiences relating to trust and reputation, and how they interact with other members. In this paper, we introduce the results that relate to trust and reputation. Before the interview, each participant filled in a brief questionnaire about their background information, such as age, educational background, and their usage of online shopping sites and social media. The duration of the interviews was approximately 1.5 hours, and the participants were able to use a PC during the session in order to illustrate their responses if needed. All the interviews were recorded in audio format. In order to conduct a content analysis, all of the interviews were transcribed and uploaded into the qualitative data analysis software QSR NVivo 8. With NVivo, the texts were analyzed by reading them through carefully and looking for recurring themes. The findings were divided into main categories using content analysis as a method. The approach to content analysis was summative (Hsieh & Shannon, 2005), as it started with the identification and quantification of certain themes with the purpose of understanding their contextual use.

3.3 The Participants

The research data consist of 24 user interviews that were collected in February and March 2010. Twenty of the participants were selected from the database of the online auction site by the service provider, on the basis of their recent activity and place of residence, and four pilot interviewees were recruited by the research team. All 24 participants were active users of Huuto.net who had been both selling and buying items through the service during the preceding couple of months. Twelve of the participants were female and 12 male, their ages ranging from 22 to 61 years with a mean age of 39 years (four participants did not report their age). Their professional backgrounds varied: seven of them worked in the field of ICT, six were from the fields of health and welfare, three were students, and two reported being private entrepreneurs. All the participants were long-term users of the service studied, having been registered members of the online auction site for three to ten years, the mean being 7 years.


The majority of the participants visited the service daily (63%) or weekly (25%). Half of the participants (n=12) reported mostly selling items, 9 reported selling and buying equally, and only 3 mentioned mostly buying via online auctions. For the two private entrepreneurs, the online auction site was an important sales channel through which they sold their products to customers. The participants were also asked about their usage of social media: 63% of the respondents reported using social networking sites, the most popular being Facebook and YouTube, while nine did not use social media at all. As for their usage of online shopping sites other than Huuto.net, they seem to be quite active in e-shopping, as all but two respondents reported having shopped in electronic marketplaces and using several online shops and marketplaces regularly.

4. RESULTS

4.1 Importance of Trust

The interviewees had different expectations regarding trust in other users, and the level of trust they experienced affected their activities as sellers and buyers. Some of them trusted others unless they saw signs of dishonesty, whereas others were suspicious and wanted to investigate others' reputations thoroughly before making a bid. Understandably, buying was perceived as riskier than selling, since it is the buyer who is expected to send money and pay before seeing the product. For this reason, buyers inspected reputations more closely than sellers did. However, being able to see the reputation of a business partner was important for sellers as well. The participants were asked in the interviews whether they had encountered any problems in transactions, and surprisingly, the majority had not had any negative experiences. Only three reported having experienced misuse that required intervention from the administrator or the authorities. Given the number of transactions they had completed, their experiences of misuse or cheating were relatively rare. The presence of an intermediary improved the reliability of the transaction process: in the event of a conflict, it is easy to check details such as the price or postage from the system database, where the information is recorded and stored for a couple of months after the deal is closed. Therefore, the system itself was considered reliable, although the participants admitted that the presence of other people can sometimes create a sense of mistrust, as users are able to cheat if they want to.

"I see this as a reliable site. Of course, the facelessness of others can cause some mistrust, but it is not the service itself that can be unreliable, it is the users." (female, 32 years)

In the interviews, everyone agreed that without any trust in other users, transacting online would be impossible. However, in order to become a trustworthy business partner you have to possess some previous history.
Therefore, the most difficult stage for members is being a newcomer without any history of previous transactions on the service. Where a user has no history at all, it is impossible for others to tell whether the person is a genuine newcomer or someone whose previous account was closed as a result of misuse. Because the honesty of a newcomer is difficult to evaluate, all the interviewees said that, in order to minimize risks, they would rather not buy anything from someone with no reputation at all.

"They have nothing to lose. If you have a history or even some negative feedback, then you have something to lose." (male, 56 years)

Where there was something suspicious about a business partner, or a lack of reputation, the users needed additional means of ensuring that the deal would be carried out successfully. Meeting face-to-face and being able to examine the product personally were important ways to avoid risks. The price of the product also affected the experience of trust to some extent: if the item was cheap, the risk taken was smaller and the potential buyer was less cautious about reliability.

"I am quite precise when the item costs more than, let's say, 20 euros. In that case, I will look more thoroughly at what the item is like and what the seller is like." (female, 36 years)

"I would not buy anything from a person who does not have any feedback unless I am able to meet them in person and pick up the product myself." (female, 27 years)

To overcome reliability problems, many participants preferred buying from certain sellers that they already knew to be reliable. Regular contacts were therefore formed between sellers and buyers, especially among users with shared interests, e.g., collectors. Regular contacts were perceived as being


IADIS International Conference on Internet Technologies & Society 2010

beneficial for both parties, and dealings between them involved more negotiation and more flexibility in the rules.

4.2 The Reputation System

"If you can see yourself that a person has plenty of (positive) feedback, you can send items before the payment. Because two hundred people can't be wrong." (female, 30 years)

According to the interviewees, the feedback system based on peer rating plays an important role in online auction sites. The reliability of the system depends on users' motivation to obtain a better reputation and to maintain the reputation they have achieved. Maintaining a good reputation is particularly important for heavy sellers and buyers, and they expected others to want to protect their reputations as well, because a good reputation is a prerequisite for doing business successfully. The interviews show that the heavy sellers in particular took their reputation very seriously, because they saw that even a small amount of non-positive feedback would cost them trading partners and harm their business. The reputation system was perceived as an essential feature in the assessment of reliability, and the interviewees felt that the system helps them avoid the biggest risks. However, they admitted that people can still find ways to cheat others in an online auction, and that there are flaws in the current implementation of the reputation system. As the system is based on the assumption that both parties give honest feedback after each transaction, its reliability becomes doubtful when people either give false feedback or give no feedback at all. One major problem of the current system, identified by the majority of interviewees, was that giving negative feedback is not customary. Only in the most obvious cases of lying or cheating did they think that negative feedback could be given; otherwise they found it difficult to make such a strong statement against another person, because it would be harmful for him or her.
Many interviewees also mentioned fearing revenge when giving negative feedback: since giving feedback is a two-way process, they would probably receive negative feedback in return. Some also stated that they could not be honest in their feedback because the three-point rating scale was considered too harsh.

"It is difficult to give negative feedback to anyone, even though it would be the right thing to do. Because you'll get a negative one back, that's for sure." (female, 27 years)

As negative feedback was perceived as so harmful, the participants reported that if any problems occurred, they would rather cancel the deal than risk damage to their reputation. As a result, problems in the transaction process often remain invisible to other users, since they are not documented at all. Other reliability problems of a peer reputation system mentioned by the users were that feedback is not given for every transaction, because not everyone is interested in giving it, and that the system makes it possible to gain a positive reputation score while still cheating on occasion, as long as the amount of non-positive feedback remains small enough not to affect one's reputation remarkably. In conclusion, to be encouraging for users, the feedback system should be as accurate as possible. Even though giving feedback is fast and easy, it is of no real value for the users if the information remains inaccurate, and therefore less reliable.
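The three-point feedback scale and the aggregate score it produces can be modelled with a minimal sketch. This is an illustrative assumption about how such a reputation score might be aggregated, not Huuto.net's actual implementation; the field names and the +1/0/−1 weighting are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative +1/0/-1 weights for the three-point scale; not the actual service logic.
SCORES = {"positive": 1, "neutral": 0, "negative": -1}

@dataclass
class Reputation:
    feedback: list = field(default_factory=list)

    def add(self, rating: str) -> None:
        if rating not in SCORES:
            raise ValueError(f"unknown rating: {rating}")
        self.feedback.append(rating)

    @property
    def score(self) -> int:
        """Aggregate score visible to other users."""
        return sum(SCORES[r] for r in self.feedback)

    @property
    def share_non_positive(self) -> float:
        """Share of neutral or negative feedback; a small share barely dents the score."""
        if not self.feedback:
            return 0.0
        return sum(1 for r in self.feedback if r != "positive") / len(self.feedback)

# A seller with 98 positive, 1 negative and 1 neutral rating still looks strong.
rep = Reputation()
for r in ["positive"] * 98 + ["negative", "neutral"]:
    rep.add(r)
print(rep.score, rep.share_non_positive)  # -> 97 0.02
```

The example illustrates the flaw the interviewees described: a user can retain a high score even with occasional non-positive feedback.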

4.3 Evaluation of Seller Trustworthiness

Trustworthiness is especially important for online sellers. We wanted to find out what kinds of things are read as signs of reliability and, conversely, what is interpreted as suspicious when perceiving trustworthiness. The interviewees were asked to describe situations in which they had felt mistrust towards other users, and the situations most often mentioned as arousing mistrust were these:
- There is not enough information about the previous history of an individual user, and therefore not enough information for assessing reliability.
- The information about a user is contradictory: when there is both negative and positive feedback, it can be difficult to make an overall estimation.
- The reliability of the feedback can sometimes be questioned: negative feedback does not always mean cheating, since feedback can be given unfairly.


- There is not enough information about a product on sale; for example, the description is too brief or general and lacks important details.
- There is no opportunity to check the product outside the online auction: checking that the product is as promised is especially important for expensive items.
- The business partner has a rude communication style or does not respond to questions.

The interviews show that virtual reputation is the primary tool for estimating others; their reliability is assessed on the basis of the feedback score and their previous activities as business partners. However, since the explicit reputation information provided by the system was not seen as completely reliable, the users reported having adopted other ways of finding information on a seller's trustworthiness. For example, reading selling advertisements carefully was considered a good way of finding out more about a particular seller. Advertisements can give cues about credibility and the person behind a user profile: good, detailed photographs and well-written text are considered signs of trustworthiness, whereas brief product descriptions lacking details and photographs reduced trust in a seller. Surprisingly, some interviewees also stated that too much praise and overly positive descriptions may seem suspicious as well:

"I am always assessing the reliability from the text written in the advertisement. In particular, the overwhelming praise looks suspicious. If you've got a good product that's enough, and there's no need to exaggerate it - - And using terms like 'superb' or 'magnificent' is useless because I want to decide myself whether the product is good or not." (female, 28 years)

Many of the interviewees admitted that because product descriptions are always written in a positive way, their reliability can be questioned.
As one of the participants, a 51-year-old male, put it: "As for the advertisements, everyone claims to be selling only new and perfect stuff." The participants therefore felt a need to look for implicit cues of trustworthiness too. The experienced users in this study, who were familiar with current online auction practices, had learned to recognize cheats on the basis of their advertisements. Evaluating a seller's reliability was perceived as an important part of the decision-making process, and information on trustworthiness was therefore sought from several sources. On the basis of the user interviews, several strategies for finding additional information on a particular seller can be identified. By analyzing the advertisements, potential buyers looked for cues about the seller and his or her expertise. The quality of the advertisement, including informative, well-written descriptions and good-quality pictures, can communicate the seller's credibility and knowledge about the product on sale. In addition, pictures and detailed descriptions were taken as proof that a seller actually possesses the product on sale. From the way in which a seller responds to feedback it is possible to draw conclusions about his or her trustworthiness: when evaluating a seller, buyers checked whether the seller had non-positive ratings in the peer reputation system, and the way in which the seller had commented on negative or neutral feedback given by others could show whether it was justified or not. Potential buyers appreciated expertise and knowledge about the items on sale, and these were gauged by asking additional questions, sometimes even irrelevant ones, in order to make sure that the seller was selling a real product.
Additional questions were also asked in order to evaluate the communication skills of the seller: a fluent and explicit communication style was taken as a sign of openness and reliability, whereas rude responses, no response at all, or bad language may repel potential buyers. Information about a seller was also sought from discussion forums in order to find more details for assessing reliability. In particular, if there was something suspicious about a certain nickname, additional information was sought from the online auction's discussion forum or from other websites through search engines. We suggest that these strategies for finding additional information were an important part of seller selection, since the majority of users mentioned having looked for more implicit information than just the peer rating score. Because finding implicit information may require previous transaction experience and evolve over time, we assume that these strategies are typical especially of more advanced users, such as the interviewees of this study.


4.4 Facilitating Trust Online

As a practical result, we summarize our findings from the empirical user study as the following five design recommendations for facilitating trust between users on online transaction sites.

4.4.1 Visible User Histories and Previous Activities in the System

The history of transactions plays a major role when looking for cues of trustworthiness. Therefore, it is important that the whole history of transactions, including the seller's sold and unsold items, is visible in the system.

4.4.2 Detailed and Easy-to-Use Feedback System

Evaluations from other users are considered the most valuable source of information, and ratings from others are therefore a core feature for gauging reliability. As giving feedback requires explicit action from users, it should be made fast and easy. If the feedback system remains unused, it has no real value.

4.4.3 Social Features to Ensure Good Communication

Misunderstandings and disappointments cannot be fully avoided in transactions between people. In order to solve disputes and negotiate deals, people need opportunities for fluent person-to-person communication.

4.4.4 Active Control and Interventions by the Administrators

In conflicts, users expect the administrator to solve the dispute. Awareness of active administration, and the knowledge that problems are solved quickly through administrator interventions, increases trust in the system and encourages people to use it. Awareness of control may also deter some misbehavior.

4.4.5 Clear Policy and Visible Rules

In unwanted and unexpected situations, users need clear rules on how to proceed, and the opportunity to check those rules in the service. When the rules of the auction are stated clearly, the number of misunderstandings decreases.

5. CONCLUSIONS

This study set out to describe how trust is formed in interaction between users of C2C online auction sites. Consistent with previous research (Strader & Ramaswami, 2002), online reputation and the quality of the user history were the most important factors contributing to the perception of trustworthiness. However, the user interviews indicate that many additional factors affect the formation of trust between potential business partners in the transaction process. In this study, we investigated the experiences of active users in order to understand how trust is created and how it can be facilitated in an online auction context. As Friedman et al. (2000) argue, people trust other people, not technology. For a climate of trust, it is therefore essential that the technological system supports the cues that matter in assessing the trustworthiness of other people. For the participants in the study presented here, reputation based on peer reviews was considered essential in making judgments about others. However, the reputation system had some deficiencies and was not considered completely reliable. For example, the participants did not want to give negative feedback about minor abuses, as they felt that the current three-point feedback scale did not always match their conceptions. The experienced users of this study were quite familiar with the flaws in the reputation system and had developed their own strategies for judging others' reliability: they looked for reliability cues in users' profile information and product descriptions, viewed the history of previous transactions, tested reliability by asking additional questions, and searched for information on discussion forums and through search engines.


ACKNOWLEDGEMENT

The authors would like to express their appreciation to Johan Saarela for assisting in this study, and to Sanoma for providing the opportunity to conduct this case study. This research was part of the project "PROFCOM – Product-Internationalization with Firm-Hosted Online Communities", mainly funded by Tekes, the Finnish Funding Agency for Technology and Innovation (decision number 2283/31/07).

REFERENCES

eBay, http://www.ebay.com/ (last retrieved on 23rd September, 2010)
Feng, J. et al, 2003, Interpersonal trust and empathy online: a fragile relationship. Proceedings of CHI 2003, Ft. Lauderdale, Florida, USA, pp. 718–719.
Friedman, B. et al, 2000, Trust online. Communications of the ACM, Vol. 43, No. 12, pp. 34–40.
Hancock, J.T. & Dunham, P.J., 2001, Impression Formation in Computer-Mediated Communication Revisited: An Analysis of the Breadth and Intensity of Impressions. Communication Research, Vol. 28, No. 3, pp. 325–347.
Hsieh, H. & Shannon, S.E., 2005, Three Approaches to Qualitative Content Analysis. Qualitative Health Research, Vol. 15, No. 9, pp. 1277–1288.
Huuto.net, http://www.huuto.net/fi/ (last retrieved on 23rd September, 2010)
Jarvenpaa, S. & Tractinsky, N., 1999, Consumer Trust in an Internet Store: A Cross-Cultural Validation. Journal of Computer-Mediated Communication, Vol. 5, No. 2, pp. 1–35.
Jensen, C. et al, 2002, Finding Others Online: Reputation Systems for Social Online Spaces. Proceedings of CHI 2002, Minneapolis, Minnesota, USA, pp. 447–454.
Luhmann, N., 1979. Trust and Power. John Wiley & Sons, London.
Preece, J., 2004, Etiquette, Empathy and Trust in Communities of Practice: Stepping-Stones to Social Capital. Journal of Universal Computer Science, Vol. 10, No. 3, pp. 294–302.
Shneiderman, B., 2000, Designing Trust into Online Experiences. Communications of the ACM, Vol. 43, No. 12, pp. 57–59.
Strader, T.J. & Ramaswami, S.N., 2002, The value of seller trustworthiness in C2C online markets. Communications of the ACM, Vol. 45, No. 12, pp. 45–49.
Toma, C.L., 2010, Perceptions of Trustworthiness Online: The Role of Visual and Textual Information. Proceedings of CSCW 2010, Savannah, Georgia, USA, pp. 13–21.
Weber, L.R. & Carter, A.I., 2003. The Social Construction of Trust. Kluwer Academic/Plenum Publishers, London.
Ye, Q. et al, 2009, The Impact of Seller Reputation on the Performance of Online Sales: Evidence from TaoBao Buy-It-Now (BIN) Data. SIGMIS Database, Vol. 40, No. 1, pp. 12–19.


THE EFFECTS OF SOCIOECONOMIC CHARACTERISTICS AND CONSUMER INVOLVEMENT ON THE ADOPTION OF MUSIC DOWNLOAD STORES AND PAID MUSIC SUBSCRIPTION SERVICES

Markus Makkonen, Veikko Halttunen and Lauri Frank
University of Jyväskylä, Department of Computer Science and Information Systems, P.O. Box 35, FI-40014 University of Jyväskylä, Finland

ABSTRACT

This paper investigates the effects of three socioeconomic characteristics (gender, age and income) and one personality variable (consumer involvement in music) on the adoption of music download stores and paid music subscription services in Finland. The investigation is based on the analysis of an online survey sample of 1 447 Finnish consumers through contingency tables, Pearson's χ² tests of independence and Cramér's V coefficients. The results of the analysis suggest that statistically significant dependencies exist between almost all of the investigated variables and the adoption of download stores and subscription services, and that these dependencies also differ between the stores and services. For example, the diffusion of subscription services seems to have occurred rather homogeneously across age and income groups, whereas the diffusion of download stores has been driven by more mature consumers with higher incomes. These findings and the explanatory factors behind them should be taken into consideration when crafting future business models for digital music retailing.

KEYWORDS

Music download stores, paid music subscription services, adoption, socioeconomic characteristics, consumer involvement

1. INTRODUCTION

During the past decade, there has been a drastic shift from physical to digital formats in the recorded music business. The sales of CDs and other physical formats have fallen sharply since the turn of the millennium, and today more and more recorded music is purchased and sold digitally over the Internet. In 2009, a quarter of the recorded music industry's global revenues already came from digital channels, constituting a $4.2 billion market (IFPI, 2010). However, this increase in digital sales has not been able to offset the sharp drop in the sales of physical formats (IFPI, 2010). One main reason for this seems to be digital music piracy, although its total effect on the sales of recorded music remains a controversial issue (e.g., Oberholzer-Gee & Strumpf, 2007 vs. Liebowitz, 2008). Another major reason seems to be that the business models used in digital music distribution far too seldom match the fundamental needs, wants and expectations of individual consumers, resulting in low rates of adoption and usage. For example, Amberg and Schröder (2007) found this to be the case with the business models of digital music distribution in the German market. To improve the situation, there is a clear need for more studies on consumer behaviour in the context of digital music distribution. Some initial studies on the topic are already available (e.g., Amberg & Schröder, 2007; Chu & Lu, 2007; Kunze & Mai, 2007; Kwong & Park, 2008; Bounagui & Nel, 2009). However, most studies thus far have concentrated on consumer behaviour only in the context of digital music piracy and illegal peer-to-peer (P2P) file sharing, whereas studies concentrating on consumer behaviour in the context of legal digital music retailing have been much rarer (Makkonen et al., 2010). The aim of the present paper is to address this imbalance by investigating the adoption of music download stores and paid music subscription services in Finland.
Its primary focus is on the effects of three socioeconomic characteristics


(gender, age and income) and one personality variable (consumer involvement in music) on their adoption, as well as on the differences and similarities in these effects between the stores and services. The study follows a hypothetico-deductive research model. First, hypotheses on the potential effects are derived from prior research in Section 2. These hypotheses are then tested using the methodology described in Section 3. Section 4 reports the main results of these tests, and the results are discussed further in Section 5, which also outlines some important topics for future research. Finally, the main limitations of the study are briefly described in Section 6.

2. THEORETICAL BACKGROUND

The theoretical background of the study is based on the diffusion of innovations (DOI) theory formalised by Rogers (2003), which investigates how new ideas, products and services spread in a social system. According to the theory, the members of a social system do not all adopt an innovation at the same time; instead, adoption occurs in a step-by-step process over time. In other words, an innovation is first adopted by the most innovative members of a social system, then by the slightly less innovative ones, and so forth. But what actually determines how early or late an individual adopts an innovation, and are those who adopt an innovation earlier or later than others characterised by specific traits or qualities? These are some of the key questions of DOI in particular and of the marketing of new products and services in general. To explore them, adopters are typically classified into one or more adopter categories based on their relative time of adoption. For example, Rogers (2003) describes five adopter categories: innovators, early adopters, early majority, late majority and laggards. When these adopter categories are explored in more detail, some common traits and qualities typically emerge. Rogers (2003) classifies these traits and qualities into three categories: (1) socioeconomic characteristics, (2) personality variables, and (3) communication behaviour. This paper concentrates on the first two categories by investigating the effects of three socioeconomic characteristics (gender, age and income) and one personality variable (consumer involvement in music) on the adoption of music download stores and paid music subscription services. In this paper, music download stores are defined as online stores selling music as downloadable files on a pay-per-download basis (e.g., the iTunes Store).
In contrast, paid music subscription services are defined as online services that also sell music as downloadable files or streaming content, but base their business primarily on flat-rate periodic fees rather than on pay-per-download or pay-per-usage pricing (e.g., Rhapsody and Spotify). The potential effects of gender, age, income and involvement on the adoption of download stores and subscription services are discussed in more detail in the following three subsections.
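As a simple illustration of the difference between the two pricing models, one can compute the monthly download volume at which a flat-rate fee undercuts per-track purchasing. The prices below are hypothetical and are not taken from any of the stores or services mentioned.

```python
import math

# Hypothetical prices; illustrative only, not actual store or service prices.
price_per_track = 0.99   # euros per download in a pay-per-download store
monthly_fee = 9.99       # euros per month for a flat-rate subscription

# Smallest number of monthly downloads at which the flat fee is strictly cheaper
# than buying the same number of tracks individually.
breakeven = math.floor(monthly_fee / price_per_track) + 1
print(breakeven)  # -> 11
```

Under these assumed prices, a consumer downloading 11 or more tracks per month would be better off with the subscription, which hints at why the two models may attract different consumer segments.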

2.1 Gender and Age

The effects of gender and age on the adoption of innovations remain a controversial issue (Rogers, 2003). This is especially true if adoption is examined on a general or global level, but also when it is investigated in specific domains. For example, in the context of online shopping, several studies have suggested that men are more avid shoppers than women and that online shopping is also positively associated with age, whereas several other studies have found no support for such effects (Chang et al., 2005; Zhou et al., 2007). According to Chang et al. (2005), these conflicting findings can perhaps best be explained by the fact that gender and age do not affect adoption directly, but only exert indirect effects. This view is also supported by the studies of Morris and Venkatesh (2000), Venkatesh and Morris (2000) and Venkatesh et al. (2000), who found gender and age to be important moderators of the interrelationships between the adoption of information technology and its numerous determinants, such as attitude, subjective norm, perceived behavioural control, perceived usefulness and perceived ease of use. While no prior research has specifically studied the effects of gender and age on the adoption of download stores and subscription services, several studies have found gender and age to be important determinants of digital music piracy and illegal P2P file sharing, suggesting that these activities are most prevalent among young men (e.g., Bhattacharjee et al., 2003; Chiang & Assane, 2008; Coyle et al., 2009). This, in turn, can be hypothesised to influence the adoption of download stores and subscription services through both substitutory and complementary effects. Thus, the following hypotheses are proposed:


H1store: There is dependency between gender and the adoption of download stores.
H1service: There is dependency between gender and the adoption of subscription services.
H2store: There is dependency between age and the adoption of download stores.
H2service: There is dependency between age and the adoption of subscription services.

2.2 Income

Unlike in the case of gender and age, there seems to be a common consensus on the positive effects of income on the adoption of innovations (Rogers, 2003). Typically, earlier adopters are assumed to have higher levels of income and wealth than later adopters. There are two main arguments behind this assumption (Rogers, 2003). First, higher levels of income and wealth may be a prerequisite for adopting some innovations. For example, some innovations may be extremely costly to adopt in their early life stages and may require high initial investments of capital. Besides money, their adoption may also require access to other resources, such as specific information sources or communication channels, which are available only to individuals with higher levels of income and wealth. Second, the adoption of innovations nearly always entails at least some level of risk and uncertainty, and individuals with higher levels of income and wealth are often better able to cope with these than other potential adopters. As in the case of gender and age, no prior research has specifically studied the effects of income on the adoption of download stores and subscription services. However, higher income has been found to increase the adoption of online shopping (Chang et al., 2005; Zhou et al., 2007) and to decrease digital music piracy and illegal P2P file sharing (e.g., Bhattacharjee et al., 2003; Coyle et al., 2009). Thus, the following hypotheses are proposed:

H3store: There is dependency between income and the adoption of download stores.
H3service: There is dependency between income and the adoption of subscription services.

2.3 Involvement

Involvement is a construct with many different conceptualisations and definitions. It is commonly defined as "a person's perceived relevance of the object based on inherent needs, values and interests" (Zaichkowsky, 1985). It is also common to distinguish between two types of involvement: enduring involvement and situational involvement (Houston & Rothschild, 1987). Enduring involvement is the degree of interest a person feels towards an object on an ongoing basis, whereas situational involvement is the degree of interest that relates to some specific situation (Sheth & Mittal, 2004). Of these two types, the present paper focuses primarily on enduring involvement. Whether involvement is enduring or situational, it has been found to influence consumer behaviour in several ways. For example, involved consumers tend to be more active in searching for and processing information about products and services, and thus also tend to become more knowledgeable about them (Sheth & Mittal, 2004). Because of this greater knowledge, involved consumers often act as opinion leaders and lead users for new products and services, and typically adopt them earlier than less involved consumers do (Rogers, 2003). In addition, involved consumers have been found to be more susceptible to new and innovative modes of shopping (Venkatraman, 1989). Thus, the following hypotheses are proposed:

H4store: There is dependency between involvement and the adoption of download stores.
H4service: There is dependency between involvement and the adoption of subscription services.

3. METHODOLOGY

To test the hypotheses proposed in Section 2, a self-administered online survey was conducted among Finnish consumers. A self-administered online survey was selected as the data gathering method because of its cost-effectiveness in gathering the large amount of quantitative data required by the study. The survey questionnaire was composed using the LimeSurvey 1.87+ software and, before the actual survey, was pretested on several postgraduate students and industry experts. The actual survey was launched in June 2010 and was online for three weeks. During this time, the survey link was promoted by sending multiple


ISBN: 978-972-8939-31-1 © 2010 IADIS

invitation e-mails through the internal communication channels of our own university as well as through an electronic mailing list provided by a Finnish retail chain, which contained 5 000 e-mail addresses of their randomly sampled regular customers. In addition, the survey link was posted to two websites promoting online competitions and surveys, as well as to two music-related discussion forums. To raise the response rate, all respondents who completed the survey were offered an opportunity to take part in a prize drawing, in which 41 gift cards worth a total of 1 500 € were raffled. Altogether, the survey questionnaire consisted of 108–112 items (depending on responses). However, only eight of these items were used for the purposes of this paper. These items (translated from Finnish into English) are presented in the Appendix. Gender, age and income were each measured by one item. The measurement scale of gender was nominal (male or female), while age was originally measured using an interval scale but was later categorised into five age groups (under 25 years, 25–34 years, 35–44 years, 45–54 years and 55 years or over). Income, which referred to annual gross income per person, was measured using an ordinal scale. The scale originally consisted of ten income groups, but their number was later reduced to five (under 10 000 €, 10 000–19 999 €, 20 000–29 999 €, 30 000–39 999 € and 40 000 € or over). Consumer involvement was measured using a scale consisting of three statements, each of which was rated by the respondents on a five-point Likert scale ranging from strong disagreement to strong agreement. The statements were adapted from an article by Mittal (1995) and were based on the perceived importance dimension of the Consumer Involvement Profiles (CIP) by Laurent and Kapferer (1985).
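The reliability of a multi-statement scale like this is commonly assessed with Cronbach's alpha. The sketch below computes alpha in pure Python; the response tuples are invented for illustration and are not taken from the actual survey data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of per-respondent tuples, one column per scale item.
    """
    k = len(items[0])  # number of items in the scale

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in items]) for i in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)


# Hypothetical responses to three 5-point Likert statements
responses = [(5, 5, 4), (4, 4, 4), (2, 1, 2), (5, 4, 5), (3, 3, 3)]
alpha = cronbach_alpha(responses)
```

With these toy responses alpha comes out around 0.96; values near the paper's reported 0.950 similarly indicate high internal consistency among the three statements.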
The decision to use only this dimension and to omit the other four dimensions of the CIP (perceived pleasure value, perceived sign value, perceived risk importance and perceived risk probability) was based on the argumentation of Mittal (1989), who considers only the perceived importance dimension to be involvement proper and the other four dimensions to be its antecedents. It is therefore sufficient to measure only this dimension when the interest is in involvement itself and not in its antecedents. Cronbach’s alpha for the scale was 0.950, suggesting good reliability. To simplify the analysis, the scale was later reduced to one ordinal variable consisting of five categories: very low, low, moderate, high and very high involvement. The adoption of download stores was measured by asking the respondents whether or not they had ever purchased music from a download store. Those who had purchased were classified as adopters, whereas those who had not were classified as non-adopters. Of course, the respondents also had the opportunity not to answer the question, in which case their status remained unknown. In contrast, the adoption of subscription services was measured by providing the respondents with a list of seven services (plus the option “Others”) and asking them to tick off the services to which they had subscribed. If they had subscribed to any of the services, they were classified as adopters; otherwise, they were classified as non-adopters. The survey data was analysed using the PASW Statistics 18 software. Because most of the variables were measured using either nominal or ordinal scales and we wanted to explore not only linear but also nonlinear dependencies between them, the analysis was based on contingency tables, Pearson’s χ² tests of independence and Cramér’s V coefficients. The χ² tests were first used to test whether the dependencies suggested by the contingency tables were statistically significant.
If this was the case, the contingency tables and Cramér’s V coefficients were used to further investigate the type and strength of the dependencies.
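The statistics used here are straightforward to reproduce from a contingency table of raw counts. The sketch below computes Pearson's χ², its p-value (for the df = 1 case) and Cramér's V in pure Python; the example counts are reconstructed from the reported gender × download-store adoption rates (28.3 % of 586 men, 23.0 % of 804 women), so they are approximations rather than the original data.

```python
import math


def chi2_and_cramers_v(table):
    """Pearson's chi-square test of independence and Cramér's V for an r x c count table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n  # expected count under independence
            chi2 += (observed - expected) ** 2 / expected
    df = (len(row_totals) - 1) * (len(col_totals) - 1)
    v = math.sqrt(chi2 / (n * min(len(row_totals) - 1, len(col_totals) - 1)))  # Cramér's V
    return chi2, df, v


# Counts reconstructed from the reported percentages (approximate)
table = [[166, 420],   # men: adopters, non-adopters
         [185, 619]]   # women: adopters, non-adopters
chi2, df, v = chi2_and_cramers_v(table)
p = math.erfc(math.sqrt(chi2 / 2))  # chi-square upper-tail probability, valid for df = 1 only
```

With the counts above this reproduces the gender × download-store result reported in Section 4.1: χ²(1) ≈ 5.078, p ≈ 0.024, V ≈ 0.060. For the df = 4 tests a general chi-square survival function (e.g. from a statistics library) would be needed in place of the erfc shortcut.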

4. RESULTS

Altogether, 1 447 complete and valid responses were received. The mean response time for the survey was about 17 minutes, suggesting that the questionnaire was rather long for a self-administered online survey. This was also indicated by the relatively high drop-off rate of 25.9 %. However, we do not consider the response time or the drop-off rate high enough to suggest severe respondent fatigue. Descriptive statistics of the survey sample are presented in Table 1. Of the 1 447 respondents, 42.3 % were men and 57.7 % were women. Their mean age was 36.4 years (SD = 12.7 years); 19.4 % belonged to the age group of under 25 years, 33.1 % to the age group of 25–34 years, 19.2 % to the age group of 35–44 years, 17.8 % to the age group of 45–54 years, and 10.5 % to the age group of 55 years or over. In terms of income, 23.6 % belonged to the income group of under 10 000 €, 16.4 % to the income group of 10 000–19 999 €, 20.4 % to the income group of 20 000–29 999 €, 14.0 % to the income group of 30 000–39 999 €, and 12.6 % to the income group of 40 000 € or over. In addition, 13.0 % of the respondents did not disclose their income information. Overall, the gender, age and income distributions of the sample matched quite well


IADIS International Conference on Internet Technologies & Society 2010

the gender and age distributions of the Finnish Internet population in 2007 as well as the income distribution of all Finnish income recipients in 2008 (Statistics Finland, 2010). Women, the age group of 25–34 years and the income groups of under 10 000 € and 30 000–39 999 € were slightly overrepresented, whereas men, the age group of 55 years or over and the income group of 10 000–19 999 € were underrepresented. However, there were no indications of severe non-response bias in terms of these three variables.

Table 1. Descriptive statistics of the sample

Variable                                      Number   Percentage
Gender
  Male                                           612     42.3 %
  Female                                         835     57.7 %
Age
  –24 years                                      281     19.4 %
  25–34 years                                    479     33.1 %
  35–44 years                                    278     19.2 %
  45–54 years                                    257     17.8 %
  55– years                                      152     10.5 %
Annual gross income per person
  –9 999 €                                       342     23.6 %
  10 000–19 999 €                                237     16.4 %
  20 000–29 999 €                                295     20.4 %
  30 000–39 999 €                                202     14.0 %
  40 000– €                                      183     12.6 %
  Missing                                        188     13.0 %
Music involvement
  Very low                                        43      3.0 %
  Low                                            145     10.0 %
  Moderate                                       299     20.7 %
  High                                           459     31.7 %
  Very high                                      481     33.2 %
  Missing                                         20      1.4 %
Has purchased music from a download store?
  Yes                                            351     24.3 %
  No                                           1 039     71.8 %
  Missing                                         57      3.9 %
Has used a paid music subscription service?
  Yes                                            154     10.6 %
  No                                           1 293     89.4 %

Most of the respondents expressed high levels of involvement in music. Overall, 64.9 % expressed either very high or high involvement, 20.7 % expressed moderate involvement and only 13.0 % expressed either low or very low involvement. However, the adoption rates of download stores and subscription services still remained relatively low. Only 24.3 % had purchased music from a download store, whereas 71.8 % had not. The adoption rate of subscription services was even lower: only 10.6 % had used a paid music subscription service, whereas 89.4 % had not. The dependencies between the explanatory and adoption variables are analysed further in the following four subsections. Note that due to missing data in some of the variables, the number of responses included in each analysis varies slightly according to the analysed dependency.

4.1 Gender and Adoption

Tables 2 and 3 show the adoption rates of download stores and subscription services for men and women as well as the results of the χ² tests. The χ² tests supported the hypothesised dependencies between gender and adoption in the case of both download stores (χ²(1) = 5.078, p = 0.024, V = 0.060) and subscription services (χ²(1) = 18.411, p < 0.001, V = 0.113). Therefore, both H1store and H1service were accepted. However, although the dependencies were statistically significant, the Cramér’s V coefficients suggested that they were relatively weak. In both cases, men seemed to be more apt adopters than women. Overall, 28.3 % of men had purchased music from download stores and 14.7 % had used a paid music subscription service; the corresponding figures for women were 23.0 % and 7.7 %.



Tables 2 and 3. Adoption of download stores (χ²(1) = 5.078, p = 0.024, V = 0.060) and subscription services (χ²(1) = 18.411, p < 0.001, V = 0.113) between men and women

Has adopted download stores?        Male     Female   All
  Yes                               28.3 %   23.0 %   25.3 %
  No                                71.7 %   77.0 %   74.7 %
  N                                 586      804      1 390

Has adopted subscription services?  Male     Female   All
  Yes                               14.7 %    7.7 %   10.6 %
  No                                85.3 %   92.3 %   89.4 %
  N                                 612      835      1 447

Tables 3 and 4. Adoption of download stores (χ²(4) = 66.522, p < 0.001, V = 0.219) and subscription services (χ²(4) = 6.146, p = 0.188, V = 0.065) across age groups

Has adopted download stores?        –24     25–34   35–44   45–54   55–     All
  Yes                               21.0 %  34.3 %  31.0 %  19.8 %   4.0 %  25.3 %
  No                                79.0 %  65.7 %  69.0 %  80.2 %  96.0 %  74.7 %
  N                                 262     452     274     253     149     1 390

Has adopted subscription services?  –24     25–34   35–44   45–54   55–     All
  Yes                                9.3 %  12.5 %  11.5 %  10.5 %   5.9 %  10.6 %
  No                                90.7 %  87.5 %  88.5 %  89.5 %  94.1 %  89.4 %
  N                                 281     479     278     257     152     1 447

Tables 5 and 6. Adoption of download stores (χ²(4) = 22.424, p < 0.001, V = 0.136) and subscription services (χ²(4) = 10.603, p = 0.031, V = 0.092) across income groups

Has adopted download stores?        –9 999 €  10 000–19 999 €  20 000–29 999 €  30 000–39 999 €  40 000– €  All
  Yes                               19.4 %    22.0 %           26.5 %           28.9 %           37.4 %     25.8 %
  No                                80.6 %    78.0 %           73.5 %           71.1 %           62.6 %     74.2 %
  N                                 325       227              287              197              179        1 215

Has adopted subscription services?  –9 999 €  10 000–19 999 €  20 000–29 999 €  30 000–39 999 €  40 000– €  All
  Yes                                7.0 %    12.7 %           11.2 %           11.4 %           15.8 %     11.0 %
  No                                93.0 %    87.3 %           88.8 %           88.6 %           84.2 %     89.0 %
  N                                 342       237              295              202              183        1 259

Tables 7 and 8. Adoption of download stores (χ²(4) = 16.363, p = 0.003, V = 0.109) and subscription services (χ²(4) = 11.697, p = 0.020, V = 0.091) across involvement levels

Has adopted download stores?        Very low  Low     Moderate  High    Very high  All
  Yes                               14.3 %    22.3 %  21.0 %    23.9 %  31.6 %     25.4 %
  No                                85.7 %    77.7 %  79.0 %    76.1 %  68.4 %     74.6 %
  N                                 42        139     290       440     465        1 376

Has adopted subscription services?  Very low  Low     Moderate  High    Very high  All
  Yes                               11.6 %     9.7 %   7.0 %     9.6 %  14.3 %     10.7 %
  No                                88.4 %    90.3 %  93.0 %    90.4 %  85.7 %     89.3 %
  N                                 43        145     299       459     481        1 427

Table 9. Summary of the χ² tests, Cramér’s V coefficients and tested hypotheses (A = accepted, R = rejected)

Dependency                                    N      χ²      df  Asymp. Sig.  Cramér’s V  H          A/R
Gender x download store adoption              1 390   5.078  1   0.024        0.060       H1store    A
Gender x subscription service adoption        1 447  18.411  1   < 0.001      0.113       H1service  A
Age x download store adoption                 1 390  66.522  4   < 0.001      0.219       H2store    A
Age x subscription service adoption           1 447   6.146  4   0.188        0.065       H2service  R
Income x download store adoption              1 215  22.424  4   < 0.001      0.136       H3store    A
Income x subscription service adoption        1 259  10.603  4   0.031        0.092       H3service  A
Involvement x download store adoption         1 376  16.363  4   0.003        0.109       H5store    A
Involvement x subscription service adoption   1 427  11.697  4   0.020        0.091       H5service  A


4.2 Age and Adoption

Tables 3 and 4 show the adoption rates of download stores and subscription services across different age groups and the results of the χ² tests. The χ² tests supported the hypothesised dependency between age and adoption in the case of download stores (χ²(4) = 66.522, p < 0.001, V = 0.219), but not in the case of subscription services (χ²(4) = 6.146, p = 0.188, V = 0.065). Therefore, only H2store was accepted, whereas H2service was rejected. However, also in the case of download stores, the Cramér’s V coefficient suggested that the dependency was relatively weak, although it was considerably stronger than in the case of gender. Moreover, the adoption rate did not seem to increase or decrease linearly with age. Instead, it first increased from 21.0 % to 34.3 % when moving from the age group of under 25 years to the age group of 25–34 years, but then began to decrease at an accelerating pace.

4.3 Income and Adoption

Tables 5 and 6 show the adoption rates of download stores and subscription services across different income groups and the results of the χ² tests. The χ² tests supported the hypothesised dependencies between income and adoption in the case of both download stores (χ²(4) = 22.424, p < 0.001, V = 0.136) and subscription services (χ²(4) = 10.603, p = 0.031, V = 0.092). Therefore, both H3store and H3service were accepted, although the Cramér’s V coefficients once again suggested that the dependencies were relatively weak. In the case of download stores, the adoption rate seemed to increase more or less linearly with income. In the case of subscription services, the dependency seemed to be more nonlinear: the adoption rate increased from 7.0 % to 12.7 % and from 11.4 % to 15.8 % at the two extremes of the income distribution, but there were no significant changes between the three middle income groups.

4.4 Involvement and Adoption

Tables 7 and 8 show the adoption rates of download stores and subscription services for different levels of involvement and the results of the χ² tests. The χ² tests supported the hypothesised dependencies between involvement and adoption in the case of both download stores (χ²(4) = 16.363, p = 0.003, V = 0.109) and subscription services (χ²(4) = 11.697, p = 0.020, V = 0.091). Therefore, both H5store and H5service were accepted. The strength of the dependencies was about the same as in the case of income. For both download stores and subscription services, the adoption rate did not seem to increase or decrease linearly with involvement. In the case of download stores, there were significant changes only at the extremely low and high levels of involvement, but not between the three moderate levels. In the case of subscription services, the adoption rate first decreased from 11.6 % to 7.0 % when moving from a very low to a moderate level of involvement and then increased from 7.0 % to 14.3 % when moving from a moderate to a very high level of involvement.

5. DISCUSSION AND FUTURE RESEARCH

Table 9 summarises the results of the χ² tests, their connections to the tested hypotheses and the Cramér’s V coefficients. As can be seen, all of the hypotheses except H2service were accepted, meaning that statistically significant dependencies were found between the explanatory and adoption variables in all cases except for age and the adoption of subscription services. However, according to the Cramér’s V coefficients, the observed dependencies were all relatively weak. The strongest (V = 0.219) was observed between age and the adoption of download stores, and the weakest (V = 0.060) between gender and the adoption of download stores. Despite their relative weakness, a closer investigation of the dependencies using the contingency tables still revealed some interesting findings. In terms of gender, the most interesting finding was that although men seemed to be more apt adopters of both download stores and subscription services than women, the gender differences were much more evident in the case of subscription services than in the case of download stores. However, when moving from gender to age and income, the situation was reversed. In the case of download stores, a dependency was found



between age and adoption as well as between income and adoption. These dependencies were also very much like those predicted by prior research, although it was somewhat surprising that consumers aged between 25 and 44 years, not the youngest age group, were the most apt adopters of download stores. In the case of subscription services, however, no dependency was found between age and adoption. In addition, the dependency between income and adoption was observable only at the two extremes of the income distribution. In other words, it seems that the diffusion of subscription services has so far occurred rather homogeneously across age and income groups, whereas the diffusion of download stores has been driven by more mature consumers with higher income. Based on the conducted study, it is impossible to provide any conclusive or definitive explanations for these findings. One explanation could relate to the different life stages of the two innovations, because at least in Finland, subscription services are a slightly newer concept than download stores. This could perhaps partly explain the more heterogeneous diffusion of subscription services between genders, but not their more homogeneous diffusion across income groups. In fact, assuming that earlier adopters are typically characterised by higher levels of income and wealth, as discussed in Section 2.2, just the opposite should be true. Another explanation could relate to the interactions of gender and age with other determinants of adoption, such as perceived usefulness and perceived ease of use, which were discussed in Section 2.1. However, these interactions, too, seem to provide only partial explanations at best. For example, they do not fully explain why the dependency between gender and adoption could be observed in the case of both download stores and subscription services while the dependency between age and adoption was observable only in the case of download stores.
Nor do they provide a sufficient explanation for the nonlinear character of the latter dependency. Yet another explanation could relate to substitution effects between other music acquisition channels. For example, if we assume that young men with lower income are the most active users of illegal peer-to-peer (P2P) file sharing and similar illegal music acquisition channels, as suggested in Section 2.1, and that these channels are closer substitutes for download stores than for subscription services, this argumentation can explain both the laggard adoption of download stores among younger consumers with lower income and the less evident gender differences in their adoption. However, confirming this explanation also calls for further examination. In terms of involvement, the findings are more or less in line with prior research. In the case of both download stores and subscription services, those who were most involved in music also seemed to be the most apt adopters. However, the adoption rates did not increase as linearly with the level of involvement as could have been assumed. In the case of download stores, changes in the adoption rate were observed only at extremely low or high involvement, but not at moderate involvement. This would seem to suggest that a scale consisting of only three categories would be sufficient for measuring this construct. In the case of subscription services, the situation was even more exceptional. Those with moderate involvement seemed to be the most laggard adopters, and the adoption rate increased when the level of involvement either decreased or increased. It is difficult to find any consistent explanation for this observation.
Of course, one explanation could relate to the fact that subscription services are particularly appealing to two different music consumer segments: the “heavy users”, who see subscription services as a cost-effective means to meet their huge consumption needs and therefore use them to complement their usage of other music acquisition channels, and the “light users”, whose consumption needs are much more occasional, causing them to be less interested in actually owning the music they use. All in all, the findings suggest that the investigated socioeconomic characteristics and consumer involvement in music have had significant effects on the adoption of both download stores and subscription services in Finland. However, there seem to be some interesting differences in these effects between the stores and services. Both the effects themselves and the explanations behind these differences should be better understood when crafting future business models for digital music retailing, because a good understanding of how and why the diffusion processes have occurred in the past allows the actors in the recorded music industry to better prepare for forthcoming challenges. Therefore, their further examination should be one of the main focuses of future research in this topical area. In addition, the findings confirm that there is great growth potential in digital music retailing. In the case of download stores, the greatest growth potential seems to reside in consumers aged under 25 years or 45 years and over, as well as in consumers with limited income. In the case of subscription services, there seems to be great growth potential in all consumer segments, but especially among female consumers, consumers aged 55 years or over and consumers with limited income. Therefore, future research should also focus on the expedients for reaching these consumer segments. Some exemplary expedients could include easier-to-use stores and services with special pricing schemes and other similar means that take better account of the fundamental needs, wants and expectations of individual consumers, as mentioned by Amberg and Schröder (2007). After all, it seems that in the modern music marketplace, characterised by an abundance of content and alternative channels for acquiring it, the customer is perhaps more a king than ever before.

6. LIMITATIONS

We consider this paper to have three main limitations. First, because a self-administered online survey was employed as the data gathering method, the results cannot be directly generalised to the whole Finnish population, but only to the Finnish Internet population. Second, because the dependencies between the explanatory and adoption variables were investigated only one at a time, it is also impossible to say anything about more complex interactions between the variables. Their examination would require more advanced analysis methods, such as log-linear modelling. Third, like many other diffusion studies, this study conceptualised adoption as a rather simplified construct by classifying the respondents into only two categories, adopters and non-adopters, instead of also considering, for example, their relative degree or time of adoption. This simplification obviously results in a somewhat reduced picture of the phenomenon under investigation. In addition, the adoption threshold used to differentiate between adopters and non-adopters can be considered relatively low, because no continued usage of the two innovations was required. If a higher threshold had been used, the observed adoption rates would probably have been even lower.

REFERENCES

Amberg, M. and Schröder, M., 2007. E-business models and consumer expectations for digital audio distribution. Journal of Enterprise Information Management, Vol. 20, No. 3, pp. 291–303.
Bhattacharjee, S. et al., 2003. Digital Music and Online Sharing: Software Piracy 2.0? Communications of the ACM, Vol. 46, No. 7, pp. 107–111.
Bounagui, M. and Nel, J., 2009. Towards understanding intention to purchase online music downloads. Management Dynamics, Vol. 18, No. 1, pp. 15–26.
Chang, M. K. et al., 2005. Literature derived reference models for the adoption of online shopping. Information & Management, Vol. 42, No. 4, pp. 543–559.
Chiang, E. P. and Assane, D., 2008. Music piracy among students on the university campus: Do males and females react differently? Journal of Socio-Economics, Vol. 37, No. 4, pp. 1371–1380.
Chu, C.-W. and Lu, H.-P., 2007. Factors influencing online music purchase intention in Taiwan: An empirical study based on the value-intention framework. Internet Research, Vol. 17, No. 2, pp. 139–155.
Coyle, J. R. et al., 2009. “To buy or to pirate”: The matrix of music consumers’ acquisition-mode decision-making. Journal of Business Research, Vol. 62, No. 10, pp. 1031–1037.
Houston, M. J. and Rothschild, M. L., 1978. Conceptual and Methodological Perspectives in Involvement. In S. Jain (Ed.), Research Frontiers in Marketing: Dialogues and Directions. American Marketing Association, Chicago, IL, USA, pp. 184–187.
IFPI, 2010. Digital Music Report 2010 (IFPI Digital Music Reports). Retrieved from http://www.ifpi.org/content/library/DMR2010.pdf
Kunze, O. and Mai, L.-W., 2007. Consumer adoption of online music services: The influence of perceived risks and risk-relief strategies. International Journal of Retail & Distribution Management, Vol. 35, No. 11, pp. 862–877.
Kwong, S. W. and Park, J., 2008. Digital music services: consumer intention and adoption. Service Industries Journal, Vol. 28, No. 10, pp. 1463–1481.
Laurent, G. and Kapferer, J.-N., 1985. Measuring Consumer Involvement Profiles. Journal of Marketing Research, Vol. 22, No. 1, pp. 41–53.
Liebowitz, S. J., 2008. Research Note: Testing File Sharing’s Impact on Music Album Sales in Cities. Management Science, Vol. 54, No. 4, pp. 852–859.
Makkonen, M. et al., 2010. The Acquisition and Consumption Behaviour of Modern Recorded Music Consumers: Results from an Interview Study. In S. Krishnamurthy (Ed.), Proceedings of the IADIS International Conference e-Commerce 2010, pp. 36–44.



Mittal, B., 1989. A theoretical analysis of two recent measures of involvement. Advances in Consumer Research, Vol. 16, No. 1, pp. 697–702.
Mittal, B., 1995. A Comparative Analysis of Four Scales of Consumer Involvement. Psychology & Marketing, Vol. 12, No. 7, pp. 663–682.
Morris, M. G. and Venkatesh, V., 2000. Age Differences in Technology Adoption Decisions: Implications for a Changing Workforce. Personnel Psychology, Vol. 53, No. 2, pp. 375–403.
Oberholzer-Gee, F. and Strumpf, K., 2007. The Effect of File Sharing on Record Sales: An Empirical Analysis. Journal of Political Economy, Vol. 115, No. 1, pp. 1–42.
Rogers, E. M., 2003. Diffusion of Innovations (5th ed.). Free Press, New York, NY, USA.
Sheth, J. N. and Mittal, B., 2004. Customer Behavior: A Managerial Perspective (2nd ed.). Thomson South-Western, Mason, OH, USA.
Statistics Finland, 2010. Statistics Finland. Retrieved from http://www.stat.fi
Venkatesh, V. and Morris, M. G., 2000. Why Don’t Men Ever Stop to Ask for Directions? Gender, Social Influence, and Their Role in Technology Acceptance and Usage Behavior. MIS Quarterly, Vol. 24, No. 1, pp. 115–139.
Venkatesh, V. et al., 2000. A Longitudinal Field Investigation of Gender Differences in Individual Technology Adoption Decision Making Processes. Organizational Behavior and Human Decision Processes, Vol. 83, No. 1, pp. 33–60.
Venkatraman, M. P., 1989. Involvement and Risk. Psychology & Marketing, Vol. 6, No. 3, pp. 229–247.
Zaichkowsky, J. L., 1985. Measuring the Involvement Construct. Journal of Consumer Research, Vol. 12, No. 3, pp. 341–352.
Zhou, L. et al., 2007. Online Shopping Acceptance Model – A Critical Survey of Consumer Factors in Online Shopping. Journal of Electronic Commerce Research, Vol. 8, No. 1, pp. 41–62.

APPENDIX

1. Gender: □ Male □ Female

2. Age: ______

3. Annual gross income per person:
□ Under 5 000 € □ 5 000–9 999 € □ 10 000–14 999 € □ 15 000–19 999 € □ 20 000–24 999 € □ 25 000–29 999 € □ 30 000–39 999 € □ 40 000–49 999 € □ 50 000–59 999 € □ Over 59 999 € □ No response

4. What do you think of the following statements (1 = strongly disagree … 5 = strongly agree)?
                                       1  2  3  4  5
Music is very important to me          □  □  □  □  □
I have a strong interest in music      □  □  □  □  □
Music matters a lot to me              □  □  □  □  □

5. Have you ever purchased music from a download store? □ Yes □ No □ No response

6. Have you ever subscribed to any of the following paid music subscription services?
                                        Current subscriber   Former subscriber
DNA Musalaajakaista                     □                    □
Last.fm (paid version)                  □                    □
Nokia Comes With Music                  □                    □
Nokia Music Store streaming service     □                    □
Radio Rock subscription service         □                    □
Sonera Music Player                     □                    □
Spotify (paid version)                  □                    □
Other paid music subscription service   □                    □



THE ROLE OF INFORMATION TECHNOLOGY IN PHARMACEUTICAL SUPPLY CHAINS

Ladislav Kochman, Dr Tomayess Issa and Dr Paul Alexander
Curtin University, Perth, Western Australia

ABSTRACT

This research examines IT in pharmaceutical supply chains from an Australian perspective, particularly to see whether its benefits can be aligned with the general functional IT roles summarised in previously developed frameworks. The research focuses on pharmaceutical distribution businesses, themselves key parts of the health care supply chain. It attempts to analyse and evaluate how IT affects these supply chains and in what ways the use of IT improves the supply chain and supply chain management. The findings are also related to a framework of functional roles that IT facilitates in supply chain management (SCM). A study by Auramo et al. (2005) implied three functional roles of IT in SCM: transaction execution, collaboration and co-ordination, and decision making. Similar roles are examined here. This study confirms the existence of similar roles, though differences in the relationships and emphasis of these roles are observed.

KEYWORDS

Supply Chain, Information Technology, Roles, Pharmaceutical industry

1. INTRODUCTION

There is much academic work on the relationship between IT and supply chains, on measuring the precise and quantifiable benefits that IT brings to organisations, and on how information has played a part in the evolution of supply chain management within business. Researchers have shown that there is a clear link between improvements in supply chains, information sharing, IT and supply chain practice. These improvements include greater efficiency in conducting activities along the supply chain, minimised inventories among supply chain members, improved cycle times and an adequate level of flexibility so that the supply chain can respond to market changes. This study considers the functional role of IT in the distribution sector of the pharmaceutical industry, a component of the health care supply chain. This industry has stringent and non-typical information requirements in some respects, which has led to some conservatism in operations and resulted in a dearth of academic literature relating to it.

1.1 Health Care Supply Chains

The pharmaceutical industry faces repeated challenges and increasing demands to serve communities and society. These challenges affect all participants in the overall health care supply chain (Pedroso & Nakano 2009). Health care supply chains differ significantly from others in several respects: their complexity; their highly customised nature; the reduced cost sensitivity of end customers (health insurance companies and governments pay the major costs); the intermediary role of prescribing doctors, which further buffers end customers from some supply chain impacts; an emphasis on intangible infrastructure such as intellectual property and research, which skews resource emphasis away from physical supply; and, finally, stringent regulatory control of both supply chain design and operations (Pedroso & Nakano 2009).



1.2 Information in Pharmaceutical Supply Chains In much of industry, supply chains’ material and demand information flows and coordination are the cornerstones of their management. With health care supply chain companies there can be a greater challenge. Consumers do not have direct discretion over their choices and doctors must prescribe ethical drugs before the end user has access to them. Technical information flows must therefore be rich, redundant and delivered early to multiple points. So in essence, the industry has a regular channel flow, found in most supply chains, of product and order information, as well as a specialised channel required for those who create the demand. The flow of technical information to the market is essential for end users to be aware of new products and services. Adequate information flows are especially critical in an industry that is either specialised or technology driven and where products or services have specific implications (Pedroso & Nakano 2009). Pharmaceutical companies as key members of health care supply chains, have a special role to play when it comes to flows of demand information as well as flows of technical information. As the pharmaceutical industry is positioned in the upstream part of the overall health care supply chain, management of upstream information must be adequate as this information carries the customer order-driven demand data. The downstream technical information is also critical, as this information is what helps signal the demand through the chain. Downstream technical information also contains certain risks in that supply chain members may not understand the long term (planning) message, or it may be modified due to regulatory pressures (Pedroso & Nakano 2009). Zhang, He & Tan (2008) noted that because they participate in complex health care supply chains, visibility is important for pharmaceutical companies. 
Visibility involves people, processes, technology and information flow; in the supply chain it gives an organisation the ability to collect and analyse data and match it to the organisation's strategies. The key visibility issues in pharmaceutical supply chains include the large number of medicine categories, stringent track-and-trace requirements, a highly regulated environment including cold chain management, complex demand patterns that may be globally distributed (and subject to different regional regulations), strict control of product characteristics, and reverse logistics challenges due to potential drug recalls (Zhang, He & Tan 2008). Information sharing is a particular challenge for this industry. For item-level data sharing to succeed, proprietary information must be protected; maintaining adequate user authentication procedures, access control and data protection is therefore critical (Verisign 2006), though this requirement is eased in modern supply chains because having information available on open networks is already the norm in an e-business environment. Information sharing also reduces the uncertainty of both external and internal environments, which improves performance and reduces total costs through inventory savings from better demand forecasting (Lin, Huong & Lin 2002; Olhager & Selldin 2003). It is important for managers to exploit opportunities for effective supply chain practice as well as effective information sharing: the effectiveness of supply chain practices increases as the level of information sharing increases (Levary 2000; Zhou & Benton 2007).

1.3 The Impacts of IT on Pharmaceutical Supply Chains

Wu et al. (2006) provided insight into how IT can improve supply chain processes, and this applies to pharmaceutical supply chains as well. They found that IT improved supply chain agility, reduced cycle time, achieved higher efficiencies and delivered products to customers on time, in terms of four dimensions of supply chain capability: information exchange; co-ordination of materials, people and capital in transaction-related activities; inter-firm activity integration; and supply chain responsiveness. Supply chain agility, too, can be influenced by IT: Swafford et al. (2008) extended Wu et al.'s (2006) observations by suggesting that supply chain agility could be better achieved through IT integration and flexibility. Other studies found IT capability positively linked to organisational performance (Bharadwaj 2000; Kearns & Lederer 2003; Wamba et al. 2008) and to improved competitive advantage (Earl 1993; Kathuria, Anandarajan & Igbaria 1999). Yet others concluded that while IT is important in supply chains, there is nevertheless a lack of clarity about the specific benefits it delivers (Devaraj, Krajewski & Wei 2007; Hitt & Brynjolfsson 1996; Lee & Barua 1999; Poirier & Quinn 2003; Weill 1992). Auramo et al. (2005) provided five propositions on the use and benefits of IT in supply chains (Table 1).


IADIS International Conference on Internet Technologies & Society 2010

Table 1. Five benefits of IT, after Auramo et al. (2005)

1. Successful companies have developed focused e-business solutions for improving the customer service elements that are most important in their business.
2. Improved efficiency allows company personnel to focus more on critical business activities.
3. The use of e-business solutions improves information quality.
4. E-business solutions support planning collaboration and improved agility of the supply network.
5. When IT is coupled with process re-design, strategic benefits are created.

Dehning et al. (2007) showed a positive relationship between a firm's investments in IT-based SCM systems and firm performance, and concluded that IT-based SCM systems better enable a firm to support external partners (for example, inbound and outbound processes) than internal operational processes such as manufacturing and assembly.

1.4 Frameworks of IT in Supply Chains

The previous sections highlight the somewhat disjointed nature of studies of both information use and IT in supply chains. While they recognised various impacts, they did not attempt to combine those holistically into any kind of framework. Gunasekaran & Ngai (2004), incorporating many of the observations of other authors, created a framework for the development of IT in effective supply chain management. Auramo et al. (2005) examined the functional roles of IT in managing supply chains: IT in supply chain management reduces the friction in transactions between supply chain partners through cost-effective information flow; IT serves a supporting role in the exchange of information for collaboration and coordination of supply chains; and IT can be used as a decision-support tool in the management of the supply chain. These concepts are shown in Figure 1, which suggests the potential for a role-based framework of IT operating in supply chains.

Figure 1. Functional roles of IT in SCM

2. RESEARCH OBJECTIVE, QUESTIONS AND METHOD

The objective of this study was to analyse the use of IT in pharmaceutical distribution businesses, which represent a specific component of the health care supply chain, and to structure the specific functional benefits of the use of IT into a role framework such as that implied by Auramo et al. (2005). The research questions are: (1) Does the role of IT in the pharmaceutical supply chain align with the categories identified by Auramo et al. (2005)? (2) What specific benefits does IT provide in the pharmaceutical supply chain? The first question concerns the benefits of IT in pharmaceutical supply chains; it was addressed with the data collected for research question 2, which deals with IT, supply chains and the specific benefits derived from such functions. The second objective was to compare the findings with the three functional roles of IT (see Figure 1). Two case studies were used as the primary research method. Data were collected from the case studies via interviews with personnel employed in the distribution organisations. Interviews are a useful research tool, allowing the collection of information from individuals through conversation (Kajornboon). Interviews are a suitable method of data collection in a number of situations (Gray 2004): where there is a need to obtain highly personalised data, where opportunities for probing are required, and where a good return rate is required. Structured interviews formed the basis of the data collection and ensured that the same questions were asked of all respondents in the two case studies. The target respondents were two major distributors of pharmaceutical products in Australia, from whom a variety of data were collected through the interviews as well as other sources such as documentation and public-domain web material.

3. RESULTS

3.1 Functional Roles of IT

To create the broadest possible scope for data collection, various areas, or business segments, were observed within the targeted companies. These segments were selected because they form key areas within supply chains where IT has an impact on various functional roles. Table 2 highlights which business segments within the researched companies were targeted in order to capture as much data as possible from the entire supply chain. As it is difficult to ensure that every segment of IT and supply chains is captured during interviews, categorising the segments at least ensured that key areas were focused on. Table 2 consolidates the research findings by mapping the observed functional roles of IT within the companies to the categories of Auramo et al. (2005): transaction execution (1), collaboration and co-ordination (2) and support in decision making (3).

Table 2. Functional roles and likely category observed

Functional role observed                            Category
Strategic network optimisation                      2, 3
Partnerships with 3PL providers                     2, 3
Transportation strategies                           2, 3
Managing suppliers                                  1, 2, 3
Interactions with customers – Point of sale (POS)   1, 2, 3
Interactions with customers – Ordering              1
Support for supply chain operations                 1, 2, 3
Supplier purchasing process                         1
Inventory decisions                                 2, 3
Benchmarking                                        2, 3
Demand planning and forecasting                     3
Outbound operations – warehousing facilities        2, 3
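The classification in Table 2 can be captured as a small data structure, which also makes it easy to tally how often each role was observed across the twelve segments (a minimal convenience sketch, not part of the original study; some segment names are abbreviated):

```python
from collections import Counter

# Table 2 as a mapping from business segment to Auramo et al.'s (2005) roles:
# 1 = transaction execution, 2 = collaboration and co-ordination,
# 3 = support in decision making.
OBSERVED_ROLES = {
    "Strategic network optimisation": {2, 3},
    "Partnerships with 3PL providers": {2, 3},
    "Transportation strategies": {2, 3},
    "Managing suppliers": {1, 2, 3},
    "Interactions with customers - POS": {1, 2, 3},
    "Interactions with customers - Ordering": {1},
    "Support for supply chain operations": {1, 2, 3},
    "Supplier purchasing process": {1},
    "Inventory decisions": {2, 3},
    "Benchmarking": {2, 3},
    "Demand planning and forecasting": {3},
    "Outbound operations - warehousing": {2, 3},
}

# Tally how often each role appears across the segments.
role_counts = Counter(r for roles in OBSERVED_ROLES.values() for r in roles)
print(role_counts)  # role 1 appears in 5 segments, role 2 in 9, role 3 in 10
```

The tally makes the pattern discussed later visible at a glance: collaboration/co-ordination and decision support co-occur in almost every segment, while transaction execution appears less often.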

3.2 Strategic Supply Chain Decisions Enabled by IT

Strategic network optimisation was assisted by tools that allowed points of infrastructure to be plotted against the locations of customers, employees or suppliers. There was therefore collaboration in collecting and consolidating the data to be plotted, and co-ordination in calculating and defining where a head office or distribution centre could be located. Using data in this way allowed a strategic decision to be made about where to locate the company's assets, so both collaboration/co-ordination and decision-making support formed part of network optimisation. In the partnerships with 3PL providers there was also clear collaboration on behalf of both companies in finding the options that made the partnership mutually beneficial.



In addition, the activities of the two parties were co-ordinated, since one had a clear objective in assisting with logistics. By the same mechanisms, it could be argued that there was assistance in route planning, which is certainly support for decision-making processes. The transportation strategy can be viewed in the same frame of mind as the 3PL partnership, as there is again clear co-ordination in planning routes through the use of GPS systems. Again, these IT-enabled systems assist with decision making crucial to optimising delivery services at a strategic level.

3.3 Supplier Management and IT

Managing suppliers was not just a one-way approach for either of the companies studied. There was a significant level of partnership between the strategies of the companies and those of their suppliers. This partnership was also far more advanced than would be expected from a purely transaction-execution alliance, as highlighted by the fact that the two parties also shared information of a more collaborative kind, i.e. promotions and forecasts. This collaboration in turn formed a basis for future strategic decision making. Interactions with customers also provided a rich exchange of information between the companies and their customers. Many customers had a POS system in place that could communicate with the companies' systems. Although this technology is not used by all customers, those with access to these systems can send through orders, view past order histories and use this information in managing their business. In terms of the key aspects of the information exchange between the parties, the simple execution of orders was a major point; however, the customers' ability to access their histories showed collaboration through which future, even strategic, decisions could be made.

3.4 Customer Ordering and IT

The customer ordering process was a clear attempt to make customers' order execution more reliable and efficient. Customers had the option of ordering by phone, via the web or by dial-up modem. Although information could be exchanged that allowed the status of an order to be determined, the main purpose of this functionality was to enable and simplify the order execution process. As with most companies, the companies in this study had their information centres overseen by an integrated ERP system. The fact that ERPs are sophisticated and complicated systems hinted that they perform multiple functions: executing orders, collaborating and co-ordinating with systems both internally and externally, and thereby allowing this information to support operational and strategic decision-making processes. The supplier purchasing process also varied widely in the degree of sophistication the IT systems provided. However, as with the ordering process between the companies and their customers, the clear function of these systems was to assist with order execution. Although it could be argued that the supplier ordering process was much richer than a simple ordering function, its primary purpose was order simplification.

3.5 Inventory Management and IT

The inventory management systems, on the other hand, provided more than a single function. A number of benefits were realised from ensuring adequate monitoring of all stock movements. Efficient inventory management systems deliver several benefits to the companies using them, most notably the ability to improve customer service and satisfaction and to reduce both out-of-stocks and overstocks. Inventory management systems can also reduce inventory carrying costs, improve working capital performance and balance supply with demand efficiently. These benefits further supplement improvements to the overall ordering process and to inventory planning.
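As an illustration of the supply/demand balancing that such systems perform, a standard textbook reorder-point calculation with safety stock can be sketched as follows (an illustrative formula only; the paper does not describe the studied companies' actual inventory rules):

```python
import math

# Illustrative reorder-point calculation with safety stock, as commonly
# embedded in inventory management systems (textbook formula; parameter
# values below are hypothetical).
def reorder_point(daily_demand, lead_time_days, demand_std_dev, z=1.65):
    """Units on hand at which a replenishment order should be triggered.

    z=1.65 corresponds to roughly a 95% service level, i.e. about a 5%
    risk of a stock-out during the replenishment lead time.
    """
    safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# A line selling 40 units/day, 7-day lead time, demand std dev 10 units/day:
print(round(reorder_point(40, 7, 10)))  # 324
```

Raising `z` trades higher carrying cost for fewer stock-outs, which is exactly the balance the studied companies' systems are described as managing.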



3.6 Benchmarking and IT

Benchmarking was not, in itself, a particularly IT-based activity; however, the information extracted from business systems was obtained with the assistance of IT-enabled technology. For example, one part of benchmarking involved assessing the business from customer feedback, which was recorded (either manually or electronically) for future use; that information was then used to improve service delivery. More importantly, internal benchmarking could be achieved with the visibility that IT-enabled systems provided throughout the organisation. One such example involved an internal benchmark focused on making sure that most products the company promised to sell were actually on the shelf; furthermore, any stock-outs were monitored until availability was assured. For these reasons, benchmarking involved collaboration between various IT systems, the output of which could then support decision making about purchasing and inventory strategies within the organisation.

3.7 Demand Forecasting and IT

Demand planning and forecasting, on the other hand, can be viewed mostly from a purely strategic point of view. Although collaboration between systems within the companies is undertaken, the primary role of forecasting is to make the organisation far more responsive to both uncertain and certain events. So, for the purposes of this research, forecasting and demand planning is viewed mostly as a decision-support tool.
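How a demand-planning system turns noisy order history into a forward signal can be sketched with simple exponential smoothing (the paper does not say which forecasting method the studied companies use; the weekly figures below are hypothetical):

```python
# Simple exponential smoothing: each new observation pulls the forecast
# towards it by a fraction alpha, damping week-to-week noise.
def exponential_smoothing(history, alpha=0.3):
    """Return the one-step-ahead forecast after processing `history`."""
    forecast = history[0]  # seed with the first observation
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

weekly_orders = [120, 135, 128, 150, 160]  # hypothetical order history
print(round(exponential_smoothing(weekly_orders), 1))  # 141.0
```

A larger `alpha` makes the forecast react faster to trend changes at the cost of passing more noise through, mirroring the responsiveness trade-off described above.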

3.8 Warehouse Management and IT

The warehouse management systems were similar to the inventory systems in place, as the two were somewhat interchangeable and at least formed part of one another. Co-ordination and collaboration between the systems arose from the fact that there is, at least internally, an exchange of information between the various systems within the organisation. In addition, the picking systems in place at both companies, whether employing manual bar-coded picking or voice picking, allowed tasks to be co-ordinated with the warehouse management system. This system then kept tabs on stock, orders and order locations relevant to both inventory and ordering processes. Furthermore, this information could provide a basis for future decisions regarding warehouse layout and utilisation. The warehouse management systems therefore allowed clear internal co-ordination and collaboration, with the resulting information then feeding into decision-making processes. The respondents highlighted how IT assistance could provide benefits that improved the overall value of the functional areas.

4. DISCUSSION

Aligning the results to the functional roles suggested by the literature, the analysis indicated the relevance of this study particularly to the framework implied by Auramo et al. (2005). Most of the functional roles observed assisted with collaborating and/or co-ordinating internal and external operations and with supporting the decision-making process from both an operational and a strategic point of view. A smaller proportion of the functional roles assisted with transactional processes. The transactional function allowed for improved efficiency between a company and its partners and in turn improved overall customer service. Together the three functional roles provided enhanced visibility in the supply chain, and the derived benefits could be used to improve the performance of all partners in the chain. While Auramo et al.'s (2005) view was supported overall, this study suggests that the classification of the different roles of IT in supply chains could be altered: when the purpose of IT was tilted towards collaboration and co-ordination then often, if not always, support in decision making was also a feature (Figure 2).



Figure 2. IT roles as observed with the researched companies (the functional roles of IT on the supply chain: transaction execution stands alone, while collaboration and co-ordination is linked directly to support in decision making)

The figure highlights how the original framework can be altered to signify a stronger link between collaboration and co-ordination and support for decision making, rather than treating the latter as a completely separate arm. This could in some cases be expected, since when collaborating or co-ordinating internally or externally with business units or partners, the information that can be extracted from such functionality can then be used to assist with operational or strategic decision making.

5. CONCLUSION

The research found that the functional roles of IT in supply chains fall into three main categories under which most functional roles in organisations can be classed. In every case where IT was used to manage the supply chain, the functional roles fell under the classifications of transaction execution, collaboration and co-ordination, and support in decision making. Any attempt to use IT to assist in better managing supply chains therefore exhibited a functional role classified into one of these three roles. In terms of transaction execution, companies using IT in this role noted reduced friction in transactions between supply chain partners through cost-effective information flow. IT also serves a supporting role in the exchange of information for collaboration and coordination in supply chains. This is an important function for organisations, as information exchange creates situations in which companies can work co-operatively with their partners, share compatible objectives, jointly invest resources including time, material and expertise, and create situations of mutual benefit for all participants. This highlights how enabling IT to assist in collaboration with supply chain partners is a vital functional role in better managing an organisation's supply chains, especially in maximising corporate goals and resources. The final functional role, support for decision making, is also an important role that IT plays in better managing supply chains. IT solutions that support decision making enable complex analytical tasks that can then be used to improve the company's position operationally and strategically.

This allows the organisation to better position itself and optimises the use of current information for added competitive advantage. Furthermore, organisations should particularly consider segments of the business where IT can assist with collaboration and co-ordination between internal and external parties. Focusing on improvement in these roles also gives added weight to the information that management can then use in decision making for the short- and long-term benefit of the company. For organisations seeking to strengthen their supply chains with IT, the three functional areas recommended form the main focal points where significant improvements to the business can be made. IT allows businesses to learn about and respond to market changes more quickly than their competitors and to co-ordinate supply chain activities effectively. Concentrating improvement efforts on these areas of the business can therefore yield significant benefits.



REFERENCES

Auramo, J, Kauremaa, J & Tanskanen, K 2005, 'Benefits of IT in supply chain management: an exploratory study of progressive companies', International Journal of Physical Distribution and Logistics Management, vol. 35, no. 2, pp. 82-100.
Bharadwaj, AS 2000, 'A resource-based perspective on information technology capability and firm performance: An empirical investigation', MIS Quarterly, vol. 24, no. 1, pp. 169-96.
Dehning, B, Richardson, VJ & Zmud, RW 2007, 'The financial performance effects of IT-based supply chain management systems in manufacturing firms', Journal of Operations Management, vol. 25, pp. 806-24.
Devaraj, S, Krajewski, L & Wei, JC 2007, 'Impact of ebusiness technologies on operational performance: The role of production information integration in the supply chain', Journal of Operations Management, vol. 25, pp. 1199-216.
Earl, MJ 1993, 'Experiences in strategic information systems planning: Editor's comments', MIS Quarterly, vol. 17, no. 1, p. 5.
Gray, DE 2004, Doing research in the real world, Sage Publications, London.
Gunasekaran, A & Ngai, EWT 2004, 'Information systems in supply chain integration and management', European Journal of Operational Research, vol. 159, pp. 269-95.
Hitt, LM & Brynjolfsson, E 1996, 'Productivity, business profitability, and consumer surplus: Three different measures of information technology value', MIS Quarterly, vol. 20, no. 2, pp. 121-42.
Kajornboon, AB, Using interviews as research instruments. Retrieved November 5, from http://www.culi.chula.ac.th/eJournal/bod/Annabel.pdf
Kathuria, R, Anandarajan, M & Igbaria, M 1999, 'Linking IT applications with manufacturing strategy: An intelligent decision support system approach', Decision Sciences, vol. 30, no. 4, pp. 959-92.
Kearns, GS & Lederer, AL 2003, 'A resource-based view of strategic IT alignment: How knowledge sharing creates competitive advantage', Decision Sciences, vol. 34, no. 1, pp. 1-29.
Lee, B & Barua, A 1999, 'An integrated assessment of productivity and efficiency impacts of information technology investments: Old data, new analysis and evidence', Journal of Productivity Analysis, vol. 12, no. 1, pp. 21-43.
Levary, RR 2000, 'Better supply chains through information technology', Industrial Management, vol. 42, no. 3, pp. 24-31.
Lin, F, Huong, SH & Lin, SC 2002, 'Effects of information sharing on supply chain performance in electronic commerce', IEEE Transactions on Engineering Management, vol. 49, no. 3, pp. 258-68.
Olhager, J & Selldin, E 2003, 'Supply chain management survey of Swedish manufacturing firms', International Journal of Production Economics, vol. 89, pp. 353-61.
Pedroso, MC & Nakano, D 2009, 'Knowledge and information flows in supply chains: A study on pharmaceutical companies', International Journal of Production Economics, vol. 122, pp. 376-84.
Poirier, CC & Quinn, FJ 2003, 'A survey of supply chain progress', Supply Chain Management Review, vol. 7, no. 5, pp. 40-7.
Swafford, P, Ghosh, S & Murthy, N 2008, 'Achieving supply chain agility through IT integration and flexibility', International Journal of Production Economics, vol. 116, pp. 288-97.
Verisign 2006, An industry information framework for the pharmaceutical supply chain, Verisign. Retrieved Oct 1st, 2009, from http://www.verisign.com/static/040033.pdf
Wamba, SF, Lefebvre, LA, Bendavid, Y & Lefebvre, E 2008, 'Exploring the impact of RFID technology and the EPC network on mobile B2B eCommerce: A case study in the retail industry', International Journal of Production Economics, vol. 112, no. 2, pp. 614-29.
Weill, P 1992, 'The relationship between investment in information technology and firm performance: A study of the valve manufacturing sector', Information Systems Research, vol. 3, no. 4, pp. 307-33.
Wu, F, Yeniyurt, S, Kim, D & Cavusgil, ST 2006, 'The impact of information technology on supply chain capabilities and firm performance: A resource based view', Industrial Marketing Management, vol. 35, pp. 493-504.
Zhang, N, He, W & Tan, PS 2008, 'Understanding local pharmaceutical supply chain visibility', SIMTech Technical Reports, vol. 9, no. 4, pp. 234-9.
Zhou, H & Benton, W 2007, 'Supply chain practice and information sharing', Journal of Operations Management, vol. 25, pp. 1348-65.



CONTRAIL: CONTENTS DELIVERY SYSTEM BASED ON A NETWORK TOPOLOGY AWARE PROBABILISTIC REPLICATION MECHANISM

Yoshiaki Sakae, Masumi Ichien, Yasuo Itabashi, Takayuki Shizuno and Toshiya Okabe
System Platforms Research Laboratories, NEC Corporation - 1753, Shimonumabe, Nakahara-ku, Kawasaki, Kanagawa, Japan

ABSTRACT

There are several difficulties in developing a highly efficient contents delivery system: where to deploy contents or replicas, real-time event handling, and selecting optimal network paths in terms of QoS and cost. We tackle these issues by combining a probabilistic contents/replica deployment mechanism, a real-time event notification system and OpenFlow technology. In this paper, we describe in particular the performance evaluation, by simulation, of our probabilistic replication technique, which is a core component of ConTrail. The results show that our approach can reduce the average latency of contents acquisition more than existing approaches. Moreover, we describe how to achieve QoS and save network costs by choosing suitable communication flows with OpenFlow.

KEYWORDS

CDN, Replica Management, Replica Deployment, Probabilistic Replication, OpenFlow

1. INTRODUCTION

In addition to traditional contents created by professional content providers, CGM (Consumer Generated Media, also known as UGC, User-Generated Content, or UCC, User-Created Content) produced by non-professional end-users is increasing rapidly due to technological evolution, such as the spread of video and audio recording devices, the spread of smart phones, a lowering of the barriers to publicising contents, and so on (Wunsch-Vincent & Vickery 2007). The diversity of contents is increasing as well. To accommodate these circumstances, the importance of an efficient information delivery platform has been increasing. A new generation of services, such as services based on geo-location information supported by GPS, services categorised as SNS (Social Networking Services), and information sharing services, is emerging one after another as a significant trend widening the user experience. Further diversification and growth of new types of services is expected, so there will be ever more need for information delivery platforms. In this paper, we cover the contents delivery system, which could be a basic substrate for advanced services. A contents delivery system for the CGM era has to cope with these issues:

- The difficulty of contents deployment (planning) caused by ubiquitous contents production and consumption. A content's usage pattern may differ geographically because its demand may vary with local trends; for instance, it sometimes reveals a "local production for local consumption" type of usage pattern.
- It is hard to afford the space for all contents in a single storage system, so a mechanism is needed which utilises several storage systems cooperatively to handle huge amounts of contents.
- Quite frequent contents creation and update: a high-performance event notification system is necessary.
- QoS (Quality of Service) and network cost: it is important for contents delivery platforms to consider QoS and network cost by nature, because a contents delivery service inherently tends to occupy network bandwidth for its primary purpose.



We propose a new contents delivery system (ConTrail) to overcome the above issues, with the following features: (1) USC, a distributed file system which stores contents over a storage cluster in a Data Center (DC); (2) NUSC, which forwards and deploys contents among USCs and provides a file system interface for applications; (3) RENS, a real-time event notification system which provides name resolution and propagates events arising from contents modifications to participants; and (4) OpenFlow, which controls communication flows with programmable rules.

2. CONTRAIL (CONTENTS TRAILING)

2.1 Overview of ConTrail

ConTrail consists of the following building blocks and is typically structured as depicted in Figure 1.

- USC is a reliable distributed file system running on a storage cluster in a DC (Data Center). When the USC stores a content item, it divides the content into chunks of data and replicates them to physically different storage nodes for read performance and dependability. We assume the total size of the contents is so big that one USC alone cannot have enough capacity for them.
- NUSC manages the USCs running on DCs and organizes a tree-like overlay network according to the size of each DC. It provides a VFS interface for applications such as a media server. When a media server cannot find a specific content item in the local USC, NUSC searches for and forwards the content from one USC to another along the overlay network paths, and probabilistically makes and deploys replicas of the content at USCs on the way to the destination.
- Media Server (not shown in Figure 1) is a streaming server running on each DC. It reads contents from NUSC and sends streaming media to end-user clients.
- RENS is an event notification system. It immediately notifies only selected users of changes in the state (events) of contents through overlay trees whose topology changes dynamically in response to the state of the contents (Shizuno et al. 2009). RENS is used for location management of contents, notification of content states, and notification of NUSC information. The topology of the RENS overlay network differs from that of NUSC.
- OpenFlow makes switches and routers programmable by providing a standardized programming interface (McKeown et al. 2008). NEC is a core member of the OpenFlow consortium and has been developing OpenFlow switches and controllers.
We utilize OpenFlow to separate two different communication flows, namely the content-forwarding flow of NUSC and the event messages of RENS, in order to achieve QoS and save on network costs.

Figure 1. General structure of ConTrail. (Seven DCs, each running a USC and a RENS instance, form a three-tier tree: Tier 1 is a megalopolis-class DC with large storage and network capacity, Tier 2 comprises provincial-city-class DCs with medium capacity, and Tier 3 small-town-class DCs with small capacity. Example replication probabilities for the three tiers are 90%, 50% and 20%.)
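The USC's chunk-and-replicate behaviour described above can be sketched as follows. This is a minimal illustration: the chunk size, node names and replication factor are assumptions for the demo, and a real USC would use far larger chunks and choose physically separate nodes.

```python
import itertools

CHUNK_SIZE = 4  # bytes per chunk; real systems use much larger chunks (assumption)
REPLICAS = 2    # copies per chunk, on distinct nodes (assumption)
NODES = ["node-0", "node-1", "node-2", "node-3"]

def store(content: bytes):
    """Split a content item into chunks and place each chunk on REPLICAS
    distinct storage nodes (round-robin start, neighbours as replicas)."""
    chunks = [content[i:i + CHUNK_SIZE] for i in range(0, len(content), CHUNK_SIZE)]
    starts = itertools.cycle(range(len(NODES)))
    placement = {}
    for idx, chunk in enumerate(chunks):
        start = next(starts)
        targets = [NODES[(start + r) % len(NODES)] for r in range(REPLICAS)]
        placement[idx] = (chunk, targets)
    return placement

layout = store(b"ABCDEFGHIJ")  # three chunks, each on two distinct nodes
```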


IADIS International Conference on Internet Technologies & Society 2010

2.2 Operation of ConTrail

This section describes the behavior of ConTrail. When ConTrail is structured as depicted in Figure 1 and an end-user connected to DC6 requests content A, which is stored only on DC2, ConTrail operates as follows.
1. The end-user requests content A from the media server running on DC6.
2. The media server accesses the NUSC running on DC6 through the VFS interface.
3. The NUSC fails to find content A in the USC running on DC6 and asks RENS for the location of content A. If OpenFlow has configured the communication flows correctly, the query message to RENS will not be delayed by the content-forwarding traffic of NUSC, because the two flows are separated.
4. RENS resolves the location of content A; in this example, DC2 holds it.
5. The NUSC running on DC6 requests content A from the NUSC running on DC2.
6. The NUSC forwards content A from DC2 to DC6 through DC1 and DC3.
7. Each NUSC on the path to DC6 (on DC1, DC3 and DC6) probabilistically creates and deploys a replica of content A. If an NUSC decides to store a replica, it registers itself with RENS as a replica node; thereafter, it is one of the content-holding nodes for content A.
8. The media server running on DC6 starts sending content A to the end-user while the NUSC on DC6 is still receiving it from DC3.
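The lookup-and-forward flow above can be sketched in a few lines. The DC names and per-tier replication probabilities mirror Figure 1; the tree links, the in-memory stand-ins for RENS and the USCs, and the helper names are our own assumptions for illustration.

```python
import random

# Tree topology of Figure 1 (child -> parent) and per-DC replication probability
PARENT = {"DC2": "DC1", "DC3": "DC1", "DC4": "DC2",
          "DC5": "DC2", "DC6": "DC3", "DC7": "DC3"}
REPL_PROB = {"DC1": 0.9, "DC2": 0.5, "DC3": 0.5,
             "DC4": 0.2, "DC5": 0.2, "DC6": 0.2, "DC7": 0.2}

rens = {"content-A": {"DC2"}}                 # RENS: content -> holder DCs
usc = {dc: set() for dc in REPL_PROB}         # per-DC local content
usc["DC2"].add("content-A")

def path_to_root(dc):
    out = [dc]
    while dc in PARENT:
        dc = PARENT[dc]
        out.append(dc)
    return out

def forwarding_path(src, dst):
    """Route along the tree: up from src to the common ancestor, down to dst."""
    up, down = path_to_root(src), path_to_root(dst)
    common = next(d for d in up if d in down)
    return up[:up.index(common) + 1] + list(reversed(down[:down.index(common)]))

def fetch(content, requester):
    if content in usc[requester]:
        return []                             # local hit: no transfer needed
    holder = next(iter(rens[content]))        # step 4: RENS resolves a location
    path = forwarding_path(holder, requester)
    for dc in path[1:]:                       # steps 6-7: forward, maybe replicate
        if random.random() < REPL_PROB[dc]:
            usc[dc].add(content)
            rens[content].add(dc)             # register as a new holder with RENS
    return path

path = fetch("content-A", "DC6")              # follows DC2 -> DC1 -> DC3 -> DC6
```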

2.3 Merits of ConTrail

We summarize the improvements ConTrail provides, and their grounds, as follows.
- ConTrail can provide enough storage capacity for a large number and volume of content items, and can handle high-frequency events efficiently.
  - NUSC stores content across DCs in a distributed manner and, by combining every USC running on the DCs using RENS, presents a single file-system image to applications.
  - RENS supports ConTrail with location management of content and fast propagation of events.
  - OpenFlow enables a network route to be selected for each communication flow (the NUSC flow and the RENS flow), taking account of the tradeoff between performance and cost (e.g., a prioritized network with low latency but high cost versus a best-effort network).
- ConTrail can efficiently deploy content to appropriate DCs at low cost through NUSC.
  - NUSC places content with high probability at the DCs where it is heavily accessed. As a result, inter-DC transit of content is reduced, which is expected to lower network cost. This characteristic of network usage is also compatible with the "local production for local consumption" style of information usage.
  - It is not necessary to configure a replication probability for each DC; it is enough to configure probabilities for the tiers of the tree-like overlay network. This gives NUSC notable properties for content deployment: low overhead and cost, easy adjustment of replication probabilities, and easy accommodation of changes in the number of DCs.
  - NUSC combines probabilistic replica placement with the LRU (Least Recently Used) algorithm for deleting outdated replicas. Content that was once popular but no longer is will be deleted automatically under this strategy. NUSC can therefore follow time-series changes in content popularity and adjust content deployment accordingly.
- ConTrail can save network cost.
  - As mentioned above, OpenFlow can assign each communication flow to a network route appropriate to its performance characteristics and cost.
  - OpenFlow can also manage the communication route between DCs. For example, if OpenFlow can select a "peering route", which usually costs nothing, instead of a "transit route", network cost can be reduced dramatically.
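The combination of probabilistic replica placement with LRU deletion can be sketched as an LRU cache with a probabilistic admission filter in front of it. Class and parameter names here are our own; the behaviour illustrated is the one described above.

```python
import random
from collections import OrderedDict

class FilteredLRU:
    """LRU replica store with a probabilistic admission filter in front:
    replicate with probability admit_prob, evict the least recently used."""
    def __init__(self, capacity, admit_prob, rng=random.random):
        self.capacity, self.admit_prob, self.rng = capacity, admit_prob, rng
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # refresh recency on a hit
            return True
        return False

    def maybe_admit(self, key, value=True):
        if self.rng() >= self.admit_prob:
            return                          # filter: most one-off items never enter
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the LRU replica

lru = FilteredLRU(capacity=2, admit_prob=1.0)
for k in "abca":                            # 'a' comes back after eviction
    if not lru.get(k):
        lru.maybe_admit(k)
```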


ISBN: 978-972-8939-31-1 © 2010 IADIS

3. EVALUATION

In this paper, we evaluate only the performance and effectiveness of the probabilistic replica-deployment mechanism of NUSC, compared with existing methods.

3.1 Related Approaches

Content deployment approaches in existing systems are categorized into the following four (Pathan & Buyya 2007)(Kangasharju et al. 2002). We implemented our approach and these four approaches as simple simulations and evaluated them.
- Non-cooperative pull: client requests are directed (using DNS redirection) to the media server running at their closest DC. On a cache miss, the DC pulls the content from the origin server. Most popular CDN providers (e.g., Akamai, Mirror Image) use this approach.
- Cooperative pull: differs from the non-cooperative approach in that the DCs cooperate with each other to obtain the requested content on a cache miss. Using a distributed index, the DCs find nearby copies of the requested content and store them in the cache.
- Random push: content is assigned to DCs at random, subject to the storage constraints.
- Popularity push: each DC stores the most popular content items, to the extent its storage constraint allows. This approach assumes the popularity order of the content is known in advance.
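The two push-based baselines can be stated in a few lines. The content IDs sorted by popularity (0 = most popular), the capacity value, and the DC names are assumptions for illustration.

```python
import random

N_CONTENTS, CAPACITY = 10, 3
by_popularity = list(range(N_CONTENTS))  # assumed known in advance for popularity push

def popularity_push(dcs):
    # every DC stores the most popular items its storage constraint allows
    return {dc: set(by_popularity[:CAPACITY]) for dc in dcs}

def random_push(dcs, seed=0):
    # items assigned to each DC at random, subject to the same storage constraint
    rng = random.Random(seed)
    return {dc: set(rng.sample(by_popularity, CAPACITY)) for dc in dcs}

pop_layout = popularity_push(["DC1", "DC2"])
rand_layout = random_push(["DC1", "DC2"])
```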

3.2 Evaluation Procedure

To measure the effectiveness of the content/replica placement approaches described above, we chose "content transfer time", the sum of the times for steps 5, 6 and 7 of ConTrail's operation described in Section 2.2, as the metric. We configured our simulation to place DCs in the major cities of Japan and to form the tree-like overlay topology with the network latencies shown in Figure 2. The other simulation conditions are as follows: the storage capacity at each DC is uniform, the number of end-user accesses to each DC is uniform, the number of content items is 1000, and content selection follows Zipf's distribution (Figure 2).

Figure 2. Evaluation conditions. (A tree of USCs with inter-DC latencies between 0.25 ms and 4 ms; content popularity follows Zipf's law, evaluated with both a big and a small bias in popularity.)

We evaluated each content placement approach in the following steps.
1. Initially place content on the DCs at random. For the push-based approaches (random push and popularity push), the placement of content and replicas is complete at this point.
2. At each DC, repeatedly acquire content for a sufficiently long time using the content placement approach under evaluation.
3. Sum up the content transfer times.
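The simulated workload can be sketched as follows: 1000 content items selected according to Zipf's law. The exponent values used here for "big" and "small" popularity bias are assumptions, since the paper does not state them.

```python
import random
from collections import Counter

N_CONTENTS = 1000

def zipf_weights(n, s):
    # probability of rank r is proportional to 1 / r^s
    return [1.0 / (rank ** s) for rank in range(1, n + 1)]

def draw_requests(n_requests, s, seed=42):
    """Sample content IDs (1 = most popular) with Zipf exponent s."""
    rng = random.Random(seed)
    ids = list(range(1, N_CONTENTS + 1))
    return rng.choices(ids, weights=zipf_weights(N_CONTENTS, s), k=n_requests)

big = Counter(draw_requests(10_000, s=1.2))    # big bias in popularity
small = Counter(draw_requests(10_000, s=0.6))  # small bias in popularity
```

With the larger exponent, requests concentrate on the top-ranked items, which is why a replica cache pays off in the big-bias case.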



3.3 Evaluation Results

Figures 3 and 4 show the impact of the replication probability on the content transfer time for our proposed approach (NUSC). "proposal(100)", "proposal(50)" and "proposal(10)" denote our approach with replication probabilities of 100%, 50% and 10% for all tiers, respectively. The vertical axis (average delay) shows the average of all content transfer times that occurred in a simulation. The horizontal axis (cache size ratio) shows the storage capacity available for replicas as a percentage of the total size of all content. A cache size ratio of 0% therefore means that content requests are served directly by the DCs holding the originals and no replicas are created. There is no constraint on the number of replicas per content item, so a cache size ratio of 100% does not mean that every item has a replica; popular items will have several. The results show that (1) when the cache size ratio is relatively low (i.e., with fewer replicas), requests tend to be directed to distant original copies, so the average delay increases; (2) when content is accessed relatively evenly (Figure 3, the small-popularity-bias case of Figure 2), the cache size ratio has little impact on the average delay, because a high cache hit ratio cannot be expected in the first place; and (3) when a small set of content items is heavily accessed (Figure 4, the big-popularity-bias case of Figure 2), the cache size ratio has a clear impact on the average delay, producing the curve depicted in Figure 4, because a high cache hit ratio can be expected.
Furthermore, we can draw an additional conclusion from the results. To cope with the undesirable phenomenon in which indiscriminate replica creation causes popular content to be evicted mistakenly in favor of unpopular content, we expect that the probabilistic replication technique of NUSC can serve as a filter installed in front of LRU, because:
- a smaller replication probability tends to perform well from the viewpoint of average delay;
- this trend is especially marked when the bias in popularity is low (Figure 3); and
- each USC independently manages its replicas with the LRU algorithm, as mentioned above.


Figure 3. The impact of replication probability on the content transfer time for our proposed approach with small popularity bias.




Figure 4. The impact of replication probability on the content transfer time for our proposed approach with big popularity bias.

Figures 5 and 6 compare the performance of our proposed approach with that of the existing approaches described in Section 3.1. We evaluate our approach only with a replication probability of 10% ("proposal(10)"), because it showed the best performance among 100%, 50% and 10% in the preceding evaluation. "nonco.pull" denotes the non-cooperative pull approach, "co.pull" the cooperative pull approach, and "co.push(rand.)" and "co.push(pop.)" the random push and popularity push approaches, respectively. The results show that (1) co.push(rand.) performs poorly, as expected, especially when the bias in popularity is high, because random placement implicitly assumes that requests are spread uniformly over all content items; and (2) nonco.pull does not perform well because requests must be directed to the original copies, which involves long-distance content transfers. In addition, we emphasize the following points.
- Although co.push(pop.) generally shows the best performance, it requires the popularity of all content to be known in advance.
- Our approach shows performance comparable to co.push(pop.) without needing to know content popularity in advance. It can therefore follow changes in demand for content while remaining highly efficient.


Figure 5. The performance of our approach and others’ with small popularity bias.




Figure 6. The performance of our approach and others’ with big popularity bias.

4. DISCUSSION AND FUTURE WORK

The content and replica deployment approach we proposed as part of NUSC can reduce the response time for receiving content and reduce the effort of planning content placement. It not only follows changes in demand for content adequately but also accommodates localized content well. However, as the evaluation in Section 3.3 suggests, there is room to tune the replication probability of each tier according to the run-time environment. The ease of tuning replication probabilities may therefore be a key factor in bringing ConTrail into production use. We have devised a solution to this issue and applied for a patent. Broadly speaking, the solution is based on A/B testing. ConTrail is configured to maintain multiple states in which each NUSC instance has a different set of replication-probability parameters. When ConTrail receives a content request from an end-user, it samples an NUSC instance and assigns the request to it. ConTrail records performance characteristics, such as response time, cache/replica hit ratio, and network traffic, and gradually adjusts the sampling rate of each NUSC instance. The key design points are how resources are shared between the NUSC instances and how the sampling rates are adjusted. We would like to implement this mechanism for practical use.
We also foresee that content which is not accessed frequently but stays popular over a long period tends to have difficulty keeping replicas in the USCs: because the probabilistic replication technique of NUSC functions as a filter that prevents aggressive replication, there is little chance for such content to be replicated. To cope with this issue, we are investigating a "life span propagation technique" that compensates for a drop in the number of replicas of certain content in a certain area. The life span propagation technique selectively propagates information about replica deletions to neighboring USCs so as to extend the life span of the remaining replicas.
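The patent-pending mechanism is only outlined in the text; as a loose illustration of the A/B-testing idea, the sampling-rate adjustment could resemble the following. All names, the weight-update rule, and the delay-smoothing constant are our own assumptions, not the authors' design.

```python
import random

class VariantRouter:
    """Illustrative only: route requests across NUSC instances that carry
    different replication-probability sets, and gradually shift the sampling
    rate toward the variant with the lowest observed delay."""
    def __init__(self, variants, step=0.05):
        self.weights = {v: 1.0 / len(variants) for v in variants}
        self.avg_delay = {v: None for v in variants}
        self.step = step

    def pick(self, rng=random.random):
        # sample an NUSC instance according to the current weights
        r, acc = rng(), 0.0
        for v, w in self.weights.items():
            acc += w
            if r <= acc:
                return v
        return v

    def report(self, variant, delay):
        # exponentially smoothed delay per variant
        prev = self.avg_delay[variant]
        self.avg_delay[variant] = delay if prev is None else 0.9 * prev + 0.1 * delay
        known = {v: d for v, d in self.avg_delay.items() if d is not None}
        if len(known) < 2:
            return
        best = min(known, key=known.get)
        for v in self.weights:              # move weight toward the best variant
            target = 1.0 if v == best else 0.0
            self.weights[v] += self.step * (target - self.weights[v])

router = VariantRouter(["p=10%", "p=50%"])
for _ in range(50):                         # the 10% variant is consistently faster
    router.report("p=10%", delay=1.0)
    router.report("p=50%", delay=2.0)
```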

5. SUMMARY

We are developing a prototype of ConTrail, which consists of USC, NUSC, RENS and OpenFlow. In this paper, we described the structure of ConTrail and the performance of NUSC, which is the main contribution of ConTrail. NUSC manages the USCs running on the DCs and efficiently handles content deployment with a probabilistic replication mechanism. Although the evaluation is not comprehensive, the simulation results show that our approach reduces the average latency of content acquisition more than the existing approaches. We also showed that ConTrail can achieve efficiency and low cost by separating communication flows with different characteristics using OpenFlow, a key technology for new-generation networks.

ACKNOWLEDGMENT

This research is partly supported by the National Institute of Information and Communications Technology, Japan.

REFERENCES

Kangasharju, J., Roberts, J. & Ross, K.W., 2002. Object replication strategies in content distribution networks. Computer Communications, 25(4), 376-383.
McKeown, N. et al., 2008. OpenFlow: Enabling Innovation in Campus Networks. ACM SIGCOMM Computer Communication Review, 38(2), 69-74.
Pathan, A.K. & Buyya, R., 2007. A Taxonomy and Survey of Content Delivery Networks.
Shizuno, T. et al., 2009. Comparison of Data-Searching Algorithms for a Real-Time Information-Delivery System. Proceedings of the 2009 First Asian Conference on Intelligent Information and Database Systems, 430-435.
Wunsch-Vincent, S. & Vickery, G., 2007. Participative Web and User-Created Content: Web 2.0, Wikis and Social Networking. OECD Publishing.



MULTIAGENT WORKGROUP COMPUTING

Ben Choi
Computer Science, Louisiana Tech University, USA

ABSTRACT

The future of computing is moving from personal computers to communities of self-organizing intelligent agents. Although most computers today are networked and can communicate with each other, they cannot yet fully work together and help each other. This paper describes a framework for networked computers to work in groups in which computers help each other perform various tasks. In this framework, a computer acts autonomously, like a person in a community. Computers with various abilities and workloads join together to form workgroups and benefit from belonging to communities. Future personal computers will no longer work alone for one person but will work with a large number of other computers helping other people. Any person using a computer will have access not just to the computing power of his or her own PC but to the vast computing power of a community of computers. The framework combines many key technologies, including intelligent agents, multi-agent systems, object spaces, and parallel and distributed computing, into a new computing platform, which has been successfully implemented and tested.

KEYWORDS

Intelligent agents, cloud computing, multi-agent system, parallel distributed processing, social network.

1. INTRODUCTION

Nowadays most computers are networked and can communicate with each other. Communication is a key requirement for collaboration, and networked computers have provided a much faster and more effective medium for people to communicate and collaborate. The next stage is to create a platform for the computers themselves to collaborate with each other. Current research on parallel and distributed computing and grid computing attempts to employ very large numbers of computers to solve very large computing problems. This research focuses solely on computing speed: a very large problem is partitioned into small pieces, each piece is sent to a computer, and the results are collected. This centrally controlled method of computing simply ignores the problem of collaboration between computers. On the other hand, current research on distributed file sharing based on peer-to-peer networks attempts to allow every person to share files and storage space through a decentralized network. This facilitates the sharing of storage space but ignores the need to share computing power.
Our projects attempt to create a platform for computers themselves to collaborate with each other to share computing power. In this platform, computers can help each other both by running applications and by providing computing power. If a person needs to complete tasks that his own personal computer is not capable of, his computer will ask other computers for help: it makes requests to helping computers, which perform the required computations and return the results. If a person working on a job needs more computing power, her computer will ask other idle computers for help. Any person using a computer will have access not just to the computing power of his or her own computer but to the vast computing power of a community of computers.
Our projects combine many key technologies, including parallel and distributed computing, intelligent agents, multi-agent systems, object spaces, and multicast protocols, to form a unified computing platform. The platform should require minimal user involvement and system administration. To achieve this, our projects extend the notions of intelligent agents (Plekhanova 2002) and multi-agent systems (Shamma 2008, Dignum 2009) to conceive of a computer as a whole, including its software and hardware, as an active agent. A computer acts autonomously, like a person in a community. Computers with various abilities and workloads join together to form workgroups in which they help each other both in terms of abilities and workloads. This in turn requires a shared place for the computers to communicate with each other. To achieve this, our projects extend the concept of Object Space into an Active Space, which can function as a rendezvous, a repository, a cache, a responder, a notifier, and a manager of its own resources. This further requires a computer to be able to broadcast its requests to some or all computers in the workgroup, which our projects achieve using multicast network protocols.
The remainder of this paper is organized as follows. Section 2 outlines related research. Section 3 defines the framework of Multiagent Workgroup Computing. Based on the proposed framework, Section 4 describes an implementation of a platform for general computing, while Section 5 describes another implementation of a platform for high performance. Section 6 gives the conclusion and outlines future research.

2. RELATED RESEARCH

Although most computers today are networked and can communicate with each other, they cannot yet fully work together and help each other. The ability of a personal computer depends on its installed software and the processing power of its CPU; if a person needs new applications or more computing power, he or she must buy new software or a new computer. Current research on collaboration focuses on allowing people to work together. For instance, Microsoft NetMeeting provides a complete Internet conferencing solution. This research does not address collaboration between the computers themselves.
Current research on parallel and distributed computing and grid computing attempts to employ very large numbers of computers to solve very large computing problems (Berman 2003, Foster 2003, Joseph and Fellenstein 2004). For instance, Folding@home (Pande 2008) uses a very large number of personal computers and PlayStations to tackle previously intractable problems in computational biology, and SETI@home (Anderson et al 2002) uses millions of personal computers worldwide (volunteer computing (Miller et al 2009)) to search for extraterrestrial intelligence. This research focuses solely on computing speed and solving very large problems. Current research on cloud computing (Rittinghouse and Ransome 2009, Velte et al 2009) focuses on delivering Web services. On the other hand, current research on distributed file sharing based on peer-to-peer networks (Subramanian and Goodman 2005, Androutsellis-Theotokis and Spinellis 2004, Steinmetz and Wehrle 2005) attempts to allow every person to share files and storage space through a decentralized network, but ignores the need to share computing power.
Our computing platform uses a shared space for intelligent agents to communicate with each other. This shared space is built upon an extended notion of Object Space (Freeman et al 1999). An Object Space is a shared medium that acts as a rendezvous where agents meet, either to serve or to be served, without knowledge of each other's identity, location, or specialization. Other variations of Object Space are JavaSpace (Freeman et al 1999), IBM's TSpaces (Lehman 2010), TONIC (2008), JINI (2010), and TupleSpace (Carriero 2001). Object Space has also been used for other applications. One proposed application (Engelhardtsen and Gagnes 2002) uses an Object Space as a repository of various roles, where agents adapt to changing demands placed on the system by dynamically requesting their behavior from the space. A framework for cluster computing using JavaSpace is described in (Batheja and Parashar 2001); it uses a network management module to monitor the state of the agents and uses the state information to schedule tasks to the agents. JavaSpace has also been used for scientific computation (Noble and Zlateva 2001). These works support the view that JavaSpace can be used for high-performance computing.

76

IADIS International Conference on Internet Technologies & Society 2010

Figure 1. Computers joining to form two workgroups. (A ring depicts a workgroup manager and a circle depicts a computer.)

3. DEFINING MULTIAGENT WORKGROUP COMPUTING

In this section, the framework for Multiagent Workgroup Computing is defined. The framework is formulated so that any computer on the Internet can join workgroups. A workgroup can also link to other workgroups, forming a whole community of computers whose organization can mimic that of a human community. A computer can belong to several workgroups and benefit from all of them. Figure 1 depicts a simple organization of twelve computers forming two workgroups. At the center of each workgroup is the workgroup manager, depicted by a ring: a computer that provides and maintains a shared space for communication. Each computer is depicted by a circle. Two of the computers join both workgroups, and one computer joins a further workgroup not shown in the figure. The two workgroups are linked together, and one of them is linked to another workgroup not shown in the figure. A computer is free to join or leave any workgroup at any time; it is a free community, and the computing community evolves over time like a human community.
The ability of a workgroup is more than the sum of the abilities of its members. The function of a workgroup is similar to that of a discussion group. A computer that needs a task done posts the request on the workgroup; another member reads the request, finishes the task, and posts the result on the workgroup. In this analogy, the workgroup serves two purposes: (1) it is a shared place for communication, and (2) it is a repository of shared knowledge. Also, just as a discussion group can be hosted on a server, a computer can be used as a workgroup manager, helping to maintain the shared place for communication and to organize the shared knowledge base.
The workgroup provides a shared knowledge base for the computers. This function is also similar to that of a discussion group: if we have a question, we may find the answer in the prior postings of our discussion group, in which case we do not need to ask anyone for help. Similarly, if a computer needs a task done and can find the result in the shared knowledge base, there is no need to repeat the computation.
The workgroup also provides a mechanism for parallel and distributed computing. If a computer needs two tasks done, it can post both on a workgroup, and two other computers can work concurrently on the two independent tasks. This extends to multiple tasks and multiple computers working in parallel.
Multiagent Workgroup Computing thus provides a general framework for multiple computers to work together in groups, sharing computing power and a knowledge base. To show its capabilities, the framework has been implemented and tested in two research projects, which are described in more detail in the following sections.
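The post-request/post-result cycle and the knowledge-base shortcut described above can be sketched as a tiny shared space. Class and method names are our own; a real implementation rests on JavaSpace/JINI as described later.

```python
import queue
import threading

class Space:
    """Minimal workgroup space: a request board plus a result cache that
    doubles as the shared knowledge base."""
    def __init__(self):
        self.requests = queue.Queue()
        self.results = {}                       # task -> cached result
        self.cond = threading.Condition()

    def post(self, task):
        with self.cond:
            if task in self.results:            # knowledge-base hit:
                return                          # no need to recompute
        self.requests.put(task)

    def take(self, timeout=5):
        return self.requests.get(timeout=timeout)

    def publish(self, task, result):
        with self.cond:
            self.results[task] = result
            self.cond.notify_all()

    def wait_result(self, task, timeout=5):
        with self.cond:
            self.cond.wait_for(lambda: task in self.results, timeout=timeout)
            return self.results[task]

def member(space):
    task = space.take()                         # a member reads the request...
    space.publish(task, task ** 2)              # ...finishes it, posts the result

space = Space()
space.post(7)                                   # a computer posts a task
worker = threading.Thread(target=member, args=(space,))
worker.start()
answer = space.wait_result(7)                   # the requester picks up the result
worker.join()
space.post(7)                                   # same task again: served from cache
```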



Figure 2. Agents joining space to form workgroups.

4. IMPLEMENTING A PLATFORM FOR GENERAL COMPUTING

Based on the framework for Multiagent Workgroup Computing, a computing platform for general computing has been developed, implemented, and tested. The implementation uses several key technologies, including multi-agents, JavaSpace, code mobility, caching, and multicast network protocols (Williamson 1999, Wen 2001). Figure 2 shows an overview of various agents joining Spaces to form workgroups. In general, a workgroup may consist of a large number of agents, and the agents may join several Spaces (only five agents and two Spaces are shown in the figure). Agents and Spaces are implemented using the JINI and JavaSpace APIs developed by Sun Microsystems (JINI 2010).
A computer can run many agents and assume many roles. The platform defines three types of agents (shown in Figure 2 (Bingi 2010)). A computer requests help from other computers through Requesting Agents, and serves other computers through Special Function Agents and General Function Agents. A Special Function Agent can only perform a specific predefined task; a General Function Agent can perform any task specified by a Requesting Agent.
Code mobility is used in this project to enable the platform for general computing. When a computer needs more processing power, it can send both the program code and the data, through a Requesting Agent, to a Space, where the request is stored. Another computer running a General Function Agent monitors the Space and retrieves the pending request, consisting of both the program code and the data. It uses the program code to process the data, generates the required results, and sends them back to the Space, where they are stored. The requesting computer, through its Requesting Agent, then retrieves the results from the Space.
In general, a large number of requests can be sent to a Space, and a large number of computers can work on the requests concurrently, creating a general-purpose, parallel and distributed computing platform.
A Special Function Agent, unlike a General Function Agent, can only perform a specific predefined task. In this case, the Requesting Agent does not need to send the program code, only the name of the specific function and the data. A Special Function Agent retrieves and processes the requests that match its specialty. This processing method is similar to remote execution, except that all communication goes through the Space: a Requesting Agent does not need to know the destination IP address of a Special Function Agent.
A Space is used in this project not only as a shared place for communication but also as a repository of a knowledge base. A server computer can run the service of a Space; unlike other servers, however, this server does not process requests. Its main purpose is to serve as a shared place for agents to meet. In this project, we use a multicast network protocol for an agent to discover a Space to join, so an agent does not need to know the IP address of a Space. Through the multicast protocol, the agent broadcasts its wish to join a Space; a Space responds to the request and then establishes direct communication with the agent. An agent communicates with a Space by placing requests on the Space and retrieving results from it.
Both requests and results are cached on the Space, which then serves as a repository of the knowledge base. The program code of a request is cached on the Space, so that a Requesting Agent does not need to resend the same program code for use with different sets of data. Computed results are also cached, so that when another Requesting Agent makes the same request, it simply retrieves the results from the Space.
This computing platform has been implemented in our lab using several computers, each with a network card that supports the multicast protocol. Many test cases have been executed successfully to verify the various functionalities of the platform (Bingi 2010). One test case, for example, tests the fault tolerance of the platform: a General Agent died (perhaps due to a computer malfunction) before completing a task, and another General Agent was able to pick up and finish the task.
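The code-mobility idea, shipping both the program code and the data in one request, can be sketched as follows. Function names and the request format are assumptions for illustration, and executing received source text as below is for demonstration only; a production platform would sign and sandbox mobile code.

```python
# A Requesting Agent packages program code (as source text) and data into one
# request; a General Function Agent loads the code and applies it to the data.

def requesting_agent():
    code = "def run(data):\n    return sum(data)"
    data = [1, 2, 3, 4]
    return {"code": code, "data": data}        # the entry written to the Space

def general_function_agent(request):
    namespace = {}
    exec(request["code"], namespace)           # load the shipped program code
    return namespace["run"](request["data"])   # apply it to the shipped data

result = general_function_agent(requesting_agent())
```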

5. IMPLEMENTING A PLATFORM FOR HIGH PERFORMANCE Based on the framework for Multiagent Workgroup Computing, a computing platform for high performance computation has also been developed, implemented, and tested (Choi and Dhawan 2003, 2004). Although the framework is for general purpose, parallel and distributed computing, a search engine application that serves millions of users was chosen as a test case for implementing and testing the platform. Figure 2 shows the agent and space architecture designed for search engine (Choi 2001, 2006). Without going into too much detail, the search engine architecture consists of agents to handle requests, a space for searching, agents to handle search words, a space for searching words, and agents to retrieve search results. High performance of the architecture is achieved by simply adding more agents, spaces, or networks. Another feature of the architecture for high performance is the result of using space as cache. When an agent needs to perform certain task and finds that the result of the task is already stored in the space, there is no need to repeat the computations. The agent simply reads the results from the cache. This not only reduces repeated computations when several agents need the same result but also reduces the response time, which is practically beneficial for search engine applications. The architecture is highly scalable. Any number of agents, spaces, or networks can be added. Adding an Agent is as simple as connecting the agent to a network. The agent will then discover a space in the network and become part of the workgroup. It is a plug and play process. No manual configuration is needed. Similarly, adding a space is simply connecting the space to the network and the space will broadcast its present through the multicast network. Adding a network is as easy as connecting agents and spaces into the network. 
This is made possible by the fact that agents and spaces can be connected to multiple networks through multiple ports. High availability and fault tolerance are achieved through multiple agents, spaces, and networks. For instance, with multiple agents performing the same role, the failure of one agent only degrades performance and does not affect the overall functionality of the system; replacing an agent can be as simple as disconnecting it from the network and connecting another one. Similarly, with multiple spaces, the failure of one space again only degrades performance: an agent waiting for a request to be completed will discover that the space is not available and will then send the request to another space. Likewise, with multiple networks, the failure of one network degrades performance to a larger extent, but agents and spaces can continue to communicate through their ports that are connected to a live network.
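The space-as-cache behaviour described above can be sketched in a few lines. The dict-backed `Space` below stands in for a real tuple space; the request key, the `read`/`write` interface, and the search function are illustrative assumptions.

```python
# Sketch of the cache logic: an agent looks a request up in the space first,
# and only on a miss does it compute and publish the result for later agents.

class Space:
    def __init__(self):
        self._entries = {}             # request key -> cached result

    def read(self, key):
        return self._entries.get(key)  # non-destructive read

    def write(self, key, value):
        self._entries[key] = value

def handle_request(space, query, compute):
    cached = space.read(query)
    if cached is not None:             # hit: no recomputation, fast response
        return cached
    result = compute(query)            # miss: do the work once...
    space.write(query, result)         # ...and cache it for other agents
    return result

space = Space()
calls = []
search = lambda q: (calls.append(q), f"results for {q}")[1]
first = handle_request(space, "distributed computing", search)
second = handle_request(space, "distributed computing", search)  # from cache
```

The second request is served entirely from the space: `search` runs only once, which is the source of both the reduced computation and the reduced response time claimed above.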


ISBN: 978-972-8939-31-1 © 2010 IADIS

Figure 3. Agent space architecture for search engines

6. CONCLUSION AND FUTURE RESEARCH

This paper describes a framework for networked computers to work in groups in which computers can help each other perform various tasks. A computer can run many agents and assume many roles. Agents join a workgroup through a shared place of communication called a Space, whose service can be run by a server computer. A Space is used in the framework not only as a shared place for communication but also as a repository of knowledge. Based on the framework, two computing platforms have been implemented and tested. One platform is designed for general computing and uses code mobility to allow a computer to specify a request by sending both the program code and the data. The other platform is designed for high performance, especially for search engine applications. Our experiences with these projects


indicate that the proposed framework of Multiagent Workgroup Computing has many advantages, including high performance, scalability, and availability, while requiring less human intervention and providing natural fault tolerance. The proposed framework is also applicable to extending the notion of cloud computing: it allows computation to be highly parallel and distributed over networked computers, and allows personal computers to join together to form large computing communities. The future of computing is moving from personal computers to communities of self-organizing intelligent agents.

REFERENCES

Anderson, David P.; Cobb, Jeff; Korpela, Eric; Lebofsky, Matt and Werthimer, Dan, "SETI@home: an experiment in public-resource computing," Communications of the ACM, Vol. 45, Issue 11, pp. 56-61, November 2002.
Androutsellis-Theotokis, Stephanos and Spinellis, Diomidis, "A survey of peer-to-peer content distribution technologies," ACM Computing Surveys, Vol. 36, Issue 4, pp. 335-371, December 2004.
Batheja and Parashar, "A Framework for Opportunistic Cluster Computing using JavaSpaces," http://www.caip.rutgers.edu/TASSL/Papers/jinihpc-hpcn01.pdf, 2001.
Berman, Fran; Fox, Geoffrey and Hey, Anthony J.G. (Eds.), Grid Computing: Making the Global Infrastructure a Reality, Wiley, ISBN-10: 0470853190, 2003.
Bingi, S.C., "Code Mobility with Cache in Distributed Systems using JavaSpaces," Louisiana Tech University practicum report (supervised by Ben Choi), March 2010.
Carriero, Nicholas and Gelernter, David, "A Computational Model of Everything," Communications of the ACM, Vol. 44, No. 11, pp. 77-81, 2001.
Choi, Ben, "Invention: Method and Apparatus for Individualizing and Updating a Directory of Computer Files," United States Patent #7,134,082, issued November 7, 2006.
Choi, Ben, "Making Sense of Search Results by Automatic Web-page Classifications," WebNet 2001 -- World Conference on the WWW and Internet, pp. 184-186, 2001.
Choi, Ben and Dhawan, Rohit, "Agent Space Architecture for Search Engines," IEEE/WIC/ACM International Conference on Intelligent Agent Technology, pp. 521-525, 2004.
Choi, Ben and Dhawan, Rohit, "Distributed Object Space Cluster Architecture for Search Engines," High Availability and Performance Computing Workshop, 2003.
Dignum, Virginia (Ed.), Handbook of Research on Multi-agent Systems: Semantics and Dynamics of Organizational Models, Information Science Reference, ISBN-10: 1605662569, 2009.
Engelhardtsen and Gagnes, "Using JavaSpaces to create adaptive distributed systems," http://www.nik.no/2002/Engelhardtsen.pdf, 2002.
Foster, Ian and Kesselman, Carl (Eds.), The Grid 2: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, ISBN-10: 1558609334, 2003.
Freeman, Eric; Hupfer, Susanne and Arnold, Ken, JavaSpaces: Principles, Patterns, and Practice, Addison-Wesley, Reading, Massachusetts, 1999.
JINI, "Jini Specifications and API Archive," http://java.sun.com/products/jini/, 2010.
Joseph, Joshy and Fellenstein, Craig, Grid Computing: On Demand Series, Prentice Hall PTR, ISBN 0131456601, 2004.
Kilduff, Martin and Tsai, Wenpin, Social Networks and Organizations, Sage Publications Ltd, ISBN-10: 0761969578, 2003.
Lehman, T. et al., IBM Almaden Research Center, http://www.almaden.ibm.com/cs/TSpaces/, 2010.
Miller, Frederic P.; Vandome, Agnes F. and McBrewster, John, Climateprediction.net: Personal computer, Parametrization (climate), Volunteer computing, Berkeley Open Infrastructure for Network Computing, University ... BOINC Credit System, FLOPS, Climate model, Alphascript Publishing, ISBN-10: 6130215304, 2009.
Noble, M.S. and Zlateva, S., "Scientific computation with JavaSpaces," Proceedings of the 9th International Conference on High Performance Computing and Networking, June 2001.
Pande, Vijay, "Folding@home distributed computing home page," Stanford University, http://folding.stanford.edu/, 2008.
Plekhanova, Valentina, Intelligent Agent Software Engineering, IGI Global, ISBN-10: 1591400465, 2002.
Rittinghouse, John and Ransome, James, Cloud Computing: Implementation, Management, and Security, CRC Press, ISBN-10: 1439806802, 2009.
Shamma, Jeff (Ed.), Cooperative Control of Distributed Multi-Agent Systems, Wiley-Interscience, 2008.


Steinmetz, Ralf and Wehrle, Klaus (Eds.), Peer-to-Peer Systems and Applications, Springer, ISBN-10: 354029192X, 2005.
Subramanian, Ramesh and Goodman, Brian D. (Eds.), Peer to Peer Computing: The Evolution of a Disruptive Technology, IGI Global, ISBN-10: 1591404304, 2005.
TONIC, "Scientific Computing with JAVA TupleSpaces," http://hea-www.harvard.edu/~mnoble/tonic/doc/, 2008.
Velte, Toby; Velte, Anthony and Elsenpeter, Robert, Cloud Computing: A Practical Approach, McGraw-Hill Osborne Media, ISBN-10: 0071626948, 2009.
Wen, Su; Griffioen, James and Calvert, Kenneth, "Building multicast services from unicast forwarding and ephemeral state," OPENARCH 01, March 2001.
Williamson, Beau, Developing IP Multicast Networks, Vol. 1, Cisco Press, 1999.


INVESTIGATION OF ELEMENTS FOR LEADERSHIP BY HYBRID INTELLIGENT SYSTEMS

Yuya Ushida*, Keiki Takadama* and Minjie Zhang**
*The University of Electro-Communications - Chofugaoka 1-5-1, Chofu, Tokyo 182-8585, Japan
**The University of Wollongong - Wollongong, NSW 2522, Australia

ABSTRACT

This paper explores the elements required of a leader, using hybrid artificial intelligence systems. To investigate such elements, we utilize the Adaptive Neuro Fuzzy Inference System (ANFIS) and genetic fuzzy systems. We created a training dataset for the hybrid intelligent systems based on our previous studies. After intensive analysis of how the hybrid systems learn the patterns of a leader's performance, we have revealed the following implications: (1) relying on others makes a positive difference to a leader's decision making, whereas (2) caring about who complains to whom affects the decision making negatively.

KEYWORDS

Adaptive Neuro Fuzzy Inference System, Genetic Fuzzy System, Leadership, Barnga, Hybrid intelligent systems

1. INTRODUCTION

Hybrid intelligent systems are software systems that combine methods and techniques from artificial intelligence. They have become increasingly important over the last decades, outperforming conventional computational intelligence techniques such as neural networks and evolutionary computation in terms of the capability to learn patterns and to solve computationally expensive problems. Such techniques have been successfully applied to a wide range of domains such as robotics (Ivan & Manuel 2008) and stock price prediction (Abraham et al. 2001) (Kasabov & Qun 1999). However, previous research with hybrid intelligent systems has not targeted social science, focusing mostly on engineering problems. Leadership, for example, can be regarded as one of the main subjects in social science, and an enormous number of studies have been conducted on it (Aamodt 1999) (Reicher et al. 2007) (Conger & Kanungo 1998). Despite the fact that leadership has been a popular topic, the methodologies are basically psychology- or sociology-based rather than computational. Considering that hybrid intelligent systems have been successfully applied to a variety of subjects but have not yet been considered as an approach in social science, especially to leadership, it is worth investigating leadership with hybrid intelligent systems. Against this background, this paper investigates elements required for leadership using hybrid intelligent systems. To investigate such elements, we utilize the hybrid intelligent systems called Adaptive Neuro-Fuzzy Inference System (Jang 1993) and Genetic Fuzzy Systems (Cordón 2001). More concretely, we let the hybrid systems learn a correlation between inputs and outputs from training data, which is created based on the result of an agent-based simulation (Axelrod 1997) of the Barnga game (Thiagarajan & Steinwachs 1990) in our previous study (Ushida, Hattori & Takadama 2010).
The training data includes the results of a leader's actions as inputs and the coefficients of parameters as outputs; the parameters are the basis of the leader's decision making. By letting the systems learn the correlation between them, we reveal which parameter has what effect on the leader's decision making. This paper is organized as follows: Section 2 explains hybrid intelligent systems, focusing on the adaptive neuro fuzzy inference system and the genetic fuzzy system. The next section introduces several important ideas in our current work, explaining Barnga, which is the simulation environment for measuring a leader's performance, the collective adaptive situation, and a leader's decision making process in


the Barnga game. Section 4 presents the experiments, their results, and a discussion, and Section 5 gives final remarks.

2. HYBRID ARTIFICIAL INTELLIGENCE SYSTEMS

This section provides background on hybrid artificial intelligence systems. Two hybrid systems are used in this paper: ANFIS (adaptive neuro fuzzy inference system) and the genetic fuzzy system.

2.1 ANFIS (Adaptive Neuro Fuzzy Inference System)

The adaptive neuro fuzzy inference system (Jang 1993) is a combination of a neural network and fuzzy logic, which enables us to predict/classify training data in a variety of application domains. The overall structure of ANFIS is illustrated in Figure 1. ANFIS uses a hybrid learning algorithm that combines back-propagation gradient descent and the least squares method to create a fuzzy inference system whose membership functions are iteratively adjusted according to a given set of input and output data. ANFIS employs the Sugeno fuzzy model (Takagi & Sugeno 1985) for generating fuzzy rules from a given input-output data set. A typical rule can be written in the following form:

IF $x_1$ is $A_1$ AND $x_2$ is $A_2$ AND ... AND $x_m$ is $A_m$ THEN $y = k_0 + k_1 x_1 + k_2 x_2 + ... + k_m x_m$,

where $x_1, x_2, ..., x_m$ are input variables, $A_1, A_2, ..., A_m$ are fuzzy sets, and $y$ is a linear (first-order polynomial) function of the input variables. Jang's ANFIS is normally represented by a six-layer feed-forward neural network.

Layer 1 is the input layer: $y_i^{(1)} = x_i^{(1)}$, where $x_i^{(1)}$ and $y_i^{(1)}$ are the input and output of neuron $i$ in Layer 1, respectively.

Layer 2 is the layer where fuzzification is performed. ANFIS employs a bell activation function written as:

$y_i^{(2)} = \dfrac{1}{1 + \left( \dfrac{x_i^{(2)} - a_i}{c_i} \right)^{2 b_i}}$,

where $a_i$, $b_i$ and $c_i$ are parameters that control, respectively, the center, width and slope of the bell activation function of neuron $i$.

Layer 3 is the rule layer. Each neuron in this layer corresponds to a single Sugeno-type fuzzy rule and its output is calculated by evaluating the product operator:

$y_i^{(3)} = \prod_{j=1}^{n} x_{ji}^{(3)}$,

where $n$ is the number of rules, and $x_{ji}^{(3)}$ and $y_i^{(3)}$ are the inputs and the output of neuron $i$ in Layer 3. Note that in this layer, the firing strength of each rule is calculated.

Layer 4 is the normalization layer, which evaluates the normalized firing strength of each rule. The computation is performed as follows:

$y_i^{(4)} = \dfrac{x_i^{(4)}}{\sum_{j=1}^{n} x_{ji}^{(4)}}$,

Layer 5 is the defuzzification layer. Each of its neurons is connected to the output of neuron $i$ in Layer 4 and to the initial inputs $x_1, x_2, ..., x_m$. A defuzzification neuron calculates the weighted consequent value of a given rule, as:

$y_i^{(5)} = x_i^{(5)} \left( k_0 + k_1 x_1 + k_2 x_2 + ... + k_m x_m \right)$,

where $k_0, k_1, k_2, ..., k_m$ are the consequent parameters of rule $i$. Layer 6 is the summation layer, where each neuron calculates the sum of the outputs of all defuzzification neurons and produces the overall output $y$:

$y = y^{(6)} = \sum_{i=1}^{n} x_i^{(6)}$.

ANFIS combines least mean square method (to optimize consequent parameters) and back-propagation (to adjust the premise parameters). There are two steps in the learning process: In the first step, training data is brought to the inputs, the premise parameters are assumed to be fixed and the optimal consequent parameters are estimated by an iterative least mean square procedure. In the second step, the patterns are propagated again, but this time the consequent parameters are assumed to be fixed and back-propagation is used to modify the premise parameters.
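As a concrete illustration, the forward pass through Layers 2 to 6 can be sketched in a few lines of Python. The two rules and their parameter values below are invented purely for illustration; a trained ANFIS would obtain them via the two-step least-squares/back-propagation procedure just described.

```python
# Minimal forward pass of a two-input, first-order Sugeno fuzzy system,
# following the six-layer ANFIS structure described in the text.

def bell(x, a, b, c):
    # Layer 2: generalized bell membership with center a, exponent b, width c
    return 1.0 / (1.0 + abs((x - a) / c) ** (2 * b))

def anfis_forward(x1, x2, rules):
    # Layer 3: firing strength of each rule (product of memberships)
    w = [bell(x1, *r["A1"]) * bell(x2, *r["A2"]) for r in rules]
    total = sum(w)
    # Layer 4: normalized firing strengths
    wn = [wi / total for wi in w]
    # Layers 5-6: weighted first-order consequents, then summation
    return sum(wni * (r["k"][0] + r["k"][1] * x1 + r["k"][2] * x2)
               for wni, r in zip(wn, rules))

# Hypothetical rule base: membership parameters (a, b, c) and consequents k
rules = [
    {"A1": (0.0, 2.0, 1.0), "A2": (0.0, 2.0, 1.0), "k": (1.0, 0.5, 0.5)},
    {"A1": (2.0, 2.0, 1.0), "A2": (2.0, 2.0, 1.0), "k": (3.0, -0.5, 0.2)},
]
y = anfis_forward(1.0, 1.0, rules)  # both rules fire equally at (1, 1)
```

At the input (1, 1), both rules have firing strength 0.25, so the normalized weights are 0.5 each and the output is the average of the two rule consequents, 2.0 and 2.7, i.e. 2.35.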

2.2 GFS (Genetic Fuzzy System)

The Genetic Fuzzy System (GFS) is a combination of a genetic algorithm and fuzzy logic and has been widely and successfully applied to control, classification, and modeling problems (Klir & Yuan 1995) (Pedrycz & Gomide 1998). GFS differs from a normal genetic algorithm in that chromosomes are written in IF-THEN rule form, which enables us to interpret a solution after running GFS; such a solution is an optimized set of fuzzy rules. Figure 2 illustrates the overview of the genetic fuzzy system, which works as follows:
1. Initialization: A sequence of bits is randomly created to generate a new chromosome, and this process is repeated N times, where N is the number of chromosomes in the population.
2. Evaluation: All the chromosomes are evaluated according to the fitness function.
3. Selection: Two chromosomes are picked from the population stochastically, such that a higher-fitness chromosome has a greater chance of being selected.
4. Crossover: The selected chromosomes are crossed over with each other to generate new ones.
5. Mutation: One of the bits in the chromosomes is inverted stochastically.
6. Replacement: The new chromosomes replace the ones with the lowest fitness in the population. Steps 2-6 are then repeated for the number of iterations/generations specified in advance.
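The six steps above can be sketched as a toy genetic algorithm loop. Here the "chromosome" is a plain bit string and the fitness is simply the number of 1-bits (a one-max stand-in), since the real fitness of a fuzzy rule base is problem-specific; all numeric parameters are illustrative.

```python
import random

def run_ga(n_bits=16, pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    # 1. Initialization: random bit strings
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ch: sum(ch)  # 2. Evaluation (one-max as a stand-in)
    for _ in range(generations):
        # 3. Selection: fitness-proportionate choice of two parents
        parents = rng.choices(pop, weights=[fitness(c) + 1 for c in pop], k=2)
        # 4. Crossover: one-point crossover of the selected parents
        cut = rng.randrange(1, n_bits)
        child = parents[0][:cut] + parents[1][cut:]
        # 5. Mutation: flip one bit stochastically
        if rng.random() < 0.2:
            i = rng.randrange(n_bits)
            child[i] ^= 1
        # 6. Replacement: child replaces the worst chromosome if it is better
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(fitness(c) for c in pop)

best = run_ga()
```

In a GFS, each bit string would decode to an IF-THEN fuzzy rule base and the fitness would score that rule base on the task, but the loop structure is the same.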

3. BARNGA AND COLLECTIVE ADAPTATION

3.1 Barnga Game

Barnga (Thiagarajan & Steinwachs 1990) is a playing card game in which players experience cross-cultural situations. The whole sequence of a Barnga game is as follows: (1) a certain number of tables are prepared; (2) the players are divided among the tables; (3) cards are distributed to the players and each of them plays one card in turn; (4) a winner is decided among the players at a table. The winner is the one who plays the strongest card, as determined by the game rules. There are several features that should be noted: (i) the game rules are set differently at each table; (ii) the players do not know about these differences among the tables, which represents the cross-cultural situation; (iii) after several games, some players are swapped between tables; (iv) the players then meet other players who nominate different winners, which leads to a conflict over the winner. To resolve such a conflict, a leader agent is chosen among the agents at a table, as proposed in (Ushida, Hattori & Takadama 2010). The leader decides who should become the real winner among the nominated ones. Barnga is an example of a highly dynamic environment, since there is no fixed set of negotiation strategies. Under these circumstances, the players have to play the games smoothly to adapt to the situations, i.e., they need to play games without any conflicts. Here, we define a smooth game as one played without any conflicts.

85

ISBN: 978-972-8939-31-1 © 2010 IADIS

3.2 Collective Adaptation

Our previous study (Ushida, Hattori & Takadama 2010) explored an agent design for the Barnga game and found that roles such as the leader are important in this highly dynamic environment. We also found that there are several adaptive situations in the Barnga game, which become a measurement of how the games were played. We employ this measurement to evaluate all the situations in the game by counting the number of situations obtained for each situation type; this count is denoted by N. The number can be regarded as the result of leadership in the games, since such situations derive from the leader's decision making. There are three adaptive situations: the collective adaptive situation, the non-collective adaptive situation, and the intermediate situation. In the collective adaptive situation, whose count is denoted by Nc, all players get to play roles such as the leader and to share the same rule, which leads to smooth games. The case where players do not share a rule and roles is named the non-collective adaptive situation, with count Nn. If an observed case is intermediate, in the sense that it can be regarded as both a collective and a non-collective adaptive situation, we name it the intermediate situation, with count Ni. Note that there are some constraints on N, as shown below. We use a notation that represents the number of all cases N as: N = { Nn , Ni , Nc }, where 0
Group A -> Group B). In this section, a group changing method is proposed and explained.

Figure 1. State diagram of mobile client groups.


A user in Group A changes to Group B when the user gets on a vehicle in the city area. When the user takes a subway or a train, the user changes to Group C-1 or Group D, respectively. A user at a rest area on a highway changes to Group C-2 when the user gets on a vehicle. Therefore, users in Group A can change to Groups B, C-1, C-2, and D. Users in Group B change to Group A when they get off the vehicle, and to Group C-2 when the vehicle enters a highway. Users in Group C-1 can change only to Group A, when they get off the subway. Users in Group C-2 change to Group A when they get off the vehicle, and to Group B when the vehicle enters the city area via an off-ramp. Users in Group D, like users in Group C-1, can change only to Group A, when they get off. Figure 1 shows the group changing states of mobile clients as described above.
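The transition rules above amount to a small state machine, which can be written as a lookup table. The event names below are illustrative labels, not terminology from the paper.

```python
# Group transitions from Section 2: for each group, the events (hypothetical
# names) that move a mobile client to another group.

TRANSITIONS = {
    "A":   {"board_city_vehicle": "B", "board_subway": "C-1",
            "board_highway_vehicle": "C-2", "board_train": "D"},
    "B":   {"get_off": "A", "enter_highway": "C-2"},
    "C-1": {"get_off": "A"},
    "C-2": {"get_off": "A", "exit_to_city": "B"},
    "D":   {"get_off": "A"},
}

def next_group(group, event):
    # Events not defined for a group leave the client where it is
    return TRANSITIONS[group].get(event, group)
```

For example, a pedestrian (Group A) who boards a subway moves to Group C-1, and a highway vehicle (Group C-2) taking an off-ramp into the city moves to Group B.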

3. PREFETCHING METHOD FOR HANDOVER

It is hard to guarantee stable QoS for streaming media services to fast-moving clients. At each cell boundary, the signal strength is weaker and the bandwidth lower than in the center of the cell. Therefore, mobile clients at the cell boundary are likely to have a low buffer level. If a handover is needed when the buffer level is low, the streaming media service may be stopped until the buffer level is sufficient to resume. The previous section proposed a method for detecting the moving direction and speed of clients in each group. In this section, a prefetching method based on that detection method is proposed to keep the streaming buffer in a stable state. Since bandwidth reduction and transmission gaps occur when a mobile client moves from the center of a cell to the cell boundary, the stream data for the period until the handover finishes is sent to the mobile client in advance to prevent a low buffer level. Using the proposed prediction method, the time until handover is estimated before the mobile client enters the cell boundary, which is the reduced-bandwidth area. Based on this time period, the amount of stream data to prefetch can be calculated. The streaming server sends the calculated prefetching stream data to the mobile client while the network bandwidth is stable. Equation 5 shows how to estimate the prefetching stream data size. Distance(m) can be calculated from the current speed and location of the mobile client and the radius of the previous base station (PBS) cell.

Equation 5.

After a handover is done, the mobile client is still at the cell boundary of the neighbor base station (NBS) and is therefore provided lower bandwidth than in the center of the cell. Since the mobile client requests media stream data from the server in this low-bandwidth area after the handover, the stream data might be delayed, causing a low buffer level. To solve this problem, the media streaming server sends stream data to the NBS before the handover to prevent a low buffer level after the handover. After the handover, the mobile client can then receive stream data immediately, reducing the delay of stream data retrieval. The amount of stream data can be calculated from the time period between the handover finish time and the time the client enters the NBS center area, as shown in Equation 6. Distance(m) can be calculated from the current speed and location of the mobile client and the radius of the NBS cell.

Equation 6.
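Since Equations 5 and 6 are not reproduced in this text, the sketch below only illustrates the stated idea behind both: the relevant time period is distance divided by speed, and the prefetch amount is that time multiplied by the stream bit-rate. Function name, units, and the worked numbers are assumptions for illustration.

```python
def prefetch_bytes(distance_m, speed_kmh, bitrate_kbps):
    """Stream data (bytes) to pre-send for the next `distance_m` metres."""
    if speed_kmh <= 0 or distance_m <= 0:
        return 0
    speed_ms = speed_kmh * 1000.0 / 3600.0         # km/h -> m/s
    seconds = distance_m / speed_ms                # time to cover the distance
    return int(seconds * bitrate_kbps * 1000 / 8)  # kbps -> bytes per second

# e.g. a client at 165 Km/h (KTX-like speed), 200 m from the handover point,
# watching a 200 Kbps stream: roughly 4.4 s of playback, ~109 KB to prefetch
size = prefetch_bytes(200, 165, 200)
```

Equation 5 would use the distance to the PBS cell boundary (handover point), and Equation 6 the distance from the handover point to the NBS cell center; the arithmetic shape is the same.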

When a mobile client does not perform a handover to the predicted cell, the prefetched stream data is no longer useful, and the mobile client cannot receive it, because the prefetched stream data is not located in the NBS chosen for the handover. This causes a low buffer level at the mobile client. However, the mobile client can be notified of the NBS before the handover in the network-initiated handover of Mobile WiMAX. Likewise, if FMIPv6 or HMIPv6 is used for handover, the mobile client knows the NBS. When the prefetching base station (BS) is not the NBS, the PBS can send a forwarding request to the prefetching BS. When the prefetching BS receives the forwarding request, it sends the prefetched stream data to the NBS. Since the NBS is near the prefetching BS and both are connected by the backbone network, forwarding the data


is completed quickly. Therefore, the prefetched stream data can be forwarded before the handover finishes, because the handover itself takes only a short time.

4. EXPERIMENTAL RESULTS AND ANALYSIS

We simulated the proposed methods with respect to the buffer state of a high-speed moving client in a streaming media service. The simulation program was developed in the C language on a Linux system. We assume that the client buffer holds 10 seconds of streaming data, the cell radius is 1 Km, the cell boundary starts at 0.8 of the cell radius, the prefetching starting point is at 0.7 of the cell radius, and the handover is performed at 0.9 of the cell radius. Table 2 shows the bandwidth variation according to the client's speed. The average speeds and minimum bandwidths of the experimental mobility classes are shown in Table 3. The bit-rates of the experimental media data are given in Table 4.

Table 2. Bandwidth variation according to client's speed

  Mobility class    | Speed          | Bandwidth
  High Mobility     | Over 120 Km/h  | Under 144 Kbps
  Full Mobility     | Under 120 Km/h | Over 384 Kbps
  General Mobility  | Under 60 Km/h  | Over 512 Kbps
  Limited Mobility  | Under 10 Km/h  | Over 2 Mbps

Table 3. Minimum bandwidth and average speed of each transportation

  Transportation                    | Group     | Average Speed | Bandwidth
  KTX                               | Group D   | 165 Km/h      | 144 Kbps
  SeMaUl Train                      | Group D   | 110 Km/h      | 384 Kbps
  Vehicles in Highway               | Group C-2 | 100 Km/h      | 384 Kbps
  MuGungHwa Train                   | Group D   | 70 Km/h       | 384 Kbps
  Vehicles in City (Limited Speed)  | Group B   | 60 Km/h       | 512 Kbps
  Subway                            | Group C-1 | 50 Km/h       | 512 Kbps
  Vehicles in City (Average Speed)  | Group B   | 30 Km/h       | 512 Kbps
  Pedestrian                        | Group A   | 4 Km/h        | 2048 Kbps
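The kind of buffer trace reported in the figures below can be reproduced with a toy model: playback drains the client buffer at the media bit-rate while the network fills it at the currently available bandwidth, which drops near the cell boundary. The cell geometry follows the text (1 Km radius, boundary from 0.8 R); the update rule and the assumption of an unbounded buffer are simplifications for illustration.

```python
def simulate(duration_s, bitrate_kbps, bw_center_kbps, bw_edge_kbps,
             speed_kmh, cell_radius_m=1000, boundary=0.8):
    """Per-second buffer level (KB) of a client crossing cells at fixed speed."""
    buffer_kb, pos, levels = 0.0, 0.0, []
    speed_ms = speed_kmh * 1000.0 / 3600.0
    for _ in range(duration_s):
        pos = (pos + speed_ms) % cell_radius_m       # position within the cell
        at_edge = pos > boundary * cell_radius_m     # reduced-bandwidth area
        bw = bw_edge_kbps if at_edge else bw_center_kbps
        buffer_kb += (bw - bitrate_kbps) / 8.0       # net fill per second, KB
        buffer_kb = max(buffer_kb, 0.0)              # 0 means playback stalls
        levels.append(buffer_kb)
    return levels

# KTX-like case: 165 Km/h, 200 Kbps stream, 384 Kbps in-cell / 144 Kbps at edge
levels = simulate(600, 200, 384, 144, 165)
```

The proposed method would additionally top the buffer up before each boundary crossing (the prefetching of Section 3), which is what keeps the traces in Figures 2-4 from touching zero.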

Figures 2, 3, and 4 show the buffer status of a client using the streaming media service in a KTX, a vehicle on a highway, a vehicle in an uncongested city area, a subway, and a vehicle in a congested city area, respectively. Figure 2 shows the buffer status of a client using 56 Kbps and 200 Kbps streaming media services in a KTX. The normal streaming media service at 56 Kbps stops at around 450 seconds, and the normal service at 200 Kbps stops several times. In contrast, the proposed method supports a seamless streaming media service at both 56 Kbps and 200 Kbps, without interruption or re-buffering.

Figure 2. Buffer status of mobile clients in KTX.


Figure 3. Buffer status of mobile clients in vehicle on highway (left) and city area (right).

Figure 4. Buffer status of mobile clients in subway (left) and vehicle on congested city area (right).

Figure 3 shows the buffer status of a client using 400 Kbps and 800 Kbps streaming media services in a vehicle on a highway and in an uncongested city area, respectively. We can see that the proposed method provides a stable buffer status and seamless streaming media service. Figure 4 shows the buffer status of a client using 800 Kbps and 1.5 Mbps streaming media services in a subway and in a vehicle in a congested city area, respectively. Again, the proposed method provides a stable buffer status and seamless streaming. When a pedestrian uses the streaming media service, the client's buffer status is stable in every case, because the client is provided a stable 2 Mbps bandwidth.

5. CONCLUSION

As a result of improvements in mobile wireless internet technologies, high-speed moving users are able to access streaming media services using PDAs, laptops, mobile phones, and so on. However, mobile wireless networks have variable bandwidth depending on the speed and location of clients, which makes it hard to support stable Quality of Service (QoS) streams for high-speed mobile clients. In this paper, a client-mobility-based media stream prefetching method was proposed to guarantee stable QoS in high-speed mobile internet environments such as Mobile WiMAX. In the proposed method, high-speed moving clients join a group depending on their characteristics and are provided streaming media service according to the joined group. Mobile clients can change their group depending on their situation. In each group, the clients' direction and speed are predicted. Based on the client grouping and the prediction of direction and speed, media streams are prefetched to counter the disconnection and latency caused by handover. The experimental results confirm that the proposed method provides a stable buffer status and seamless streaming media service in each group with various media bit-rates.


ACKNOWLEDGEMENT

This research was supported by the IT R&D program of MKE/KEIT. [KI002119, Development of New Virtual Machine Specification and Technology]

REFERENCES

Sitaram, D. and Dan, A., 2000. Multimedia Servers: Applications, Environments, and Design. Morgan Kaufmann Publishers, San Francisco, USA.
Seo, D. et al., 2006. Distribution Strategies in Cluster-based Transcoding Servers for Mobile Clients. In Lecture Notes in Computer Science, Vol. 3983, pp. 1156-1165.
Seo, D. et al., 2007. Resource Consumption-Aware QoS in Cluster-based VOD Servers. In Journal of Systems Architecture: the EUROMICRO Journal, Vol. 53, Issue 1, pp. 39-52.
Paolini, M., 2009. Testing WiMAX Performance in the Clear Network in Portland. WiMAX Forum.
IEEE_16, 2005. IEEE Standard for Local and Metropolitan Area Networks Part 16. IEEE 802.16 Standard.
WiMAX, 2009. WiMAX Forum Network Architecture - Architecture Tenets, Reference Model and Reference Points Base Specification. DRAFT-T32-001-R015v01-O.
Bose, S. and Kannan, A., 2007. Adaptive Multipath Multimedia Streaming Architecture for Mobile Networks with Proactive Buffering Using Mobile Proxies. In Journal of Computing and Information Technology, Vol. 15, pp. 215-226.
ETRI, 2003. The HPi Handover Specification. ETRI.
TTA, 2005. Specifications for 2.3GHz Band Portable Internet Service Physical & Medium Access Control Layer. TTA Standard, TTAS.KO-06.0082.
WiMAX_3, 2009. WiMAX Forum Network Architecture - Stage 3 - Annex: R6/R8 Anchored Mobility Scenarios. WMFT33-0030R010v04.
IEEE_16e, 2005. Amendment for Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands. 802.16e/D9.
Soliman, H., 2004. Mobile IPv6: Mobility in a Wireless Internet. Addison-Wesley Professional.
Dimopoulou, L. et al., 2005. Fast Handover Support in a WLAN Environment: Challenges and Perspectives. In IEEE Network, Vol. 19, No. 3, pp. 14-20.
Lee, K. and Mun, Y., 2005. An Efficient Macro Mobility Scheme Supporting Fast Handover in Hierarchical Mobile IPv6. In Lecture Notes in Computer Science, Vol. 3480, pp. 408-417.
Bergh, A. and Ventura, N., 2006. PA-FMIP: A Mobility Prediction Assisted Fast Handover Protocol. Proceedings of the IEEE Military Communications Conference, pp. 1-7.
Yavas, G. et al., 2005. A Data Mining Approach for Location Prediction in Mobile Environments. In Data and Knowledge Engineering, Vol. 54, No. 2, pp. 121-146.
Nanopoulos, A. et al., 2005. A Data Mining Algorithm for Generalized Web Prefetching. In IEEE Transactions on Knowledge and Data Engineering, Vol. 15, No. 5, pp. 1155-1169.
Wu, S. et al., 2009. Headlight Prefetching and Dynamic Chaining for Cooperative Media Streaming in Mobile Environments. In IEEE Transactions on Mobile Computing, Vol. 8, No. 2, pp. 173-187.
Etoh, M. and Yoshimura, T., 2005. Wireless Video Applications in 3G and Beyond. In IEEE Wireless Communications, Vol. 12, No. 4, pp. 66-72.
Fitzek, F. and Reisslein, M., 2001. A Prefetching Protocol for Continuous Media Streaming in Wireless Environments. In IEEE Journal on Selected Areas in Communications, Vol. 19, No. 10, pp. 2015-2028.
Li, B. and Wang, K., 2003. NonStop: Continuous Multimedia Streaming in Wireless Ad Hoc Networks with Node Mobility. In IEEE Journal on Selected Areas in Communications, Vol. 21, No. 10, pp. 1627-1641.
KTDB, 2010. Korea Transport Database. http://www.ktdb.go.kr.
Korail, 2010. Korea Railroad Information. http://info.korail.com/2007/kra/inf/inf01000/w_inf01100.jsp.
EX, 2010. Transport Information page at Korea Expressway Corporation. http://www.ex.co.kr/portal/roa/tst/tst1/roa_tst01.jsp.


THE RATIONALES BEHIND FREE AND PROPRIETARY SOFTWARE SELECTION IN ORGANISATIONS

Damien J. Sticklen and Theodora Issa
Curtin University - School of Management, Bentley, Western Australia

ABSTRACT

The aim of this paper is to critically examine the important assumptions behind the software-selection function in organisations. Software is incorporated in many situations within enterprises due to its unique ability to efficiently and effectively augment business functions and processes. Proprietary software, with its inherent advantages and disadvantages, remains dominant over "Free and Open-Source Software" (FOSS) in a large number of cases. However, the arrival of cloud computing almost certainly mandates a heterogeneous software environment. Open standards, upon which most FOSS is based, promote the free exchange of information, a founding requirement of the systems embedded in organisations. Despite evidence to the contrary, the fact that FOSS is also available at low financial cost, combined with the benefits implicit in facilitating inter-process communication, supports the view that it would be attractive to organisations. This paper approaches this paradoxical situation by examining the relevant literature in a broad number of disciplines. An important aspect examined is the role that management, and in particular the executive, plays in the software-selection function. It is on the basis of these findings that the rationales for using both proprietary software and FOSS are discussed in a multi-disciplinary context. Understanding the rationales behind the software-selection function may provide academics and practitioners with insight into what many would consider an ICT-centric problem. However, by abstracting to the management context, as opposed to the technical context, the organisational issues surrounding both proprietary software and FOSS adoption are counter-intuitively brought to the forefront.

KEYWORDS

Ubiquitous Computing; Protocols and Standards; Cyber-law and Intellectual Property; Social and Organizational Aspects; Freedom of Expression.

1. INTRODUCTION

Open standards are often discussed in academic literature when referring to the potential improvement of communications between organisations, business functions and processes. However, in management, less emphasis is placed on one of the key requirements of this integration – software. The benefits of hardware improvements over past decades have been readily measurable; the organisational benefit of adopting software, however, has been largely intangible (Wiederhold 2009, p.9). Building on the foundations of open standards, but less frequently discussed, let alone adopted, is "Free and Open-Source Software", which takes the form of both system and application programs. Its counterpart, proprietary software, has enjoyed enterprise acceptance in both server and desktop environments for the past twenty to thirty years. While the literature tends to focus upon the advantages and disadvantages of proprietary software and "Free and Open-Source Software" (FOSS), less attention has been paid to the software-selection processes within organisations, which have led to the software-adoption landscape of today. The fact that significant research in this area has not been conducted as part of the management literature is anomalous, given that software is an important enabler of business processes (Guo & Zou 2008, p.333). Also difficult to reconcile are the following facts when viewed collectively rather than in isolation:
1. FOSS (GNU/Linux, for example) has some market penetration in server environments, but only moderate desktop use (Gray 2008, p.4);
2. FOSS is essentially free as a product (in both cost and user rights), while proprietary software-adoption typically invokes significant costs;
3. The quality of FOSS is beginning to meet, or in some cases has already met, that achieved by proprietary equivalents; and
4. Most businesses are interested in reducing expenditure while attempting to maintain the level of quality delivered by IT (what should be a key concern for enterprise executives, regardless of technical orientation).
The situation described above would not be so significant if it were not


ISBN: 978-972-8939-31-1 © 2010 IADIS

for the paradigm-shift which is expected by some technologists in the near future – ubiquitous computing. Ubiquitous computing, enabled by cloud-based computing technologies, promises to enable access to data and processing power in almost any populated area of the world (Hooft & Swan 2006). Furthermore, cloud-computing, which may be of significant benefit to enterprises, would likely make use of FOSS in addition to proprietary software. This is partly due to FOSS licenses, which are seen as compatible with the requirements of cloud-based computing. Another contributory factor is that some well-established FOSS, such as GNU/Linux, has a proven security track record, mitigating some of the risks posed by cloud-computing. However, the notion that FOSS is free of financial and litigation risk, which has been suggested by some of the on-line media, is based on false premises, as demonstrated by recent industry events related to its adoption (Foo 2009; Bray 2009). Cloud-computing implies at least some modicum of outsourcing, a practice which many managers and management academics associate with both benefit and risk to organisations. The immediate uncertainties surrounding FOSS, proprietary software, and cloud-based computing technologies support the view that further research will be of benefit to both academics and practitioners in their understanding of these emerging developments.

2. ORGANISATIONS AND THEIR RELATIONSHIP WITH IT

Executives are mandated to evaluate, direct the development of, and monitor the performance of organisational work-flow mechanisms, including functions, processes, and tasks, in order to achieve organisational goals (Standards Australia & Standards New Zealand 2010, p.7). Invariably, whether they are for-profit, non-profit, or governmental, contemporary organisations must rely on software to facilitate this management. For the purposes of this paper, a business function is: "A group of business-related processes that support the business. Functions can be decomposed into other sub-functions and eventually into processes that do specific tasks" (Whitten & Bentley 2007, p.51). The literature suggests that: "Business processes are the work, procedures, and rules required to complete the business tasks, independent of any information technology used to automate or support them" (Whitten & Bentley 2007, p.21). This definition implies that technologies are not embedded within business processes. In practice, however, technology, and in particular software, is inexorably embedded within business processes, and therefore also within business functions (about which the first definition is silent) (Pautasso et al. 2007). This is exemplified by the fact that large organisations often rely heavily on systems such as enterprise resource planning (ERP) to accurately manage resources (Samson & Daft 2009, p.76) and on web-enabled technologies to reach markets. Such is the ideal state of management and IT as disciplines. The responsibility for IT governance cannot reside with the technical implementers of systems, but with the executive. Software-selection for the purpose of augmenting business processes must be a consultative process between managers and IT experts (Standards Australia & Standards New Zealand 2010, p.9).
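The hierarchical decomposition described in the Whitten and Bentley definitions above (functions into sub-functions, sub-functions into processes, processes into tasks) can be sketched as a simple data structure. The sketch below is illustrative only; the function, process, and task names are hypothetical examples, not drawn from the cited text:

```python
# Illustrative sketch of a business function decomposed into processes
# and tasks, per the definitions quoted above. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Function:
    name: str
    sub_functions: list = field(default_factory=list)
    processes: list = field(default_factory=list)

    def all_processes(self):
        """Flatten the hierarchy into the processes it decomposes into."""
        result = list(self.processes)
        for sub in self.sub_functions:
            result.extend(sub.all_processes())
        return result

billing = Function(
    name="Billing",
    sub_functions=[
        Function(name="Invoicing",
                 processes=[Process("issue invoice",
                                    ["draft", "approve", "send"])]),
    ],
    processes=[Process("reconcile payments", ["match", "post"])],
)

print([p.name for p in billing.all_processes()])
```

Note that, consistent with the definition, the processes and tasks themselves say nothing about which technology automates them; software attaches to this structure rather than being part of it.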
Building upon the point made above, if cloud-computing becomes at first a source of global competitive advantage and then, in the long term, a mainstream and normal requirement of doing business, prior academic research suggests that Australian organisations are not culturally and strategically prepared to adopt the necessary software infrastructure (i.e. one incorporating FOSS) (Goode 2005, p.675). Given that IT is a fertile ground for frequent change, decision-making which is affected by or affects IT cannot be divorced from the executives' mandate, even by "insourcing" it to the IT department. This implies that if software-selection is to be effective, it should incorporate the heterogeneous skill-sets of IT, management, and other relevant and value-adding disciplines in a synergistic approach. The following section briefly outlines software-selection practice for FOSS in reality versus best-practice software-selection for all software types.

3. SOFTWARE-SELECTION

Hauge et al. (2009, p.42) state that the adoption of FOSS is widespread amongst companies, which have increasingly sought modular solutions to business problems. However, there is evidence to suggest that


IADIS International Conference on Internet Technologies & Society 2010

GNU/Linux, the primary platform for FOSS delivery, has an estimated market share of just over one percent of all computers globally (Net Applications 2010; StatCounter 2010), a fact that implies either that business computing is a small proportion of the overall computing market, or that proprietary operating systems are entrenched in both business and consumer environments. Interestingly, Hauge et al.'s (2009, p.43) research also suggests that the process of software-selection undertaken by developers in this context is not grounded in a generalised framework. Considering that the general purpose of software acquisition is to solve a problem related to business processes in a structured way, it would be reasonable to expect the process to be at least as systematic as that suggested by Whitten and Bentley (2007, p.30), an approach similar to that outlined in the IT Infrastructure Library (ITIL). One reason why this might not be the case is that the adoption of FOSS is expected by some to be fiscally low-risk (Raab 2007). Research also suggests that for FOSS, rather than identifying software components on the merit of what will solve the problem in the best available way, the first software component to solve the problem is integrated instead (Hauge et al. 2009, p.45). Although the stated limits of the research conducted by Hauge et al. (2009, p.46) include that the companies interviewed were only small and that FOSS components were not utilised for mission-critical roles, there is an additional limitation of the study – it was geographically clustered.
If these results are generalisable to corporations of larger sizes and in different locations, the software-selection process for FOSS undertaken by IT professionals would have to be considered problematic for two reasons: firstly, the adoption of FOSS is not universally accepted to be devoid of financial risk (Ruffin & Ebert 2004, p.85; Foo 2009); and secondly, the concept of using 'first fit' rather than 'best fit' does not appear to be congruent with the rigorous development, testing, and integration processes suggested by IT management frameworks such as ITIL. The Information Technology Infrastructure Library (ITIL) is a well-known standard for IT management throughout the world (Addy 2007, p.1). Version 3 of this framework provides a richer set of considerations for software management than was practised in the above case. This software development and selection process is described as 'Applications Management' and includes: Requirements; Design; Build; Deploy; Operate; and Optimise (Cannon & Wheeldon 2007, p.131). In light of the set of considerations offered by ITIL, it appears that in the above study the parameters used by these integrators of FOSS technologies were limited to requirements and deployment. Arguably, software components, regardless of their origin, need to be assessed on their individual strengths and weaknesses against a formalised framework, even if that framework is developed internally.
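The contrast between 'first fit' integration and a formalised 'best fit' evaluation can be sketched numerically. The fragment below is purely illustrative: the criteria, weights, candidate names, and scores are hypothetical assumptions, not drawn from ITIL, Hauge et al., or any other source cited here.

```python
# Hypothetical weighted-criteria comparison of candidate software
# components ("best fit"), as opposed to adopting the first candidate
# that merely satisfies the requirements ("first fit").

# Illustrative criteria and weights; an organisation would derive these
# from its own requirements analysis.
WEIGHTS = {
    "functional_fit": 0.35,
    "support_availability": 0.25,
    "licence_risk": 0.20,
    "integration_effort": 0.20,
}

# Illustrative candidate scores on a 0-10 scale (hypothetical data).
candidates = {
    "foss_component_a": {"functional_fit": 8, "support_availability": 5,
                         "licence_risk": 7, "integration_effort": 6},
    "proprietary_b":    {"functional_fit": 7, "support_availability": 9,
                         "licence_risk": 8, "integration_effort": 5},
}

def weighted_score(scores):
    """Aggregate a candidate's criterion scores using the agreed weights."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("best fit:", best)
```

In keeping with the consultative approach advocated earlier, the criteria and weights in such a framework would be negotiated between managers and IT experts rather than set by either group alone.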

4. OLD AND NEW ARCHITECTURES FOR SERVICE DELIVERY

In the general business context, a considerable amount of literature has been written with regard to the social and environmental results of organisations, in addition to the obligatory financial results. The rationale behind encouraging or pressuring organisations to go beyond financial results largely stems from perceived market failures and their resultant negative impacts on both private citizens and the ecology of the planet (Melville & Ross 2010, p.1). Organisations which produce and employ large amounts of ICT infrastructure are on notice concerning environmental issues such as electricity consumption and electronic waste (Jeurissen 2000, p.229). Increasingly, companies are becoming responsive to requests and demands to become environmentally conscientious, due to the financial benefits associated with compliance and the threat of litigation in its absence (Melville & Ross 2010, p.1; Jeurissen 2000, p.229). Previous architectures for mass computing required one operating system per server, which implied that an organisation's purchase of servers was driven not so much by computational power requirements as by the need to access multiple operating systems and applications (Baschab & Piot 2007, p.202). Recognising the inherent inefficiency of this arrangement, organisations have sought to consolidate servers and associated equipment. One way in which this consolidation has been achieved is through the introduction of virtualisation, which is a probable cornerstone of cloud-computing (Tata Consultancy Services 2010, p.6). Virtualisation is a means by which multiple operating systems and associated software may run on common hardware. The purported advantages of virtualisation include a decreased requirement to purchase additional hardware to increase scale and potential reductions in overall electricity consumption (Baschab & Piot 2007, p.202).
This reduction in electricity consumption assumes that efficiency gains in floor space are not used for further ICT installations. If one takes an optimistic view, it can be seen that virtualisation potentially offers lower financial and environmental costs than traditional architectures. The above discussion has focused on the environmental benefits of recent software developments but has been silent on the perceived social benefits and costs of software development methods and their use in organisations. Furthermore, by and large, no debate is presented in the above discussion because the facts presented are not considered contentious. However, the ideologies behind proprietary software and FOSS must be considered polar opposites (Schmidt 2004, p.679).
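The scale of the electricity saving from consolidation can be illustrated with a back-of-envelope calculation. All figures below (fleet sizes and average power draws) are hypothetical assumptions chosen for illustration, not measurements from Baschab and Piot or any other source cited here.

```python
# Back-of-envelope estimate of the electricity saving from server
# consolidation via virtualisation. All figures are hypothetical.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(n_servers, avg_draw_watts):
    """Annual energy use of a fleet of servers at a given average draw."""
    return n_servers * avg_draw_watts * HOURS_PER_YEAR / 1000.0

# Before: 20 lightly loaded physical servers at ~200 W each.
before = annual_kwh(20, 200)
# After: 4 virtualisation hosts at ~450 W each running the same workloads.
after = annual_kwh(4, 450)

saving = before - after
print(f"before: {before:.0f} kWh/yr, after: {after:.0f} kWh/yr")
print(f"saving: {saving:.0f} kWh/yr ({saving / before:.0%})")
```

Even with the heavier per-host draw assumed for the consolidated fleet, the aggregate consumption falls substantially, which is consistent with the qualitative claim above, provided the freed capacity is not redeployed for further ICT installations.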

5. INTRODUCTION OF SOFTWARE TYPES

5.1 Proprietary Software: The Traditional Model

In 1976, an open letter concerning software licensing was written and presented by Bill Gates. In this letter, he outlined the efforts by Paul Allen and himself to develop software for the hobbyist community, only to have this software copied and 'stolen' by users who were unwilling to pay for its use (Gates 1976, p.2). It was his and others' perception that the efforts of software developers should be remunerated financially that led to the concept of proprietary software as it is now known. In 1976, through US federal copyright legislation, Gates's opinion on this issue was authoritatively confirmed. Software programs which implement mathematical algorithms began to be officially protected by US copyright law in 1978, when the 1976 legislation came into effect (Franz 1985, p.147). Previously, it had been legislated that only material which could be viewed with the human eye would be afforded protection by copyright legislation (Franz 1985, p.148). This was insufficient for those who sought to profit from the emerging personal computing market, due to the large amount of time, labour, and finance required to successfully commercialise software (Franz et al. 1981, p.56). Proprietary software is usually provided without the source code (Simon 2005, p.231). Because of the desire to protect trade secrets and profits, software is given protections to promote innovation throughout parts of the world (Freedman 2005, p.631), including Australia, by enforcement of patent, copyright, and trademark legislation (Latimer 2009, p.139). Furthermore, the ownership of software is not passed on to the end-user as happens with physical property. Rather, software is usually licensed under the terms of an End-User License Agreement (EULA). This situation artificially makes software in some respects similar to, and in others different from, physical property.
Because EULAs regulate activities which might occur in conjunction with software, managers need to understand the importance of remaining in compliance, due to the potential for litigation and court-imposed sanctions. For example, in 2008 Psystar Corporation began selling hardware bundled with OS X (Feintzeig 2009), a UNIX-based operating system produced by Apple that includes some key FOSS components. Despite the presence of FOSS components (Apple Inc. 2007, p.2), the Apple EULA acts as a wrapper over the use of the software compilation as a whole, making it, in effect, a proprietary software compilation. The Apple EULA explicitly prohibits the running of software covered by this license on hardware other than that sanctioned by Apple (Apple Inc. 2007, p.1). Psystar, like others, suspected that this afforded Apple an obfuscated form of market dominance. Psystar, however, was found to be in breach of copyright (Bawaba 2009) and of the OS X EULA, faced litigation, and subsequently received court-imposed sanctions including fines exceeding $2 million (Burrows 2009). As stated above, the protections afforded to software, and in particular proprietary software, are designed by governments to promote innovation (McTaggart et al. 2007b, p.252), which in turn is expected to provide benefits to the consuming public through superior products and services (McTaggart et al. 2007b, p.282). It can also be deduced that this introduction of 'legal monopolies' through property rights is intended to create a market around the commercial development of software, which would be less likely to occur if these rights did not exist (McTaggart et al. 2007a, p.48). A problem with enforcing 'intellectual property' laws is that rather than preventing outright theft, these laws, it could be argued, serve to protect against the opportunity costs created by those who obtain bootlegged copies of the original software – i.e. lost sales, not stolen inventory, for the software vendor. Without broad compliance with these laws, it would not be inaccurate to suggest that software is inherently non-rival in the economic sense, i.e. one's use of a bootlegged copy does not significantly impede the sale of a legitimate copy to another. Unlike most companies which manufacture physical products, software vendors have significant research and development costs (Chiang 2010, p.104)



which must be recouped through the comparatively low costs of production and distribution offered by optical media and the Internet. Copyrights, patents, and trademarks support and protect this approach to innovation, development, and profit generation. However, if a software vendor enjoys market domination, as Microsoft does with its desktop operating system, MS Windows (Net Applications 2010; StatCounter 2010), the laws which were once designed to afford protection to its software may become a source of durable competitive advantage. This would not serve to benefit the general public, including corporate customers of the market-dominant software vendor, but rather would further the interests of shareholders in that company (Federspiel & Brincker 2010, p.40). Through the above discussion, the manager may now understand why software volume-licensing costs can be considerable. The increasing trend toward monopolies on technologies is not lost on the courts throughout Europe and the US. For example, Microsoft has recently faced pressure from the European Union (EU) not to abuse the market dominance of its operating system and web browser (Forelle & Wingfield 2009; Palmer & Tait 2009). Furthermore, in the "In re Bilski" case, the applicability of business-process-related and software patents was tightened (Hulse et al. 2010, p.12; McFarlane & Litts 2010, p.71). This precedent is not binding in Australian law, but does present a strong persuasive argument regarding the decreasing scope to which software patents may protect software-based inventions in the future. Therefore the manager needs to ask: if the traditional business model surrounding proprietary software is slowly eroding, what other models might emerge?

5.2 FOSS and Cloud Computing: The Emergent Models

As early as 1984, computer scientists such as Richard Stallman considered the proliferation of restrictions within the licenses of proprietary software to be contrary to the rights of software users (Wolf et al. 2009, p.279). As a computer scientist, Stallman expected to have access to the source code of programs so that efficient modification and subsequent distribution of improvements could be achieved. In proprietary software, this code is unlikely to be available to end-users, and its unauthorised use is usually prevented by copyrights, patents, or a combination of both. Furthermore, free software advocates like Stallman believe that software patents, proprietary formats, and proprietary standards can threaten the rights of end-users in ways which are neither ethically nor socially responsible (Brown 2010). Although not opposed to the commercialisation of software (Wolf et al. 2009, p.280), Stallman has stated through the Free Software Foundation, which he established, that four fundamental rights or freedoms should exist for all users of software: "The freedom to run the program, for any purpose; the freedom to study how the program works, and change it to make it do what you wish. Access to the source code is a precondition for this; the freedom to redistribute copies so you can help your neighbour; and the freedom to distribute copies of your modified versions to others" (Free Software Foundation, Inc. 2010b; Baschab & Piot 2007, p.171). It is apparent, all things remaining the same, that software licensed under such terms will have a diminishing marginal cost (though not a diminishing marginal benefit) at each instance of distribution. Also apparent is that the protections provided by the law are less applicable to free software, and therefore the methods of generating profits that apply to proprietary software are generally unavailable (Freedman 2005, p.631).
The four freedoms are captured within the various versions of the GNU General Public License (GNU GPL), which, like the terms of an EULA, are legally binding (Kumar 2006, p.35; Wacha 2005, p.492). A potential problem for commercial developers of software is that if part of their software is copied from, or linked to, GNU GPL-licensed source code, the entire body of their code becomes subject to the GNU GPL upon distribution (Norris & Kamp 2004, p.44; Ruffin & Ebert 2004, p.85). This restriction has led some to criticise the GNU GPL, and others to liken it to a 'cancer' which binds itself to commercial inventions (Scacchi 2007, p.6; Kuehnel 2008, p.115). Regardless of the vehement claims from both sides of the debate, FOSS is generally improving in functionality over time, which, with the advent of cloud-computing, should increase its relevance. While FOSS might be seen as an important complement to proprietary software, 'cloud-computing' is seen by some as a paradigm-shift away from both of these software distribution models. Large organisations have traditionally made use of data-centres to provide massive data processing and storage. It is expected that in future, with the adoption of cloud-computing, organisations will make increasing use of software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS), with virtualisation used to provide software and data-processing



capabilities to the organisation via a utility model (Sotomayor et al. 2009, p.14; Schneider 2009, p.19). These capabilities may offer financial benefits through reduced fixed costs. Additionally, cloud-computing may mitigate some of the burden placed upon the environment by reducing electricity use and decreasing electronic waste (Kavanagh 2009). Surprisingly, despite the sustainability advantages potentially available through cloud-computing technologies, the Free Software Foundation (2010a) has indicated that the term is confusing, even going so far as to quote Oracle Corporation's CEO, Larry Ellison, creating some doubt over the term's delimitation (Farber 2008). This aversion to the term is likely due to the potential threat that these parties perceive cloud-computing to create for their ideals and business models. Cloud-computing appears to challenge industry knowledge to such an extent that, in order to better understand the business issues surrounding each development model, it is necessary to identify what managers need to know.
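The financial effect of moving from owned infrastructure (a largely fixed cost) to a pay-per-use utility model can be illustrated with a simple comparison. The prices, workload hours, and time horizon below are hypothetical assumptions for illustration, not published cloud pricing or figures from the sources cited.

```python
# Illustrative comparison of an owned-infrastructure (fixed cost) model
# with a utility (pay-per-use, IaaS-style) model. All figures are
# hypothetical; real cloud pricing varies by provider and service.

def owned_cost(capex, annual_opex, years):
    """Total cost of owning hardware: purchase plus running costs."""
    return capex + annual_opex * years

def utility_cost(hourly_rate, hours_per_year, years):
    """Total cost of renting equivalent capacity on a utility model."""
    return hourly_rate * hours_per_year * years

YEARS = 3
owned = owned_cost(capex=30_000, annual_opex=6_000, years=YEARS)
# Assume the workload is needed only ~8 hours per business day (~2000 h/yr):
utility = utility_cost(hourly_rate=2.50, hours_per_year=2_000, years=YEARS)

print(f"owned: ${owned:,}, utility: ${utility:,.0f}")
print("utility cheaper" if utility < owned else "owned cheaper")
```

The comparison favours the utility model here only because the assumed workload is intermittent; a continuously loaded workload would shift the arithmetic, which is one reason the selection decision cannot be delegated wholesale to either the finance or the IT function.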

6. SOFTWARE-ADOPTION AND THE KNOWLEDGE GAP

Baschab and Piot (2007, p.172) argue that IT managers, including executives, have faced corporate resistance to the adoption of FOSS in the past, due in part to the differing values, beliefs, and assumptions between FOSS organisations and commercial enterprises. Another challenge has been that proprietary software has arguably had a superior appearance to that offered by FOSS (Baschab & Piot 2007, p.172). Given that earlier discussion has supported the view that not all decisions concerning software are made on the basis of technical merit, this result is not surprising. Adding to these impediments to adoption is the fact that managers may not be cognisant of the existence of FOSS (Baschab & Piot 2007, p.173). Furthermore, research conducted by Goode (2005, p.675) suggests that there may even be a conscious decision by managers to reject FOSS, particularly in Australia, due to the perceptions listed in Table 1.

Table 1. Reasons why FOSS use is not a priority in top Australian organisations (Goode 2005, p.675).

Rationale for Omission of FOSS in Organisation    Percentage
Lack of relevance                                 36.00%
Lack of support                                   20.00%
Minimal or No Requirement                         16.00%
Insufficient Resources                             8.00%
Committed to Microsoft                             8.00%
Not Commercial                                     8.00%
No time                                            4.00%

7. CONCLUSION

This paper has provided the reader with a critical examination of information that might appear to lie outside the realm of traditional management. However, in the twenty-first century, cultivating knowledge silos does not benefit organisations; it impedes them. Furthermore, with a dependence on automated systems, management more than ever requires a heterogeneous skill-set which borrows from other discrete disciplines. It is expected that the reader will now understand the fundamentals and implications of FOSS, proprietary software, and cloud-computing as they relate to organisations. There are significant business opportunities and challenges that arise when an organisation chooses to adopt any of these software development and distribution models. Although the use of proprietary software is widespread throughout the world and Australia, relatively scant empirical information regarding FOSS and cloud-computing selection processes in Australian organisations has been identified. This creates a potentially imbalanced understanding of software-selection at the academic level, and a basis for bias in the practice of software-selection. Therefore, given that there is potential for significant disruptive effects in social and environmental terms, future research to obtain this information ought to be of particular utility to IT and management academics, as well as to the organisations on which this research would be focused.

REFERENCES 1. Addy, R., 2007. Effective IT Service Management: To ITIL and Beyond!, New York: Springer Berlin Heidelberg. 2. Apple Inc., 2007. Software License Agreement for MAC OS X. Available at: http://images.apple.com/legal/sla/docs/macosx105.pdf [Accessed May 8, 2010]. 3. Baschab, J. & Piot, J., 2007. The Executive's Guide to Information Technology 2nd ed., New Jersey: John Wiley and Sons. 4. Bawaba, A., 2009. Apple Moves to Kill Second Psystar Lawsuit. Network World Middle East. Available at: http://proquest.umi.com.dbgw.lis.curtin.edu.au [Accessed August 5, 2010]. 5. Bray, H., 2009. Microsoft Files Suit Against GPS Device Maker. Boston Globe, B.7. 6. Brown, P.T., 2010. We Must Make Freedom our Goal. Available at: http://www.fsf.org/appeal/2009/freedom-is-thegoal [Accessed May 17, 2010]. 7. Burrows, P., 2009. Lessons from the Apple-Psystar Battle. Business Week (Online). Available at: http://proquest.umi.com.dbgw.lis.curtin.edu.au [Accessed August 5, 2010]. 8. Cannon, D. & Wheeldon, D., 2007. Service Operation, London: The Stationery Office. 9. Chiang, C., 2010. Product Diversification in Competitive R&D-Intensive Firms: An Empirical Study of the Computer Software Industry. Journal of Applied Business Research, 26(1), 99-108. 10. Farber, D., 2008. Oracle's Ellison Nails Cloud Computing. Available at: http://news.cnet.com/8301-13953_310052188-80.html [Accessed May 17, 2010]. 11. Federspiel, S. & Brincker, B., 2010. Software as Risk: Introduction of Open Standards in the Danish Public Sector. Information Society, 26(1), 38-47. 12. Feintzeig, R., 2009. Computer Maker sued by Apple files for Chapter 11 Protection. Available at: http://proquest.umi.com.dbgw.lis.curtin.edu.au [Accessed August 5, 2010]. 13. Foo, F., 2009. Kennards Hire offloads Linux Machines after costly Experiment. The Australian. 
Available at: http://www.theaustralian.com.au/australian-it/kennards-hire-offloads-linux-machines-after-costly-experiment/storye6frgakx-1225810376238 [Accessed May 14, 2010]. 14. Forelle, C. & Wingfield, N., 2009. Corporate News: EU Hits Microsoft With New Antitrust Charges --- Reviving Old Fight, Regulators Accuse Software Giant of Harming Competitors by 'Tying' Web Browser to Windows. Wall Street Journal. Available at: http://proquest.umi.com.dbgw.lis.curtin.edu.au [Accessed September 5, 2010]. 15. Franz, C.R., 1985. Re-examining the Proprietary Software Protection Issue in the U.S.A. for the 1980s. Information & Management, 8(3), 147-153. 16. Franz, C.R., Wilkins, S.J. & Bower, J.C., 1981. A Critical Review of Proprietary Software Protection. Information & Management, 4(2), 55-69. 17. Free Software Foundation, Inc., 2010a. Some Confusing or Loaded Words and Phrases that are Worth Avoiding. Available at: http://www.fsf.org/licensing/education/essays/index_html/words-to-avoid.html [Accessed May 17, 2010]. 18. Free Software Foundation, Inc., 2010b. The Free Software Definition. Available at: http://www.gnu.org/philosophy/free-sw.html [Accessed May 8, 2010]. 19. Freedman, W., 2005. Virtual Speech: At the Constitutional Crossroads. Santa Clara Computer and High - Technology Law Journal, 21(4), 629-643. 20. Gates, B., 1976. An Open Letter to Hobbyists. Homebrew Computer Club Newsletter, 2(1), 2. 21. Goode, S., 2005. Something for Nothing: Management Rejection of Open Source Software in Australia's Top Firms. Information & Management, 42(5), 669-681. 22. Gray, J., 2008. Linux and the Enterprise Desktop: Where are we Today? Linux Journal, 2008(171), 7. 23. Guo, J. & Zou, Y., 2008. A Business Process Explorer: Recovering Business Processes from Business Applications. In Reverse Engineering, Working Conference on. Los Alamitos, CA, USA: IEEE Computer Society, pp. 333-334. 24. Hauge, O., Osterlie, T., Sorensen, C. F. & Gerea, M. 2009. 
An Empirical Study on Selection of Open Source Software-Preliminary Results. In Proceedings of the 2009 ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development. pp. 42–47. 25. Hooft, M. & Swan, K., 2006. Ubiquitous Computing in Education: Invisible Technology, Visible Impact, Routledge. 26. Hulse, R., Sachs, R. & Patel, R., 2010. Patent Practice and In re Bilski. Computer and Internet Lawyer, 27(2), 7-12.

105

ISBN: 978-972-8939-31-1 © 2010 IADIS

IADIS International Conference on Internet Technologies & Society 2010

SEMANTIC WEB SERVICES: STATE OF THE ART

Markus Lanthaler 1,5, Michael Granitzer 2,3 and Christian Gütl 1,4

1 Institute for Information Systems and Computer Media, Graz University of Technology, Graz, Austria
2 Institute of Knowledge Management, Graz University of Technology, Graz, Austria
3 Know-Center GmbH, Graz, Austria
4 School of Information Systems, Curtin University of Technology, Perth, Australia
5 Digital Ecosystems and Business Intelligence Institute, Curtin University of Technology, Perth, Australia

ABSTRACT

Service-oriented architectures (SOA) built on Web services were a first attempt to streamline and automate business processes in order to increase productivity, but the utopian promise of uniform service interface standards, metadata, and universal service registries, in the form of the SOAP, WSDL, and UDDI standards, has proven elusive. Furthermore, the RPC-oriented model of those traditional Web services is not Web-friendly. Thus, more and more prominent Web service providers have opted to expose their services based on the REST architectural style. Nevertheless, there are still problems in formally describing, finding, and orchestrating RESTful services. While a number of different approaches already exist, none so far has managed to break out of its academic confines. This paper presents an extensive survey comparing the existing state-of-the-art technologies for semantically annotated Web services as a first step towards a proposal designed specifically for RESTful services.

KEYWORDS

Semantic Web; Web services; REST; SOA

1. INTRODUCTION

Service-oriented architectures (SOA) built on Web services were a first attempt to streamline and automate business processes in order to increase productivity. Thus, more and more organizations offer access to their information through Web services. But while most current and previous research efforts concentrate on traditional SOAP-based Web services, the utopian promise of uniform service interface standards, metadata, and universal service registries, in the form of the SOAP, WSDL, and UDDI standards, has proven elusive. Thus, the usage of SOAP-based services is mainly limited to the integration of legacy systems which have not been built to be Web-friendly. Instead of SOAP-based services with their high perceived complexity, prominent Web service providers such as Microsoft, Google, Yahoo, and others have opted to use lightweight REST-style APIs. REST is an architectural style developed specifically for the Internet that specifies constraints to enhance performance, scalability, and resource abstraction within distributed hypermedia systems (Fielding 2000), (Lanthaler and Guetl 2010). But despite the foreseeable potential, the increasing interest in, and the growing acceptance of lightweight services, there are still problems in formally describing, finding, and orchestrating RESTful services. Research on these issues has already started, but none of the approaches so far has managed to break out of its academic confines. The lack of a widely accepted standard to create semantic RESTful services for various application domains and scenarios motivated us to research aspects of a holistic service description format for REST-based services. This paper in particular focuses on an extensive survey comparing the existing state-of-the-art technologies for semantically annotated Web services as a first step towards a proposal specifically designed for RESTful services. The remainder of this paper is organized as follows.
First, in section 2, we highlight REST's main advantages. We then give an overview of different proposals for service interface description formats in section 3 and of proposals for the semantic annotation of (RESTful) Web services in section 4. Those approaches are contrasted in section 5 with more pragmatic and widely accepted domain-specific description formats. Finally, the concluding remarks are presented in section 6.


2. REPRESENTATIONAL STATE TRANSFER (REST)

Even though many successful distributed systems have been built on RPC and RPC-oriented technologies such as SOAP, it has been known for quite some time (Waldo et al. 1994) that this approach is flawed because it ignores the differences between local and remote computing. The major differences concern the areas of latency, memory access, partial failure, and concurrency, as described in detail in (Waldo et al. 1994). In Internet-scale systems, intermediaries for caching, filtering, monitoring, or logging are "must haves" to ensure good performance, scalability, and maintainability. Fielding (2000) made meticulously chosen trade-offs for REST which address exactly those issues and allow building extensible, manageable, maintainable, and loosely coupled distributed systems at Internet scale. In REST, caching, for example, is relatively straightforward: clients retrieve data by (conditional) GET requests and servers can specify the cache validity duration by HTTP Cache-Control headers. This clearly follows HTTP's semantics and does not break any intermediaries relying on those semantics. The fact that the whole Web, the largest and most successful distributed system, is built on the REST principles should be evidence enough of REST's superior scalability and interoperability. The REST architectural style (Fielding 2000) is based on a client-server architectural style where the communication is stateless, such that each request from client to server must contain all of the information necessary to understand the request. It cannot take advantage of any stored context on the server; the session state is kept entirely on the client. To mitigate the overhead caused by this statelessness, cache constraints, requiring that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable, have been added.
To further improve scalability, the layered system constraint, which restricts the knowledge of the system to a single layer, has been added. But the central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components. REST is defined by four interface constraints (Fielding 2000): 1) identification of resources, 2) manipulation of resources through representations, 3) self-descriptive messages, and 4) hypermedia as the engine of application state. Finally, there exists an optional constraint which allows client functionality to be extended by downloading and executing code in the form of applets or scripts on demand, to simplify clients and improve extensibility. Since it reduces visibility, it is only an optional constraint within REST. REST's "identification of resources" constraint, which specifies that every resource has to be addressable, makes REST a natural fit for the vision of the Semantic Web (Berners-Lee, Hendler and Lassila 2001) and creates a network of Linked Data (Bizer, Heath and Berners-Lee 2009); no parallel exists for SOAP's remote method invocation. It follows that REST-based Web services are an ideal carrier for semantic data and would even provide the additional benefit of resource resolvability in human-readable HTML.
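The caching behavior described above can be sketched in a few lines. The following is a minimal, illustrative server-side handler for a single resource, not part of any real framework; the 60-second cache lifetime and the resource body are arbitrary choices. It shows how an ETag validator plus a Cache-Control header let clients and intermediaries revalidate with a body-less 304 response.

```python
import hashlib

def handle_get(representation: bytes, request_headers: dict) -> tuple:
    """Answer a (conditional) GET for a single resource.

    Returns (status, headers, body). A client that echoes back the ETag
    it received earlier gets a body-less 304 Not Modified, so it (or any
    intermediary) can keep serving its cached copy.
    """
    etag = '"%s"' % hashlib.sha1(representation).hexdigest()
    headers = {
        "ETag": etag,
        # Declare the representation cacheable for 60 seconds
        # (an arbitrary value chosen for this sketch).
        "Cache-Control": "max-age=60",
    }
    if request_headers.get("If-None-Match") == etag:
        return 304, headers, b""   # Not Modified: reuse the cached copy
    return 200, headers, representation

# First request: full response, including the ETag validator.
status, headers, body = handle_get(b"<resource/>", {})
# Revalidation: the client sends the ETag back and gets a 304.
status2, _, body2 = handle_get(b"<resource/>", {"If-None-Match": headers["ETag"]})
```

Because both the validator and the freshness lifetime travel in standard HTTP headers, any cache between client and server can participate without knowing anything about the application.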

2.1 A Word on the HATEOAS Constraint

Unfortunately, the hypermedia as the engine of application state (HATEOAS) constraint, which refers to the use of hyperlinks in resource representations as a way of navigating the state machine of an application, is one of the least understood constraints and thus is seldom implemented correctly. Yet it is exactly this rule that allows building reliable, loosely coupled systems. In contrast, most of the time, SOAP-based systems rely heavily on implicit state and control flow: which messages are allowed, and how they have to be interpreted, depends on which messages have been exchanged before and thus on the implicit state the system is in. Third parties or intermediaries trying to interpret the conversation need the full state transition table and the initial state to understand the communication. This in turn implies that states and the transitions between them have to be identifiable, which demands (complex) technologies like the Web Services Business Process Execution Language (WS-BPEL). According to Fielding (2008), "a REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types. […] From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user's manipulation of those representations." The "human Web" is obviously based on this type of interaction and state-control flow, where very little is known a priori. Humans are able to quickly adapt to such new control flows (e.g. a change in the order sequence or a new login page to access a service). On the other hand, machine-to-machine communication is often based on static knowledge and tight coupling. The challenge is thus to bring some of the human Web's adaptivity to the Web of machines to allow the building of loosely coupled, reliable, and scalable systems.


After all, a Web service can be seen as a special Web page meant to be consumed by an autonomous program as opposed to a human being. The Web already supports machine-to-machine communication; what is not machine-processable about the current Web is not the protocol (HTTP), it is the content.
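The hypermedia-driven interaction described in this section can be sketched with a tiny in-memory stand-in for an HTTP service. Everything here is invented for illustration: the resource URIs, the link relation names ("orders", "latest"), and the dict-based representations. The point is only that the client hard-codes relation names, never URIs, so the server is free to restructure its namespace at any time.

```python
# Hypothetical service: each representation carries the links
# (relation name -> URI) that drive the client, as HATEOAS demands.
REPRESENTATIONS = {
    "/": {"links": {"orders": "/orders"}},
    "/orders": {"links": {"latest": "/orders/17"}, "count": 2},
    "/orders/17": {"links": {}, "status": "shipped"},
}

def get(uri):
    """Stand-in for an HTTP GET against the fake service above."""
    return REPRESENTATIONS[uri]

def follow(entry_uri, *rels):
    """Start at the entry URI and navigate purely via the
    server-provided link relations found in each representation."""
    representation = get(entry_uri)
    for rel in rels:
        representation = get(representation["links"][rel])
    return representation

# The client knows only the bookmark URI and the relation names.
order = follow("/", "orders", "latest")
```

If the server moved its orders to, say, "/o/17", only the link values in the representations would change; the client code above would keep working unmodified, which is precisely the loose coupling the constraint buys.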

3. SERVICE INTERFACE DESCRIPTION

As outlined in an earlier paper (Lanthaler and Guetl 2010), there has to be an agreement or contract on the used interfaces and data formats in order for two (or more) systems to communicate successfully. In the traditional Remote Procedure Call (RPC) model, where all differences between local and distributed computing are hidden, static contracts in the form of an Interface Description Language (IDL) are usually used to specify those interfaces. The data types that such an IDL offers are abstractions of the data types found in actual programming languages, to allow interoperability between different platforms. In SOAP this is usually done by using WSDL and XML Schema. That way, automatic code generation on both the client and the server side is possible. In contrast, REST's HATEOAS constraint is characterized by the use of contextual contracts where the set of valid actions varies over time. Additionally, the interface variability is almost eliminated due to REST's uniform interface. In consequence, REST-based services are almost exclusively described by human-readable documentation describing the URLs and the data expected as input and output. Even though it would be possible to describe REST services with WSDL 2.0, several other approaches have been proposed. Most of them, such as WRDL (Prescod 2002), NSDL (Walsh 2005), SMEX-D (Bray 2005), Resedel (Cowan 2005), RSWS (Salz 2003), and WDL (Orchard 2010), were more or less ad-hoc inventions designed to solve particular problems and have not been updated for many years. The most recent, respectively the only regularly updated, proposals are, to the best of our knowledge, hRESTS (HTML for RESTful Services) (Kopecký, Gomadam and Vitvar 2008) and WADL (Web Application Description Language) (Hadley 2009). Given REST's constraints, it is arguable whether REST even needs a service interface description.
In contrast to the aforementioned approaches, we argue that the description of the resource representations, i.e., the transport format, combined with the use of hypermedia, should be enough to achieve a high degree of automation for RESTful services. Nevertheless, for the sake of completeness, the following two sections provide a short overview of WADL and hRESTS.

3.1 WADL (Web Application Description Language)

WADL's approach (Hadley 2009) is closely related to WSDL in that it generates a monolithic XML file containing all the information about the service interface. Given that it was specifically designed for describing RESTful services, it models the resources provided by the service and the relationships between them, in contrast to WSDL's operation-based manner. In WADL, each service resource is described as a request containing the used HTTP method and the required inputs, as well as a response describing the expected service response representation and HTTP status code. The main critique of WADL is that it is complex and thus requires developers who have a certain level of training and tool support. This complexity contradicts the simplicity of RESTful services. In addition, WADL urges the use of specific resource hierarchies, which introduces an obvious coupling of client and server; servers should have complete freedom to control their own namespace. In consequence, WADL is currently not widely used.
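To make the resource-oriented modeling concrete, the following sketch generates a minimal WADL-style document with Python's standard library. The element names (application, resources, resource, method, request, response, param) follow the WADL draft, but the service base URL, resource path, and parameter are invented for illustration.

```python
import xml.etree.ElementTree as ET

WADL_NS = "http://wadl.dev.java.net/2009/02"   # namespace of the WADL draft
ET.register_namespace("", WADL_NS)

def q(tag):
    """Qualify a tag name with the WADL namespace."""
    return "{%s}%s" % (WADL_NS, tag)

# One hypothetical resource ("search") offering a single GET method
# that takes one query parameter and answers with a 200 response.
app = ET.Element(q("application"))
resources = ET.SubElement(app, q("resources"), base="http://example.com/")
resource = ET.SubElement(resources, q("resource"), path="search")
method = ET.SubElement(resource, q("method"), name="GET")
request = ET.SubElement(method, q("request"))
ET.SubElement(request, q("param"), name="q", style="query")
ET.SubElement(method, q("response"), status="200")

wadl = ET.tostring(app, encoding="unicode")
```

Even this toy example hints at the critique above: the nesting of resource elements bakes a particular URL hierarchy into the contract that clients may then come to depend on.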

3.2 hRESTS (HTML for RESTful Services)

hRESTS' (Kopecký, Gomadam and Vitvar 2008) idea is to enrich the mostly already existing human-readable documentation with so-called microformats (Khare and Çelik 2006) to make it machine-processable. While it offers a relatively straightforward solution to describe the resources and the supported methods, support for describing the used data schemas is somewhat lacking. Apart from a potential label, hRESTS does not provide any support for further machine-readable information about the inputs and outputs. Extensions like SA-REST (Sheth, Gomadam and Lathem 2007) and MicroWSMO (Kopecký and Vitvar 2008) address this issue. More information about those extensions can be found in sections 4.5 and 4.6.
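Since hRESTS piggybacks on existing HTML, a consumer is essentially an HTML scraper keyed on the microformat's class names. The sketch below is an assumption-laden illustration, not a complete hRESTS processor: the documentation page is invented, and only a few of the hRESTS class names (label, method, address) are handled.

```python
from html.parser import HTMLParser

# Invented documentation page annotated with hRESTS-style class names.
DOC = """
<div class="service">
  <p>The <span class="label">Search</span> service.</p>
  <div class="operation">
    Invoke via <span class="method">GET</span> on
    <code class="address">http://example.com/search?q={q}</code>
  </div>
</div>
"""

class HRestsExtractor(HTMLParser):
    """Collect the text content of elements carrying hRESTS classes."""
    CLASSES = {"label", "method", "address"}

    def __init__(self):
        super().__init__()
        self.found = {}
        self._current = None   # class name whose text we are awaiting

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        hits = self.CLASSES.intersection(classes)
        if hits:
            self._current = hits.pop()

    def handle_data(self, data):
        if self._current:
            self.found[self._current] = data.strip()
            self._current = None

parser = HRestsExtractor()
parser.feed(DOC)
```

Note how little is recoverable beyond method and address: exactly the gap regarding inputs and outputs that SA-REST and MicroWSMO set out to fill.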


4. SEMANTIC ANNOTATION OF (RESTFUL) SERVICES

Most of the time, the syntactic description of a service's interface is not enough. Indeed, two services can have the same syntactic definition but perform significantly different functions. Thus, the semantics of the data and the behavior of the service also have to be documented and understood. This is normally done in the form of a textual description which is, hopefully, easily understandable by a human being. Machines, on the other hand, have huge problems understanding such a document and cannot extract enough information to use such a service in a semantically correct way. To address this problem, the services have to be annotated semantically; the resulting service is called a Semantic Web Service (SWS) or a Semantic RESTful Service (SRS). Those supplemental semantic descriptions of the service's properties can in consequence lead to a higher level of automation for tasks like discovery, negotiation, composition, and invocation. Since most SWS technologies use ontologies as the underlying data model, they also provide means for tackling the interoperability problem at the semantic level and, more importantly, enable the integration of Web services within the Semantic Web.

4.1 OWL-S

OWL-S (Web Ontology Language for Web Services) ('OWL-S: Semantic Markup for Web Services' 2004) is an upper ontology based on the W3C standard ontology OWL, used to semantically annotate Web services. OWL-S consists of the following main upper ontologies: 1) the Service Profile for advertising and discovering services; 2) the Service (Process) Model, which gives a detailed description of a service's operation and describes the composition (choreography and orchestration) of one or more services; and 3) the Service Grounding, which provides the needed details about transport protocols to invoke the service (e.g. the binding between the logic-based service description and the service's WSDL description). Generally speaking, the Service Profile provides the information needed for an agent to discover a service, while the Service Model and Service Grounding, taken together, provide enough information for an agent to make use of a service once found ('OWL-S: Semantic Markup for Web Services' 2004). The main critique of OWL-S is the limited expressiveness of its service descriptions in practice. Since it practically corresponds to OWL-DL, it allows only the description of static and deterministic aspects; it covers no notion of time and change, nor of uncertainty. Besides that, in contrast to WSDL, an OWL-S process cannot contain any number of completely unrelated operations (Klusch 2008, Lara et al. 2004).

4.2 WSMO

Another approach to describe Web services semantically is the Web Service Modeling Ontology (WSMO) (Roman 2005). It defines a conceptual model and a formal language called WSML (Web Service Modeling Language), together with a reference implementation of an execution environment (WSMX; Web Service Execution Environment) for the dynamic discovery, selection, mediation, invocation, and interoperation of Semantic Web services based on the WSMO specification. WSMO offers four top-level notions to describe the different aspects of Web services: 1) Ontologies, which define the formalized domain knowledge; 2) Goals, which specify objectives that a client might have when consulting a Web service; 3) Service Descriptions for describing services that are requested by service requesters, provided by service providers, and agreed between service providers and requesters; and 4) Mediators for enabling interoperability and handling heterogeneity between all these components at the data level (mediation of data structures) and the process level (mediation between heterogeneous communication patterns) to allow loose coupling between services, goals, and ontologies. In contrast to most other description formalisms, WSMO propagates a goal-based approach for SWS. It is particularly designed to allow the search for Web services by formulating queries in terms of goals; the task of the system is then to automatically find and execute Web services which satisfy the client's goal. This goes beyond OWL-S' idea, whose principal aim is to describe the service's offers and needs. One of the main critiques of WSMO has been that its development has been done in isolation from the W3C standards. This raised serious concerns by the W3C, which were expressed in the official response to the WSMO submission in 2005 (Bournez 2005). To address those issues, a lightweight version called WSMO-Lite (Vitvar et al. 2008) has been created; see section 4.4 for details.
Another critique is that guidelines for developing mediators, which seem to be the essential contribution of WSMO in concrete terms, are missing.


4.3 SAWSDL

After a number of efforts (including the above-mentioned OWL-S ('OWL-S: Semantic Markup for Web Services' 2004) and WSMO (Roman 2005)), the semantic annotation of SOAP-based services is now preferably addressed by the W3C recommendation Semantic Annotations for WSDL and XML Schema (SAWSDL) (2007). SAWSDL defines how to add semantic annotations to various parts of a WSDL document such as inputs, outputs, interfaces, and operations. However, SAWSDL does not specify a language for representing the semantic models. Instead, it just defines how semantic annotation is accomplished using references to semantic models, e.g. ontologies, by providing three new extensibility attributes for WSDL and XML Schema elements. The modelReference extension attribute defines the association between a WSDL or XML Schema component and a concept in some semantic model. It is used to annotate XML Schema type definitions, element declarations, and attribute declarations as well as WSDL interfaces, operations, and faults. The other two extension attributes, named liftingSchemaMapping and loweringSchemaMapping, are added to XML Schema element declarations and type definitions for specifying mappings between semantic data and XML. SAWSDL allows multiple semantic annotations to be associated with WSDL elements. Schema mappings as well as model references can contain multiple pointers. Multiple schema mappings are interpreted as alternatives, whereas multiple model references all apply; SAWSDL does not specify any other relationship between them ('Semantic Annotations for WSDL and XML Schema (SAWSDL)' 2007). The major critique of SAWSDL is that it comes without any formal semantics. This hinders the logic-based discovery and composition of Web services described with SAWSDL and calls for "magic mediators outside the framework to resolve the semantic heterogeneities" (Klusch 2008).
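A modelReference annotation is just an attribute from the SAWSDL namespace attached to an existing schema component. The sketch below illustrates this with Python's standard library; the XML Schema and SAWSDL namespace URIs are the real W3C ones, but the annotated element and the ontology URI are purely illustrative.

```python
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
SAWSDL = "http://www.w3.org/ns/sawsdl"        # SAWSDL attribute namespace
ET.register_namespace("xs", XSD)
ET.register_namespace("sawsdl", SAWSDL)

schema = ET.Element("{%s}schema" % XSD)
# Annotate a (made-up) element declaration with a pointer into a
# semantic model; the ontology URI below is purely illustrative.
ET.SubElement(schema, "{%s}element" % XSD, {
    "name": "order",
    "{%s}modelReference" % SAWSDL: "http://example.org/ontology#Order",
})

annotated = ET.tostring(schema, encoding="unicode")
```

The annotation mechanism is deliberately this thin: SAWSDL says nothing about what the referenced concept means, which is exactly the "no formal semantics" critique quoted above.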

4.4 WSMO-Lite

As already mentioned in the previous section, SAWSDL does not specify a language for representing the semantic models but just defines how to add semantic annotations to various parts of a WSDL document. WSMO-Lite (Vitvar et al. 2008) has been created as a lightweight service ontology to fill the SAWSDL annotations with concrete service semantics and to allow bottom-up modeling of services. It adopts the WSMO model and makes its semantics lighter. The biggest differences are that WSMO-Lite treats mediators as infrastructure elements and specifications of user goals as dependent on the particular discovery mechanism used, while WSMO defines formal user goals and mediators. Furthermore, WSMO-Lite defines the behavioral semantics only implicitly. WSMO-Lite also does not exclusively use WSML, as WSMO does, but allows the use of any ontology language with an RDF syntax. WSMO-Lite describes the following four aspects of a Web service: 1) the Information Model, which defines the data model for input, output, and fault messages; 2) the Functional Semantics, which define the functionality the service offers; 3) the Behavioral Semantics, which define how a client has to talk to the service; and 4) the Non-functional Descriptions, which define non-functional properties such as quality of service or price. A major advantage of the WSMO-Lite approach is that it is not bound to a particular service description format such as WSDL. As a result, WSMO-Lite can be used to integrate approaches like hRESTS (in conjunction with MicroWSMO) with traditional WSDL-based service descriptions.

4.5 MicroWSMO

MicroWSMO is an attempt to adapt the SAWSDL approach for the semantic description of RESTful services. Like hRESTS, on which it relies, it uses microformats for adding semantic annotations to the HTML service documentation. Similar to SAWSDL, MicroWSMO has three types of annotations: 1) Model, which can be used on any hRESTS service property to point to appropriate semantic concepts; 2) Lifting; and 3) Lowering, which specify the mappings between semantic data and the underlying technical format such as XML. Therefore, MicroWSMO enables the semantic annotation of RESTful services basically in the same way in which SAWSDL supports the annotation of Web services described by WSDL. It is important to point out that, since both MicroWSMO and SAWSDL can apply WSMO-Lite service


semantics, REST-based services can be integrated with WSDL-based ones (Maleshkova, Kopecký and Pedrinaci 2009). Therefore, tasks such as discovery, composition, and mediation can be performed completely independently of the underlying Web service technology.

4.6 SA-REST

Another approach for the semantic description of RESTful services is SA-REST (Sheth, Gomadam and Lathem 2007). It relies on RDFa for marking service properties in an existing HTML service description, similar to hRESTS with MicroWSMO. In fact, it was the first approach to reuse the already existing HTML service documentation to create machine-processable descriptions of RESTful services. The main differences between the two approaches are indeed not the underlying principles but rather the implementation technique. SA-REST offers the following service elements: 1) Input and 2) Output to facilitate data mediation; 3) Lifting and 4) Lowering schemas to translate the data structures that represent the inputs and outputs to the data structure of the ontology, the grounding schema; 5) Action, which specifies the required HTTP method to invoke the service; 6) Operation, which defines what the service does; and 7) Fault to annotate errors.

5. DOMAIN SPECIFIC DESCRIPTIONS

All the above-mentioned approaches try to be as general as possible to allow the creation of suitable service annotations for a wide range of application domains. In contrast, there exist a number of service description formats, such as OpenSearch (Clinton 2005) and the Atom Publishing Protocol, which are tailored to very specific application domains. The semantics of those descriptions are implicit, i.e., all services described by such an approach are created for the same or very similar use cases. In this section, probably the two most successful domain-specific description formats, Atom and OpenSearch, are described.

5.1 Atom

The Atom suite consists of two related standards: the Atom Syndication Format and the Atom Publishing Protocol (also known as AtomPub or APP). The Atom Syndication Format is an XML-based format used by publishers to syndicate their content, providing users with frequently updated content in the form of so-called Web feeds or news feeds. The Atom Publishing Protocol is an application-level protocol for publishing, editing, and deleting Web resources. Atom is designed to be an extensible format, and so foreign markup (markup which is not part of the Atom vocabulary) is allowed almost anywhere in an Atom document. Both the Atom Syndication Format and the Atom Publishing Protocol are fully based on the REST architectural style and thus extremely Web-friendly. This, and the above-mentioned extensibility, led to the adoption of AtomPub for the implementation of various kinds of Web services. The most prominent examples might be the Google Data Protocol (GData) (2010) and Microsoft's Open Data Protocol (OData) (2010), which use Atom's extensibility to implement APIs for their services. It is thus arguable that Atom is one of the world's most successful RESTful Web service stories.
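A minimal Atom entry makes the format's simplicity tangible. The sketch below builds one with Python's standard library; the namespace is the official Atom one, and the id, title, and date values are illustrative placeholders (the title and urn:uuid follow the well-known example from the Atom specification).

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"   # Atom Syndication Format namespace
ET.register_namespace("", ATOM)

entry = ET.Element("{%s}entry" % ATOM)
# id, title and updated are the elements every atom:entry must carry;
# the concrete values here are illustrative placeholders.
ET.SubElement(entry, "{%s}id" % ATOM).text = (
    "urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a")
ET.SubElement(entry, "{%s}title" % ATOM).text = "Atom-Powered Robots Run Amok"
ET.SubElement(entry, "{%s}updated" % ATOM).text = "2010-11-29T12:00:00Z"
# Foreign markup (e.g. elements from another namespace) could be added
# here, which is exactly the extension point GData and OData exploit.

atom_xml = ET.tostring(entry, encoding="unicode")
```

Because every entry is itself an addressable, representable resource, AtomPub's create/edit/delete operations map directly onto POST, PUT, and DELETE, which is why the suite fits REST so naturally.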

5.2 OpenSearch

OpenSearch (Clinton 2005) was developed by A9, an Amazon.com subsidiary, and was first unveiled in 2005. It is a collection of simple formats that allow the description of search engine interfaces and the publishing of search results in a format suitable for syndication and aggregation. OpenSearch thus allows search clients, such as Web browsers, to invoke search queries and process the responses. By now, all major Web browsers support OpenSearch and use it to add new search engines to the browser's search bar. This way, the user can invoke a query directly from the browser without having to load the search engine's homepage first. OpenSearch consists of the following four formats: 1) the description document, 2) the URL template syntax, 3) the response elements, and 4) the Query element. The OpenSearch description document describes


the Web interface of a search engine in the form of a simple XML document. It may also contain some metadata such as the name of the search engine and its developer. The URL template syntax represents a parameterized form of the URL by which a search engine is queried. Simply speaking, it describes the GET parameters used to invoke a query. An example of such a template looks as follows: http://example.com/search?q={searchTerms}. All parameters are enclosed in curly braces and are by default considered to be part of the OpenSearch template namespace. By using the XML namespace prefix conventions it is possible to add new parameter names, which brings extensibility. The OpenSearch response elements are used by search engines to augment existing XML formats such as Atom and RSS with search-related metadata. Finally, the OpenSearch Query element can be used to define specific search requests that can be performed by a search client. The Query element attributes correspond to the search parameters in a URL template. One use case is, e.g., the definition of related queries in a search result element.
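Expanding such a URL template is simple enough to sketch in full. The following illustrative helper substitutes each {name} placeholder with the URL-encoded parameter value; the handling of a trailing "?" (which OpenSearch uses to mark optional parameters) is included, and the example query itself is invented.

```python
import re
from urllib.parse import quote_plus

def expand(template: str, params: dict) -> str:
    """Expand an OpenSearch-style URL template: each {name} placeholder
    is replaced with the URL-encoded value supplied for that parameter;
    missing optional parameters become the empty string."""
    def substitute(match):
        name = match.group(1).rstrip("?")  # trailing '?' marks optional params
        return quote_plus(params.get(name, ""))
    return re.sub(r"\{([^}]+)\}", substitute, template)

# The template from the text above, expanded for a concrete query.
url = expand("http://example.com/search?q={searchTerms}",
             {"searchTerms": "semantic web services"})
```

A browser's search bar does essentially this: it reads the template from the description document, plugs in the user's terms, and issues a plain GET, with no service-specific code involved.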

6. CONCLUSIONS AND FUTURE WORK

The attempt to standardize Web services has taken years, but there are still no clear definitions of what constitutes a service at a conceptual level. While a number of different approaches, such as OWL-S, WSMO, WSMO-Lite, MicroWSMO, SAWSDL, and SA-REST as described in the previous sections, have been proposed, none so far has managed to break out of its academic confines. Looking at REST-based services, the situation is even worse; there does not even exist a widely accepted standard for the (syntactic) description of a service's interface, let alone a machine-readable description of the resource representations. We argue that the lack of acceptance of those approaches stems from the fact that they do not provide any imminent incentive and thus experience a classic chicken-and-egg problem: no services are semantically described because there are no applications making use of that information, and no applications are developed because there are no semantically annotated services. Facebook's recently introduced Open Graph Protocol (2010) clearly shows the willingness of Web site publishers to semantically annotate their content if it is easy enough and if there is an imminent incentive to do so. More than 50,000 sites implemented the protocol within the first week of its publication (Huang 2010). Another factor in the lack of acceptance is the high complexity of most of the approaches. Often their functioning resembles the flawed RPC model (remote procedure call), which is a problem especially with regard to RESTful services, whose approach is fundamentally different. To solve those and other problems, we are currently working on an approach which tries to create the machine counterpart of the human Web (as described in section 2.1) by combining different proven technologies. By basing the approach on well-known technologies, we hope to lower the barrier for developers to develop scalable semantic Web services.
The most challenging part is how to communicate the domain knowledge without overburdening developers. We aim to develop a prototype and to investigate how well the approach supports the modeling of complex Web services. We also aim to develop and improve methods that enable the discovery of services with minimal complexity.

ACKNOWLEDGEMENT

The Know-Center GmbH Graz is funded within the Austrian COMET Program (Competence Centers for Excellent Technologies) under the auspices of the Austrian Ministry of Transport, Innovation and Technology, the Austrian Ministry of Economics and Labor, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.


ISBN: 978-972-8939-31-1 © 2010 IADIS

REFERENCES

Berners-Lee, T., Hendler, J. and Lassila, O. 2001, "The Semantic Web". Scientific American, vol. 284, no. 5, pp. 34-43.
Bizer, C., Heath, T. and Berners-Lee, T. 2009, "Linked Data - The Story So Far". International Journal on Semantic Web and Information Systems (IJSWIS), vol. 5, no. 3, pp. 1-22.
Bournez, C. 2005, "Team Comment on Web Service Modeling Ontology (WSMO) Submission". W3C Submissions, viewed 23 June 2010.
Bray, T. 2005, "SMEX-D (Simple Message Exchange Descriptor)".
Clinton, D. 2005, OpenSearch 1.1 Draft 4.
Cowan, J. 2005, "Resedel".
Fielding, R. T. 2000, "Architectural Styles and the Design of Network-based Software Architectures", PhD thesis, Department of Information and Computer Science, University of California, Irvine, USA.
Fielding, R. T. 2008, "REST APIs must be hypertext-driven". Untangled - musings of Roy T. Fielding.
Google Data Protocol, 2010, viewed 5 July 2010.
Hadley, M. J. 2009, "Web Application Description Language (WADL)".
Huang, S. L. 2010, "After f8 - Resources for Building the Personalized Web". Facebook Developer Blog, viewed 7 July 2010.
Khare, R. and Çelik, T. 2006, Microformats: A Pragmatic Path to the Semantic Web, CommerceNet Labs, Palo Alto, CA, USA, Tech. Rep. 06-01, <http://wiki.commerce.net/images/e/ea/CN-TR-06-01.pdf>.
Klusch, M. 2008, "Semantic Web Service Description". In M. Schumacher, H. Schuldt and H. Helin (eds), CASCOM: Intelligent Service Coordination in the Semantic Web, pp. 31-57. Birkhäuser, Basel.
Kopecký, J. and Vitvar, T. 2008, "D38v0.1 MicroWSMO: Semantic Description of RESTful Services".
Kopecký, J., Gomadam, K. and Vitvar, T. 2008, "hRESTS: an HTML Microformat for Describing RESTful Web Services". Proc. 2008 IEEE/WIC/ACM Int. Conf. on Web Intelligence and Intelligent Agent Technology, vol. 1, pp. 619-625.
Lanthaler, M. and Guetl, C. 2010, "Towards a RESTful Service Ecosystem - Perspectives and Challenges". Proceedings of the 2010 4th IEEE International Conference on Digital Ecosystems and Technologies (DEST), Dubai, UAE.
Lara, R., Roman, D., Polleres, A. and Fensel, D. 2004, "A Conceptual Comparison of WSMO and OWL-S". European Conference on Web Services (ECOWS 2004), Erfurt, Germany, pp. 254-269.
Maleshkova, M., Kopecký, J. and Pedrinaci, C. 2009, "Adapting SAWSDL for Semantic Annotations of RESTful Services". LNCS 5872, pp. 917-926.
Open Data Protocol, 2010, viewed 5 July 2010.
Open Graph Protocol, 2010, viewed 7 July 2010.
Orchard, D. 2010, "Web Description Language (WDL)", viewed 7 January 2010.
OWL-S: Semantic Markup for Web Services, 2004, W3C Member Submission.
Prescod, P. 2002, "Web Resource Description Language ('Word-dul')".
Roman, D., Keller, U., Lausen, H. and de Bruijn, J. 2005, "Web Service Modeling Ontology". Applied Ontology, vol. 1, no. 1, pp. 77-106.
Salz, R. 2003, "Really Simple Web Service Descriptions".
Semantic Annotations for WSDL and XML Schema (SAWSDL), 2007, W3C Recommendation.
Sheth, A. P., Gomadam, K. and Lathem, J. 2007, "SA-REST: Semantically Interoperable and Easier-to-Use Services and Mashups". IEEE Internet Computing, vol. 11, no. 6, pp. 84-87.
Vitvar, T., Kopecký, J., Viskova, J. and Fensel, D. 2008, "WSMO-Lite Annotations for Web Services". Proceedings of the 5th European Semantic Web Conference (ESWC 2008), Tenerife, Spain, LNCS 5021, pp. 674-689.
Waldo, J., Wyant, G., Wollrath, A. and Kendall, S. 1994, A Note on Distributed Computing, Tech. Rep. SMLI TR-94-29, Sun Microsystems Laboratories, Mountain View, CA, USA.
Walsh, N. 2005, "WITW: NSDL".


IADIS International Conference on Internet Technologies & Society 2010

VIRTUAL GOODS REPURCHASE INTENTION IN SOCIAL NETWORKING SITES

Echo Huang
National Kaohsiung First University of Science and Technology, 2 Juoyue Rd., Nantz District, Kaohsiung, Taiwan, ROC 811

ABSTRACT

A conceptual model based on Morgan and Hunt's (1994) Commitment-Trust theory is proposed to examine the predictors of repurchase intention toward virtual goods (VGs) in social networking sites (SNSs). A survey was conducted on Facebook, and 176 usable responses were collected. Partial Least Squares (PLS) regression was chosen to test the conceptual model and the corresponding hypotheses. The findings show that relationship commitment, trust, and relationship satisfaction significantly influence Facebook users' repurchase intention. Four external variables (perceived value, social identity, SNS interactivity, and flow of SNSs) have positive impacts on relationship satisfaction. Implications are proposed in the final section.

KEYWORDS

Commitment, satisfaction, trust, repurchase intention, virtual goods, social networking sites

1. INTRODUCTION

Maintaining a relationship with an existing customer is much cheaper than acquiring a new one (Reichheld, 1993); indeed, the cost of acquiring a new customer is about five times that of retaining an existing one (Kotler, 1994). Customer retention and repurchase behavior are therefore very important for businesses, because long-term value can best be captured through loyal customers (Reichheld, 1996; Reichheld & Schefter, 2000). Bamberg et al. (2003) argued that repurchase behavior can be predicted from past experiences only when there are no meaningful changes in situational and environmental circumstances. This paper addresses the question of what leads consumers to purchase virtual goods. Churn and the percentage of users who monetize are among the most important statistics for measuring the financial health of social games. Social games rely on an 80/20 model, in which 80% of users play for free while the 20% of more active users pay for the service. It is therefore important to understand the repurchase intention of that paying 20%. Previous studies on the topic mostly focus on the consumer, examining the motivations and decision processes that lead individuals to purchase virtual goods (Guo & Barnes, 2007; Lehdonvirta, 2005; Nojima, 2007; Lehdonvirta et al., 2009). Much research has investigated the use of virtual goods in online games (Guo & Barnes, 2007); much less has studied virtual good (VG) use in social networking sites (SNSs). From a practical perspective, it is arguable that VG use in SNSs is more important than use in online games, because the customer base in SNSs is growing dramatically. From a research perspective, two streams of literature have addressed online purchasing behaviors: commitment research and satisfaction research. Satisfaction research posits that a satisfied customer is more likely to stay with an online website (Huang, 2008).
The commitment literature addresses the relationship between an individual's prior use and future use of an online website (Huang, 2007; Huang, 2008). However, the effect of prior use on future use appears to be more complex than previously thought and is associated with additional factors, such as social identity, interactivity, perceived value, and flow. Future behaviors can be predicted from past experiences only when there are no meaningful changes in situational and environmental circumstances. Although research findings about loyalty and customer retention have accumulated, many important questions about VG repurchase remain. The social networking site context is one in which individual use is socially tied to friends, so switching from one social networking site to another means losing connections with friends. In comparison with a general website, switching from one social networking site to another is therefore relatively difficult and involves high costs. Users' commitment to continued use is thus evident even when they are not satisfied, but satisfaction is critical for vendors who want users to repurchase VGs. However, the role of users' commitment and satisfaction in continued VG use remains largely unaddressed in the social networking literature. Therefore, the purpose of this study is to explain the factors that influence virtual goods use and repurchase intention by extending the Commitment-Trust model with external variables.

2. VIRTUAL GOOD SALES IN SNSS

The marketing literature emphasizes that marketing is about identifying and meeting human and social needs (Kotler and Keller, 2006). Marketing results in customer behavior, namely the willingness to buy products or services (Drucker, 1993); marketing is an activity that creates needs. This view is particularly pertinent in the context of SNSs, where operators create the events that determine to a large extent the activities and specific needs of the users. When a social connection is mediated through a virtual channel, its rules and social-based economy can be regarded as marketing activities concerned with creating the underlying needs and conditions for customers to become incentivized to buy virtual goods. Value creation through virtual goods is thus somewhat similar to the physical world, as the value of the goods has first to be created by designing the context for the goods. SNSs tend to be 'free-to-join' sites, so the subscription model is not suitable for operators; but if other metrics are taken into consideration, such as registered users, active users, conversion rates, and costs, the situation may be a little different. 'Free-to-join' services have the potential to attract much larger audiences. Although the market for virtual goods is still in its infancy, virtual goods are responsible for billions in revenues for social games and social networks. Several social networks have turned to virtual goods, including Facebook, hi5 and myYearbook. A successful implementation of virtual goods and other forms of alternative revenue generation comes from a second-tier network, myYearbook. Between its new subscription-based VIP Club, virtual goods, and a CPA platform that ties in with its virtual currency program, myYearbook reported an increase in revenue of 120% from 2008 to 2009.
The VIP Club is the umbrella of the social network's virtual goods trifecta, giving users the ability to pay a subscription fee, buy points directly, or earn virtual currency by participating in branded advertising campaigns. Zynga, the developer of popular social games such as FarmVille, Mafia Wars, and FishVille, raked in $200 million in revenues from the sale of virtual goods within its games. Friendster says selling virtual gifts, goods, and games is a proven revenue model in Asia, where 90% of its audience lives. Friendster and Facebook are getting their virtual shops in order. Friendster's Wallet actually has two kinds of currency: Friendster Coins with real money value, and Friendster Chips without. Both exist to help customers get their feet wet with the payment system, to provide incentives for special marketing promotions, to serve as a 'loyalty program', and to convince users that, in the Friendster economy, they are not quite as broke as they might be in real life. The marketing and brand-building campaigns by Coca-Cola and H&M highlight the ever-growing importance of social media. They further show that large organizations are now increasingly shifting their ad budgets towards social media. Campaigns like these further demonstrate the increasing dominance of virtual goods over other methods of reward during campaigns on social media sites (Hameed, 2010). Understanding how to create and maintain demand for virtual goods is therefore an increasingly pertinent question. How does a service entice users into virtual good spending? How can sales be sustained over time without saturating the demand?
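The dual-currency design attributed to Friendster above can be sketched as a simple wallet data structure. Everything here (the class name and the chips-first spending rule) is a hypothetical illustration, not Friendster's actual implementation:

```python
class Wallet:
    """Dual-currency wallet: coins are bought with real money,
    chips are a free promotional/loyalty currency."""

    def __init__(self):
        self.coins = 0   # purchased with real money
        self.chips = 0   # granted for promotions or loyalty

    def top_up_coins(self, amount):
        self.coins += amount

    def grant_chips(self, amount):
        self.chips += amount

    def pay(self, price, prefer_chips=True):
        """Spend chips first (if allowed), then coins; True on success."""
        use_chips = min(self.chips, price) if prefer_chips else 0
        if self.coins < price - use_chips:
            return False  # insufficient funds, nothing is deducted
        self.chips -= use_chips
        self.coins -= price - use_chips
        return True
```

Spending the free currency first is one plausible way to make users "not quite as broke" while preserving the cash value of coins; a real system would also track transaction history and currency exchange restrictions.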

3. LITERATURE REVIEW

3.1 Commitment-Trust Model

Morgan and Hunt (1994) proposed a model that investigates the joint role of commitment and trust in relationship exchanges between a focal business company and its partners. Commitment and trust are two mediators between a group of antecedent variables and outcomes. Trust exists 'when one party has confidence in an exchange partner's reliability and integrity' (Morgan and Hunt, 1994) and has a positive impact on commitment. The degree of trust that develops between companies has been described as a 'fundamental relationship building block' (Wilson, 1994) and a 'critical element of economic exchange' (Ring, 1996). The more the customer trusts the supplier, the higher the customer's perceived value of the relationship (Walter et al., 2002); consequently, one can expect that the greater the trust, the greater the chances that the customer remains in the relationship. For the user of SNS services, trust is likewise an important element of the perceived quality of the service. Trust has been conceptualized as the assurance that relationship collaborators have developed reliability and integrity between them (Morgan and Hunt, 1994) and as a belief that the other company will only perform actions that will result in positive outcomes (Anderson and Narus, 1990). Trust is the result of a gradual deepening of the relationship through a process of mutual, although not necessarily symmetrical, adaptation to the needs of the other party (Hogberg, 2002). Two types of trust, behavioral and cognitive, are reported in previous studies (Ring, 1996; McAllister, 1995; Lewicki and Bunker, 1995). Commitment is the desire for continuity manifested by the willingness to invest resources into a relationship. Commitment has been conceptualized as developed cooperative sentiments (Childers and Ruekert, 1986), strong preference for existing partners (Teas and Sibley, 1980), and the propensity for relational continuity (Anderson and Weitz, 1989). Morgan and Hunt (1994) define commitment as the belief of an exchange partner that the ongoing relationship with another is so important as to deserve maximum efforts at maintaining it indefinitely.

3.2 The Research Model and Hypotheses

Based on the theories presented in the previous section, a research model is developed as shown in Figure 1. The dependent variable is an individual's intention to repurchase virtual goods in a social networking site. A positive relationship between commitment and repurchase is predicted. Commitment to a social networking site is generally conceptualized as affective commitment or a psychological attachment (Gundlach et al., 1995; Bennett et al., 2000). Morgan and Hunt (1994) posited that relationship commitment is a desire to maintain a valued relationship. This attachment to the social networking site translates into a behavioral intention to continue the relationship in the future (Gundlach et al., 1995). Therefore, if users wish to continue a relationship with virtual friends met in a social networking site, they will also need to continue purchasing the products and services sold by the social networking site. Trust is a component of social exchange theory (Morgan and Hunt, 1994) and is identified in the service marketing literature as important in creating successful exchanges (Berry and Parasuraman, 1991). Given the intangible nature of a virtual good, which is consumed as it is purchased, it can be argued that a high degree of trust in the virtual goods and the social networking site is required to encourage purchase and repeated purchase. Morgan and Hunt (1994) and Huang (2008) indicate that repurchase is significantly correlated with commitment and trust. Hence:
H1: Commitment is positively related to repurchase of virtual goods in a social networking site.
H2a: Trust is positively related to repurchase of virtual goods in a social networking site.
A positive relationship between trust and commitment is also predicted.
Moorman et al. (1992) posited that trust is a determinant of relationship quality in that the levels of honesty, believability, and integrity influence how the relationship with the service provider is perceived. The perceived quality of the relationship in turn influences the level of commitment extended towards the service provider. Hence, high levels of trust are likely to lead to high levels of commitment to the relationship (Moorman et al., 1992; Morgan and Hunt, 1994; Huang, 2009).
H2b: Higher levels of trust in a social networking site positively influence levels of commitment to the social networking site.


Figure 1. The research model. (Paths: perceived value H4, social identity H5, SNS interactivity H6, and flow of SNS H7 lead to relationship satisfaction; relationship satisfaction leads to repurchase intention H3a, commitment H3b, and trust H3c; trust leads to repurchase intention H2a and commitment H2b; commitment leads to repurchase intention H1.)
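For reference, the hypothesized paths of the research model can be written down explicitly. The following sketch simply transcribes hypotheses H1-H7 into a small lookup structure; the construct labels are our own shorthand:

```python
# Hypothesized paths of the research model, encoded as
# hypothesis -> (predictor construct, outcome construct).
PATHS = {
    "H1":  ("commitment", "repurchase intention"),
    "H2a": ("trust", "repurchase intention"),
    "H2b": ("trust", "commitment"),
    "H3a": ("relationship satisfaction", "repurchase intention"),
    "H3b": ("relationship satisfaction", "commitment"),
    "H3c": ("relationship satisfaction", "trust"),
    "H4":  ("perceived value", "relationship satisfaction"),
    "H5":  ("social identity", "relationship satisfaction"),
    "H6":  ("SNS interactivity", "relationship satisfaction"),
    "H7":  ("flow of SNS", "relationship satisfaction"),
}

def predictors_of(outcome):
    """List the constructs hypothesized to influence a given outcome."""
    return sorted(src for src, dst in PATHS.values() if dst == outcome)

repurchase_drivers = predictors_of("repurchase intention")
```

Such a structure makes the model's shape explicit: repurchase intention has three direct antecedents, while relationship satisfaction collects the four external variables.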

Expectation Confirmation Theory (ECT) has used satisfaction to explain repurchase intention (Bhattacherjee, 2001; Bolton et al., 2000), and subsequent researchers have employed ECT to explain online customers' continuance intention (Bhattacherjee, 2001; Bougie et al., 2003; Chea and Luo, 2006). Empirical evidence supports the ECT hypothesis that satisfaction is a major determinant of repurchase intention (Bhattacherjee, 2001; Huang, 2008). On the basis of relational theory, it is widely agreed in the literature that trust and commitment are key variables influencing customer loyalty in successful relational exchanges. However, there is a gap in the literature with respect to the possibility that trust and commitment reflect the level of relationship satisfaction rather than driving it (Caceres and Paparoidamis, 2005). The following hypotheses are therefore proposed:
H3a: Satisfaction is positively related to repurchase of virtual goods in a social networking site.
H3b: Higher levels of satisfaction with a social networking site positively influence levels of commitment.
H3c: Higher levels of satisfaction with a social networking site positively influence levels of trust.
Perceived value refers to the benefits a user perceives, in terms of functional, social, and affective benefits, acquired from suppliers (Lai, 2004; Lin and Vivek, 2005). Anderson et al. (1994) found that customer satisfaction is determined by perceived product quality and product value. Walter et al. (2001) and de Ruyter et al. (2001) found that relationship satisfaction between suppliers and customers is significantly influenced by perceived functionality and value. Previous studies of relationship marketing found that perceived value has a positive influence on relationship satisfaction (Oh, 2000; Cronin et al., 2000; Sajeev and Colgate, 2001).
Lehdonvirta (2009) argued that functional and utilitarian attributes enhance game players' satisfaction with virtual goods. Thus:
H4: Perceived value has a positive influence on satisfaction.
Social identity refers to a user's self-esteem and commitment to groups (Dholakia et al., 2003; Yujong, 2008; Kwon and Wen, 2009). Wenger (1998) investigated online game players' behaviors; the results show that social identity has a positive influence on players' relationship satisfaction with games and other players. Brunetto and Farr-Wharton (2002) argued that customer satisfaction is influenced by social identity, and Davis (2006) reported the same relationship between customer satisfaction and social identity. Hence:
H5: Social identity has a positive influence on satisfaction.
Interactivity refers to a user's perceived responsiveness, perceived control, and perceived connection when using a web-based platform (Lee, 2005; Johnson, 2006). Kalakota and Whinston (1996) argued that creating a solid relationship with customers requires the help of web-based communication and interactive technologies. Hoffman et al. (1995) confirmed that customers who engage in communication with vendors enhance their power of control over information collection. Upshaw (1995) found that interactivity enhances brand awareness and positively influences loyalty. Thus:
H6: Web interactivity has a positive influence on satisfaction.
Flow of SNSs refers to the degree of pleasure and focus a user experiences when using social networking sites (Moon and Kim, 2001; Lu et al., 2008). In online communities that provide services and transactions, flow has positive influences on customer purchase behaviors and satisfaction (Szymanski and Hise, 2000; Hsu and Lu, 2004; Chan et al., 2004; Koh and Kim, 2004). Koh and Kim (2004) found that the state of flow influences attitudes toward game playing and further influences the intention to participate in an online game. Verma and Hu's (2008) study on online stock trading found that flow influences satisfaction and the intentions to pay and to revisit. O'Cass and Carlson (2010), investigating users of online sports websites, found that flow influences their satisfaction and loyalty toward the website. Koufaris (2002) argued that online consumers' revisit intention is influenced by flow. Wu and Chang (2005) investigated member behaviors in online travel communities; the results show that interactivity and flow positively influence member behaviors. Thus:
H7: Flow of a social networking site has a positive influence on satisfaction.

4. RESEARCH METHODOLOGY

In this section, we use participants from Facebook.com as the subjects of the proposed research model to explore and explain the impact of commitment, trust, and satisfaction on repurchase behavior.

4.1 Sample and Data Collection

The research model was tested with empirical data collected from members of Facebook.com. We invited both light and heavy users to participate in this study. A banner with a hyperlink to our web survey was posted on the homepage of Facebook in June 2010. A number of respondents were randomly selected to receive incentive payments of $15 (NT$500), in order to increase participation and response quality. Respondents who committed to participate were asked to complete a self-administered questionnaire before the deadline. Of the 258 surveys received, 176 were fully completed and usable for the purposes of this study. The respondents were a diverse sample: 52.3% were female and 47.7% male. Their ages ranged from 18 to over 40 years, with 64.8% between 23 and 35, 18.8% between 36 and 45, and 14.8% between 18 and 25. More than 77.2% had a college degree. Their usage of virtual goods ranged from 0 to over 90: 34.1% reported having bought VGs, 35.8% had exchanged VGs, 54.5% had used VGs, and 54.5% had bought VGs as gifts since 2007. The respondents reported spending about $20 per month on VGs.

4.2 Construct Measurement

Measurement items were adapted from the literature wherever possible; new items were developed based on definitions provided by the literature. Items for perceived value were adapted from Lai (2004) and Lin and Vivek (2005); perceived value is defined as the benefits a user perceives, in terms of functional, social, and affective benefits, from using virtual goods in a social networking site. Items for social identity were adapted from Dholakia et al. (2003), Yujong (2008), and Kwon and Wen (2009); social identity is defined as a user's self-esteem and commitment to groups in a social networking site. Web interactivity items were adapted from Lee (2005) and Johnson (2006); SNS interactivity is defined as a user's perceived responsiveness, perceived control, and perceived connection when using a social networking site. Items for SNS flow were adapted from Moon and Kim (2001) and Lu et al. (2008); SNS flow is defined as the degree of pleasure and focus a user experiences when using virtual goods of a social networking site. Items for commitment and trust were adapted from Morgan and Hunt (1994) and Casalo et al. (2007); commitment is an enduring desire to maintain a valued relationship with friends met in a social networking site, and trust is confidence in the reliability and integrity of friends in a social networking site. Relationship satisfaction and repurchase items were adapted from Premkumar and Bhattacherjee (2008), Wu and Wang (2005), and Massad et al. (2006); satisfaction is defined as relational satisfaction with a social networking site, and repurchase is defined as the willingness to repeatedly purchase virtual goods in a social networking site. The attributes were then summarized to create a survey instrument, which asked respondents to indicate the extent to which they agree or disagree with each statement with respect to their experience with virtual goods in Facebook. Each item was rated on a scale of 1 to 5, where 1 equals "disagree" and 5 equals "agree." According to Zmud and Boynton (1991), refining the instrument through pre-testing and pilot testing with typical respondents satisfies face validity criteria; hence, pretests were conducted to increase face validity and to ensure the instrument is acceptably valid. The instrument was first evaluated for content validity by two EC scholars, and then further tested for reliability, item consistency, ease of understanding, and question sequence appropriateness. Thirty Facebook users were invited to complete the questionnaire. Comments on question sequence, wording choice, and measures were solicited, leading to minor modifications of the questionnaire. Based on feedback from pretest subjects, several items were removed from our instrument.
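As an illustration of how such 5-point Likert item responses are typically aggregated into construct scores, here is a minimal sketch; the item keys and the simple averaging rule are our own assumptions, not taken from the actual instrument:

```python
# Hypothetical item keys per construct; each response rates an item 1-5.
CONSTRUCT_ITEMS = {
    "trust": ["tr1", "tr2", "tr3"],
    "commitment": ["cm1", "cm2", "cm3"],
}

def construct_scores(response):
    """Average the 1-5 ratings of each construct's items.

    response: dict mapping item key -> rating on the 1-5 scale.
    """
    return {
        construct: sum(response[item] for item in items) / len(items)
        for construct, items in CONSTRUCT_ITEMS.items()
    }

scores = construct_scores(
    {"tr1": 4, "tr2": 5, "tr3": 4, "cm1": 3, "cm2": 4, "cm3": 3}
)
```

In PLS analyses such as the one reported below, the indicators are usually kept separate rather than pre-averaged; the sketch only shows the basic item-to-construct mapping.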

4.3 Data Analysis and Results

The measurement model was evaluated in terms of convergent validity and discriminant validity (Anderson and Gerbing, 1988). Convergent validity was evaluated using two criteria suggested by Fornell and Larcker (1981): (1) all indicator factor loadings should be significant and exceed 0.70, and (2) the average variance extracted (AVE) for each construct should exceed the variance due to measurement error for that construct (i.e., should exceed 0.50). All factor loadings λ in this study exceeded 0.7, indicating high convergent validity. Composite reliabilities in the measurement model ranged from 0.95 to 0.97, all above the minimum of 0.7 suggested by Nunnally (1978). The average variance extracted (AVE) ranged from 0.56 to 0.88. Hence, both conditions for convergent validity were met. The theoretical model is multistage, suggesting the need for a structural equation modeling technique that simultaneously tests multiple relationships. To assess validity and test the linkages in the theoretical model, partial least squares (PLS) was used; PLS is widely accepted as a method for testing theory in its early stages, especially in IS research. Within the IS discipline, a large percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (McKeen, Guimaraes, and Wetherbe 1994; Weill and Olson 1989). Since PLS does not generate an overall goodness-of-fit index, validity is primarily assessed by examining the R² of the endogenous constructs and the structural paths. The empirical results showed that seven hypotheses were statistically significant at levels between .05 and .001. Users' intention to continue purchasing virtual goods in a social networking site was found to be significantly associated with their commitment (β = 0.209, p

O(V, AS, VP) = E + P + I + C,
where O is the operator, V is the finite set of research information, AS is the finite set of activity situations, VP is the finite set of viewpoints, E is the set of extracted research information, P is the visual presentation of E, I is the set of research information that should be generated and accumulated as an activity result (information to be generated), and C denotes the comments that direct an activity. Namely, when AS and VP are defined, operator O provides E, P, I, and C. The extraction of research information by the operator and its visual presentation enable students to understand the relationships among the most relevant portions, so new knowledge aligned with the purpose of an activity is acquired, and students can expect helpful and efficient support in an activity situation. To implement this operator, it is necessary to clarify the activity situations, viewpoints, etc. used as its parameters. We devise methods of extraction and visual presentation in alignment with a viewpoint and develop an operation model according to the following procedures.
Procedure 1: Defining the research information treated in this research (Sec. 3.2).
Procedure 2: Defining an activity situation and viewpoint (Sec. 3.2).
Procedure 3: Extracting and visually presenting information according to the activity situation and viewpoint (Sec. 3.3).
Procedure 4: Developing a formal model for the extraction-and-visual-presentation method (Sec. 4).
Procedure 5: Developing a research-activity support system (Sec. 5).


3.2 Definitions of Research Information, Activity Situations, and Viewpoints

From an investigation of 23 persons and the references they used, we defined the concrete objects for each kind of information treated in this research as follows. We treat 25 kinds of research information: research schedule, to-do list, paper, book, Web page, other reference, research outline, progress report, reference summary, survey result, graduation thesis, main point, system specifications, system design, preparation data, collection data, analysis result, presentation data, resume, minutes, videos, research notes, reference notes, product notes, and knowledge and technology notes. The nine activity situations and their viewpoints are listed in Table 1.

Table 1. Activity situations and viewpoints

Survey
  Meaning: Two or more references for a topic are compared and considered from a certain viewpoint, and knowledge is acquired.
  Viewpoints: 1. Surveying the relationships between research references. 2. Understanding the research information utilized by reading and comprehending the surveyed references.

Investigation, experiment, evaluation
  Meaning: Data collected under certain conditions is analyzed and checked, and analysis results are extracted.
  Viewpoints: 1. Understanding the change history of products, etc. 2. Presenting the investigation and experiments and understanding the products in the evaluation plan.

Development
  Meaning: A certain problem is solved and a system that satisfies the requirements is designed and developed.
  Viewpoints: 1. Understanding the change history of system design specifications and plans and that of the system. 2. Understanding the products of the current development plan.

Study of knowledge and technology
  Meaning: Knowledge and technology required for the research are acquired.
  Viewpoints: 1. Surveying references to knowledge and technical notes and the relationships of practical-use places.

Presentation
  Meaning: The results of the current stage of self-study and research progress are reported and debated.
  Viewpoints: 1. Understanding progress made since the previous presentation. 2. Understanding research transitions.

Writing paper
  Meaning: The student's own theoretical research results are summarized in a draft paper.
  Viewpoints: 1. Understanding research transitions. 2. Arranging information about papers. 3. Understanding change histories, such as those of the main point and the paper.

Research schedule
  Meaning: The outline of the student's research is considered, a research topic is determined in detail, and the meaning, impact, and position of the student's own research are clarified.
  Viewpoints: 1. Understanding research transitions. 2. Understanding details of the research.

Schedule management
  Meaning: A schedule for reaching the research goal is created and research activities are managed.
  Viewpoints: 1. Surveying schedule lists. 2. Surveying the relationships among items of information generated about activities on the schedule.

Reading references
  Meaning: Research articles and books are considered through self-study.
  Viewpoints: 1. Surveying papers relevant to the paper in question. 2. Understanding the relationships between utilized research information by reading the paper in question.
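As an illustration, the situation-viewpoint pairs of Table 1 could be held in a simple mapping. The following is a minimal Python sketch with three of the nine situations; the identifiers are ours and not part of the described system.

```python
# Hypothetical sketch: activity situations mapped to their viewpoints (Table 1).
# Names and structure are illustrative, not part of the system described here.
VIEWPOINTS = {
    "survey": [
        "Surveying the relationships between research references",
        "Understanding the research information utilized by reading the surveyed references",
    ],
    "presentation": [
        "Understanding progress made since the previous presentation",
        "Understanding research transitions",
    ],
    "schedule management": [
        "Surveying schedule lists",
        "Surveying the relationships among information generated on the schedule",
    ],
}

def viewpoints_for(situation: str) -> list[str]:
    """Return the viewpoints defined for an activity situation (empty if unknown)."""
    return VIEWPOINTS.get(situation, [])
```

Such a table-driven representation would let the operator look up the viewpoints for any activity situation without hard-coding each combination.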

3.3 Extraction and Visual Presentation

3.3.1 Overview

In this section we describe, for each activity situation, the method of extracting the research information suitable for the viewpoint and the method of visually presenting the relationships between the extracted items. The extracted information can be classified into practical-use information and information to be generated. Operation result E is defined by specifying the practical-use information, and operation result I by specifying the information to be generated. Operation result C is defined by specifying comments that carry the supervisor's instructions (activity directions); by following the activity directions in C, students are encouraged to accumulate the information to be generated (I). Finally, operation result P is defined by specifying the visual presentation pattern of the practical-use information. The E, I, C, and P treated in this research are defined in Sections 3.3.2 and 3.3.3.



3.3.2 Deciding What Research Information to Extract

To define operation result E, we determine the practical-use information suited to the activity situation and viewpoint. For each activity-situation viewpoint, we select the relevant kinds of research information from among those listed in Section 3.2 and define them as the practical-use information. Next, the information to be generated is determined in order to define operation result I, and the comments that direct the activity for accumulating that information are also clarified. For example, when AS is a survey and VP is the second of its two viewpoints, the operator provides E = {paper, book, knowledge and technology notes, reference notes, reference summary, survey result}, I = {paper, book, reference notes, survey result}, and C = (reference collection in accordance with the survey viewpoints, and comparison and examination in alignment with the same viewpoint).
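The example above can be written out as data. The following is a minimal Python sketch under our own naming (E, I, and C are the paper's symbols; the literal strings follow the example in the text).

```python
# Hypothetical sketch of the operation results for AS = "survey", VP = 2,
# following the worked example in the text; identifiers are ours.
E = {"paper", "book", "knowledge and technology notes",
     "reference notes", "reference summary", "survey result"}   # practical-use information
I = {"paper", "book", "reference notes", "survey result"}       # information to be generated
C = ("reference collection in accordance with the survey viewpoints, "
     "comparison and examination in alignment with the same viewpoint")  # activity direction
```

Note that every kind of information to be generated is also practical-use information here, i.e. I is a subset of E, so the generated items can themselves be extracted and presented.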

3.3.3 Visual Presentation

In this section we define the visual presentation P of the set E of practical-use information for each viewpoint clarified in the previous section. A visual presentation uses a two-dimensional graph that draws the relationships among items of research information: each item of research information is expressed as a node, and a relationship between two items is expressed as a connector. Next, the rule that assigns and arranges the node coordinates is determined. To find a node arrangement that satisfies the activity situation and viewpoint, we considered what kind of index should be assigned to the x- and y-axes for the relationships between different kinds of research information. For example, when AS is schedule management and VP is its second viewpoint, the visual presentation P is as shown in Figure 2: by arranging the schedule information with its related products and notes separated, one can see which research information was generated by any particular activity. We propose 11 kinds of visual presentation. The node types representing research information are listed in Section 3.2. The relationship types are classified into, for example, accompanying, reference, generating, practical-use, transition, revision, and same keyword.
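As an illustration of such a coordinate rule, a minimal sketch follows. It assumes, hypothetically, the creation date on the x-axis and one lane per kind of research information on the y-axis, so that schedule items and their generated products and notes appear in separate rows; the lane names and function are ours, not the paper's.

```python
from datetime import date

# Hypothetical coordinate function for the "schedule management" presentation:
# x = days since a chosen origin date, y = a lane index per kind of information,
# separating schedule items from the products and notes they generate.
LANES = {"research schedule": 0, "to-do list": 1, "product": 2, "notes": 3}

def node_coords(kind: str, created: date, origin: date) -> tuple[int, int]:
    """Map a research information node to 2-D graph coordinates (x, y)."""
    x = (created - origin).days
    y = LANES.get(kind, len(LANES))  # unknown kinds fall into a trailing lane
    return (x, y)
```

A rule of this shape makes the arrangement deterministic: two nodes of the same kind always share a lane, and horizontal position encodes when the information was created.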

4. FORMAL MODELING OF EXTRACTION AND VISUAL PRESENTATION METHODS

There are many combinations of situations, viewpoints, and visual presentations. So that our operator can achieve interactive extraction and visual presentation of research information suitable for any situation, the extraction and visual presentation model is treated mathematically. First, we define a research information related graph (RIRG). Next, the restrictions on the extraction and visual presentation candidates for all AS × VP combinations are stated mathematically. This mathematical treatment of the models and restrictions means that the formal models make it easy to realize the combinations of various situations and viewpoints needed for effective extraction and visual presentation of research information.

4.1 Model of Research Information Related Graph

The research information related graph (RIRG) on which visual presentation is performed is defined as follows.

[Definition 1] An RIRG G consists of three elements: G = (V, Ed, Π). V is a finite set of research information; a node v ∈ V of the graph expresses an item of research information. Ed is a finite set of connectors; a connector e ∈ Ed of the graph expresses a research relation. Π is a finite set of coordinate functions π: V → I². //

[Definition 2] The research information attributes of an RIRG are defined as attV = (kindV, titleV, dateV, π). Each attribute is attached to a node. The attributes have the following meanings:
• kindV expresses the kind of research information. Its elements are the kinds listed in Section 3.2. The kind of research information v ∈ V is written as kindV(v).



• titleV expresses the title of the research information as a string. The title of research information v is written as titleV(v).
• dateV expresses the creation date and time of the research information. The date and time at which research information v was created is written as dateV(v).
• π is the coordinate function of the research information. The coordinates of research information v are written as π(v) = (πx(v), πy(v)). //
When kindV(v) = “paper”, the node carries additional attributes such as authors, magazine name, keywords, and target file. The node coordinates depend on the coordinate functions, which are related to the users’ viewpoints.

[Definition 3] The research relation attributes of an RIRG are defined as attEd = (kindEd). The attribute is attached to the connector and has the following meaning:
• kindEd expresses the classification of the research relation. Its elements are the relationship types listed in Section 3.3.3. When v1 is the source node and v2 the target node, the kind of research relation ev1v2 ∈ Ed is written as kindEd(ev1v2). //
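As a minimal illustration of Definitions 1 through 3, an RIRG together with its node and connector attributes could be held as follows. This is a Python sketch under our own naming, chosen to mirror the paper's notation; it is not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of Definitions 1-3: G = (V, Ed, Pi) plus the attributes
# attV = (kindV, titleV, dateV, pi) and attEd = (kindEd). Identifiers are ours.
@dataclass
class RIRG:
    V: set = field(default_factory=set)         # nodes: items of research information
    Ed: set = field(default_factory=set)        # connectors: (source, target) pairs
    kindV: dict = field(default_factory=dict)   # node -> kind of research information
    titleV: dict = field(default_factory=dict)  # node -> title string
    dateV: dict = field(default_factory=dict)   # node -> creation date and time
    pi: dict = field(default_factory=dict)      # node -> coordinates (pi_x, pi_y)
    kindEd: dict = field(default_factory=dict)  # connector -> kind of research relation

g = RIRG()
g.V |= {"v1", "v2"}
g.kindV["v1"], g.titleV["v1"] = "paper", "An example paper"
g.kindV["v2"], g.titleV["v2"] = "reference notes", "Notes on v1"
g.Ed.add(("v1", "v2"))                 # v1 is the source node, v2 the target
g.kindEd[("v1", "v2")] = "reference"
g.pi["v2"] = (2, 1)                    # pi(v2) = (pi_x(v2), pi_y(v2))
```

Storing the attributes as mappings keyed by node (or connector) keeps the graph structure G itself separate from the attribute functions, matching the attV/attEd formulation.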

4.2 Modularization of the Extraction Method

Here we define the modules that produce operation result E. Each module extracts a subset of research information, under given conditions, from a given set of research information. Let VQ be a finite set of research information serving as the extraction target. An extraction module is denoted CE. [CE1 (K1, K2, …, Km)] Extraction by the kind of research information. Let Ki(1