The 7th European Conference on Knowledge Management
Corvinus University of Budapest, Hungary
4-5 September 2006

Edited by Dr. Péter Fehér, Corvinus University of Budapest, Hungary


Copyright The Authors, 2006. All Rights Reserved. No reproduction, copy or transmission may be made without written permission from the individual authors. Papers have been double-blind peer reviewed before final submission to the conference. Initially, paper abstracts were read and selected by the conference panel for submission as possible papers for the conference. Many thanks to the reviewers who helped ensure the quality of the full papers.

ISBN: 978-1-905305-29-2 (CD/Booklet)

Published by Academic Conferences Limited, Reading, UK. Tel: +44-118-972-4148, [email protected]


ECKM 2006 Contents

Paper Title

Author(s)

Proceedings Page

Preface

vii

Biographies of Conference Chairs, Programme Chair, Keynote Speaker and Mini-track Chairs

ix

Biographies of contributing authors

xi

An Ontology-Based Content Management System for a Dynamic Operating Context: Issues and Prototype Evaluation

Maurizio Agelli and Felice Colucci Parco Scientifico e Tecnologico della Sardegna, Pula Italy

1

Building up Knowledge like a “Rubik Cube”

Marko Anzelak, Gabriele Frankl and Wolfgang Ebner Alpen-Adria University of Klagenfurt, Austria

10

Innovation Focus and Middle-up-down Management Model: Empirical Evidence

Nekane Aramburu, Josune Sáenz and Olga Rivera ESTE School of Management, University of Deusto, San Sebastián, Spain

19

Four Valued Logic: Supporting Complexity in Knowledge Sharing Processes

Peter Bednar, Christine Welch and Vasilios Katos University of Portsmouth, Buckingham Building, Portsmouth, UK

29

Knowledge Cooperation in Online Communities: A Duality of Participation and Cultivation

Marco Bettoni, Silvio Andenmatten and Ronny Mathieu Swiss Distance University of Applied Science, Brig, Switzerland

36

Alternative Accounting to Manage Intellectual Capital

György Boda and Peter Szlávik Corvinus University, Budapest, Hungary

43

Systematic Knowledge Organisation for Marketing Research with Assisted Search Employing Domain Models and Integrated Document Storage

Karsten Böhm1, Martin Delp1 and Wolf Engelbach2 1 FH KufsteinTirol – University of Applied Sciences, Kufstein, Austria 2 Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany

53

Knowledge Intermediation in Regulated Electronic Commerce Environments. The Case Study of NETMA (Nato Eurofighter and Tornado Management Agency)

Ettore Bolisani1 and Roberto Ruaro2 1 Department of Management and Engineering, University of Padova, Italy 2 Italian Air Force, RMS, Villafranca Airbase, Italy

64

The Cognitive Rationalization of Professional Services: Expected and Unexpected Effects of Knowledge Management Systems

Marion Brivot HEC, Jouy-en-Josas cedex, France

72


Managerial Perceptions of Linking Intellectual Capital and Strategy

Maria do Rosário Cabrita Faculty of Science and Technology, Universidade Nova de Lisboa, Portugal

82

Analysis of International Experiences in Knowledge Management Systems: The SOSMER Project

Nuria Calvo Babío and Marta García Pérez University of A Coruña, Spain

91

Knowledge Building in Innovation Networks: The Impact of Collaborative Tools

Annick Castiaux Business Administration Department, University of Namur, Belgium

99

Where is the Pharaoh and the Treasures? The Social Mystery of the Great Pyramid of Knowledge

Jordi Colobrans Delgado Universitat Autonoma de Barcelona, Bellaterra, Barcelona, Spain

108

Multiple Competences in Distributed Communities of Practice: The Case of a Community of Financial Advisors

Monica De Carolis and Vincenzo Corvello Department of Business Science, University of Calabria, Rende, Italy

116

Understanding Transnational Knowledge Transfer

Yanqing Duan1, Xiaoxiao Xu1 and Zetian Fu2 1 Luton Business School, University of Luton, UK 2 China Agriculture University, Beijing, China

126

Knowledge Management Strategy – What Have We Learnt?

John Edwards Aston Business School, Birmingham, UK

136

Learning Networking and Knowledge Sharing Skills in Cross-Border e-Learning and Student Exchange Processes

Tiit Elenurm Estonian Business School, Estonia

144

Knowledge Management and Career Development: The Application of Hedonic Approach to Evaluate Real and Potential Salaries of Brazilian Retailers’ Executives

Luiz Paulo Lopes Fávero, Patrícia Prado Belfiore and Claudio Felisoni de Angelo University of São Paulo, Brazil

152

Organisational Solutions for Supporting Knowledge Management

Péter Fehér Corvinus University of Budapest, Hungary

161

The Role of an Extensionist in ICT-based Knowledge Transfer

Weizhe Feng1, Yanqing Duan2, Zetian Fu1 and Brian Mathews2 1 China Agricultural University, Beijing, China 2 University of Luton, Luton, UK

171

The Role of Integrative and Interactive Technologies on Know-What and Know-How Exchanges in Defense Organizations

Cécile Godé-Sanchez French Air Force Academy Research Centre, Salon de Provence, France

178

Sharing Project Knowledge: Initiation, Implementation and Institutionalisation

Waltraud Grillitsch, Alexandra Müller-Stingl and Robert Neumann Alpen-Adria-University of Klagenfurt, Austria

184


Using Analytical Hierarchy Process for Evaluating Organisation Core Competences and Associated Personal Competencies

Khalid Hafeez1, Jawed Siddiqi2 and Essmail Essmail3 1 Bradford University, UK 2 Informatics Research Group, City campus, Sheffield, UK 3 Sheffield Hallam University, UK

192

Motivational Influences on Knowledge Sharing

Meliha Handzic1,2 and Amila Lagumdzija2 1 Sarajevo School of Science and Technology, Bosnia and Herzegovina (BiH) 2 Universitas 21 Global, Singapore

208

Knowledge Networks: A Mechanism of Creation and Transfer of Knowledge in Organisations?

Deogratias Harorimana Southampton Business School, Southampton Solent University, UK

213

Construction Model for IT Assignment to Corporate Knowledge Management Systems

Milan Hasznics Budapest University of Technology and Economics, Hungary

223

Knowledge Creation Through University-Industry Collaborative Research Projects

Julie Hermans and Annick Castiaux Business Administration Department, University of Namur, Belgium

233

Cultural and Communicative Interaction and the Development of Dynamic Capabilities

Jianzhong Hong, Aino Pöyhönen and Kalevi Kyläheiko Lappeenranta University of Technology, Finland

241

Transferring Experience Into an Organizational Resource

Hanna Hovila and Jussi Okkonen Tampere University of Technology, Finland

250

Topic Maps for Knowledge Creation With Stakeholders

Isabelle Kern University of Applied Sciences for Business Administration, Switzerland

260

Partaking as a Tool for Knowledge Creation and Sharing in Practice

Per Kirkebak and June Tolsby Ostfold University College, Faculty of Engineering, Norway

268

Business Strategy Drives KM Strategy: A Case of a Central-European Firm

Gábor Klimkó1 and Róbert Tóth2 1 MTA Information Technology Foundation, Budapest, Hungary 2 MOL Plc., Budapest, Hungary

274

A KM Framework for IT Auditing

Andrea Kő and Zoltán Szabó Corvinus University of Budapest, Hungary

280

Health Care Ecosystem as a Network of Knowledge Flows

Harri Laihonen Tampere University of Technology, Finland

292

Knowledge Management for the Sustainable Supply Chain: A Literature Review

Valerie Martin, Chuda Basnet, Paul Childerhouse and Leslie Foulds Waikato Management School, University of Waikato, Hamilton, New Zealand

302


Identifying the Knowledge Gap in Knowledge Management Systems Development

Aboubakr Moteleb and Mark Woodman Middlesex University e-Centre, The Burroughs, Hendon, London, UK

310

Knowledge Enhanced e-Workflow Modelling - A Pattern-Based Approach for the Development of Internet Workflow Systems

John Ndeta, Farhi Marir and Islam Choudhury Department of Computing, London Metropolitan University, UK

318

Communication and Knowledge Transfer in a Research Organisation

Margit Noll, Doris Froehlich and Damaris Omasits Arsenal research, Giefinggasse, Vienna, Austria

326

Preparing a Robust Entity-Relationship Model Using Supporting Worksheets: Demonstrating the Basis for an Acceptable Model

Gary Oliver The University of Sydney, Australia

335

The Power in Visualising Affects in the Organisational Learning Process

Theresia Olsson Neve Mälardalen University, School of Business, Västerås, Sweden

347

Architecting Knowledge Management for Results

Patrick Onions The Knowledge Studio, UK

355

Knowledge Management Governance

Patrick Onions1 and René de Langen2 1 The Knowledge Studio, UK 2 Sasol Synfuels, Secunda, South Africa

365

Innovation Management: Complexity and Systems Thinking

Paul Otterson, Zoë Dann, Ian Barclay and Peter Bond Liverpool John Moores University, Merseyside, UK

375

Organizational Knowledge Transfer: Turning Research Into Action Through A Learning History

Robert Parent and Julie Béliveau Université de Sherbrooke, Québec, Canada

380

Systemic Methodologies and Knowledge Management: A survey of Knowledge Management and Systems Thinking Journals

Alberto Paucar-Caceres and Rosane Pagano Manchester Metropolitan University Business School, UK

389

Why Do Managers from Different Firms Exchange Information? A Case Study from a Knowledge-Intensive Industry

Mirva Peltoniemi Tampere University of Technology, Finland

399

Knowledge Reuse in Creating Audit Plans

David Peto Corvinus University of Budapest, Department of Information Systems, Hungary

406

Knowledge Logistics to Support the Management of Organizational Crises: A Proposed Framework

Stavros Ponis1, Epaminondas Koronis2, Ilias Tatsiopoulos1 and George Vagenas1 1 National Technical University of Athens, Greece 2 Warwick Business School, University of Warwick, UK

415


Knowledge-Based View of the Firm – Foundations, Focal Concepts and Emerging Research Issues

Aino Pöyhönen and Kirsimarja Blomqvist Lappeenranta University of Technology, Finland

425

Conceptual Design of a Knowledge Management Support System for Assessment of Intellectual Capital

Agnieta Pretorius and Petrie Coetzee Tshwane University of Technology, Pretoria, South Africa

434

Developing a Taxonomy for Knowledge Management Documents Organization in Digital Libraries

Seyed Mohsen Rahnamafard and Hajar Fatemi ShariatPanahi Tehran University, Faculty of Management, Tehran, Iran

447

From Teaching to Practicing Knowledge Management in Internal Auditing Services

Camille Rosenthal-Sabroux1 and Michel Grundstein2 1 Paris – Dauphine University, Lamsade, Paris, France 2 Bénédicte Huot-de Luze, Senior Manager Internal Audit Services, KPMG

460

Knowledge Management in the Extended and Virtual Enterprises: A Review of the Current Literature

Enrico Scarso, Ettore Bolisani and Maria Rita Arico’ University of Padua, Vicenza, Italy

468

Information Broker Approach for Management Information Systems in SME

Christian-Andreas Schumann and Claudia Tittmann University of Applied Sciences Zwickau, Germany

476

Architecture for Effective Knowledge Creation in Educational Institutions and Dissemination Through Satellite Technology – An Indian Experience

S. Shanthi and V. C. Ravichandran College of Engineering Guindy, Anna University, Chennai, India

484

The Moderating Role of the Team–Leader in the Value of Knowledge Utilization: An Extension of Haas and Hansen's Situated Performance Perspective

Evangelia Siachou and Anthony Ioannidis, Athens University of Economics and Business, Athens, Greece

495

Knowledge Sharing: Cultural Dynamics

Kerstin Siakas1 and Elli Georgiadou2 1 Alexander Technological Educational Institute of Thessaloniki, Greece 2 Software Forensics Centre, Middlesex University, UK

505

Journalists, the Makers and Breakers of Relational Capital

Joanna Beth Sinclair Hanken Swedish School of Economics and Business, Helsinki, Finland

514

Knowledge Management: An Organization Design Issue

Pernille Dissing Sørensen University of Southern Denmark, Denmark

523

Exploring Knowledge Processes in User-Centered Design Process

Kaisa Still Department of Information Processing, University of Oulu, Finland

533


The Implementation of the Educational Ontology

Ildikó Szabó Corvinus University of Budapest, Department of Information Systems, Hungary

541

The Hidden Face of Intellectual Capital: Social Policies

Eduardo Tomé Instituto Piaget, Viseu, Lordosa, Portugal

548

The Role of Knowledge Transfer for Innovation in a Knowledge Community

Kuo-Hung Tseng and Chun-Yu Chen Meiho Institute of Technology, Taiwan

561

Knowledge Base Development in Organizations: Impact on the Knowledge Transfer Capabilities; an Organization Learning Perspective

Ananya Upadhyaya and S. Krishna QMIS Area, Indian Institute of Management Bangalore, India

568

Educational Ontology and Knowledge Testing

Réka Vas Corvinus University of Budapest, Hungary

577

The Impact on Intellectual Capital of Organisational Improvement

George White1 and Sandra Begley2 1 Mi-2 Limited, Loughborough, UK 2 Staffordshire University, Stafford, UK

585

Capital and Equity and the Knowledge Process Cycle

Roy Williams University of Portsmouth, UK

598

Actors and Factors: Online Communities for Social Innovation

Diana Woolis and Susan Restler Knowledge in the Public Interest, New York, USA

610

Investigating the Use of Knowledge Management to Support Innovation in UK Energy SMEs: A Questionnaire Survey

Irfan Bashir, José Miguel Baptista Nunes and Nigel Russell University of Sheffield, UK

619

Knowledge Management and Competence Management: New Possibilities Based on old Conceptions

Claudia Bitencourt and Mírian Oliveira University of Vale Dos Sinos, Brazil

635

The Role of Sunk Costs in Knowledge Conflicts: The Case of a Small Rural Bank

Matteo Bonifacio and Diego Ponte University of Trento, Italy

643

A Constructivist Approach to IC: A Case Study

Roberta Cuel DISA, University of Trento, Trento, Italy

653

An Innovative Measurement Framework for a Government Knowledge Management Initiative

Kimiz Dalkir1, Erica Wiseman1, Michael Shulha1 and Susan McIntyre2 1 Graduate School of Library and Information Studies, Montreal, Quebec, Canada 2 Defence R&D Canada, 344 Wellington Street, Ottawa, Ontario, Canada

662


Regional Development Authorities as a Basis for Knowledge Management

Scott Erickson1, Helen Rothberg2, James Melitski2 1 School of Business, Ithaca College, Ithaca, USA 2 School of Management, Marist College, Poughkeepsie, USA

671

Knowledge Creation and Sharing through Student-lecturer Collaborative Group Coursework

Elli Georgiadou1, Kerstin Siakas2 and Eleni Berki3 1 Middlesex University, London, UK 2 Alexander Technological Educational Institute of Thessaloniki, Greece 3 Department of Computer Sciences, University of Tampere, Finland

678

Knowledge Dynamics in Community of Practice

Khalid Hafeez and Fathalla Alghatas School of Management, Bradford University, UK

690

Adaptive Working Environment Through Semantic Interoperability

Jorg Hartwig University of Leipzig, Institute of Computer Science, Leipzig, Germany

703

Manage Business and Knowledge

Ali Hessami Atkins Global, London, UK

713

Assessing Knowledge Management Implementation at Two Major Lebanese American Universities

Silva Karkoulian1 and Leila Halawi2 1 Business School, Lebanese American University, Beirut, Lebanon 2 Nova South-Eastern University Fort Lauderdale, Tampa, FL USA

720

Knowledge Diffusion in Context

Mounir Kehal, Sandrine Crener and Patrice Sargenti International University of Monaco, Principality of Monaco

726

Implicit Knowledge Sharing

Axel-Cyrille Ngonga Ngomo and Frank Schumacher University of Leipzig, Germany

736

A Holonomic Approach to Strategic Knowledge Elicitation

Rosane Pagano and Alberto Paucar Manchester Metropolitan University Business School, UK

748

Facilitating the Generation of Research Questions using a Socratic Dialogue

Dan Remenyi School of Systems and Data Studies, Trinity College Dublin, Ireland

755


Preface

Welcome to the 7th European Conference on Knowledge Management (ECKM 2006), hosted this year at Corvinus University of Budapest in Hungary. The Conference Chair is Dr András Gábor and the Programme Chair is Dr Péter Fehér, both from Corvinus University of Budapest. The opening keynote address is given by Dr Péter Racskó, Director of Knowledge Management, T-Com Hungary (Magyar Telekom, a subsidiary of the Deutsche Telekom Group).

The main purpose of the Conference is to bring together individuals concerned with current research findings and business experiences from the wide community now involved in knowledge management, intellectual capital and organisational learning, so that they can share knowledge with peers interested in the same area of study. A key aim of the conference is sharing ideas and meeting the people who hold them. The range of papers will ensure an interesting two days.

Alongside the main conference there are mini tracks on Auditing Knowledge Management, chaired by Dr Bálint Molnár, ISACA Hungarian Chapter, Budapest, Hungary; Measuring Intellectual Capital, chaired by Bernard Marr from Cranfield University in the UK; and the Role of Knowledge in Regional Co-operations, chaired by Professor Jose Viedma from the Polytechnic University of Catalonia in Spain. To further enhance the conference experience, Professor Dan Remenyi will be facilitating a Socratic Dialogue, and Dr Gabriela Avram and David Gurteen will lead a workshop on Social Computing.

From an initial submission of 134 abstracts, 83 papers passed the double-blind peer review process and are published in these Conference Proceedings. These papers represent research from Australia, Austria, Belgium, Brazil, Canada, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, India, Iran, Ireland, Italy, Lebanon, Monaco, New Zealand, Norway, Portugal, the People's Republic of China, Singapore, Spain, South Africa, Sweden, Switzerland, Taiwan, the United Kingdom and the USA.

I hope that you have an enjoyable conference.

Dr Péter Fehér
Programme Chair
[email protected]
September 2006


Conference Executive:
Dr Joan Ballantine, Queen's University Belfast, UK
Professor Sven Carlsson, Jönköping International Business School, Sweden
Daniele Chauvel, Independent Consultant, France
Dr Charles Despres, Conservatoire National des Arts et Métiers, Paris, France
Dr Péter Fehér, Corvinus University of Budapest, Hungary
Dr András Gábor, Corvinus University of Budapest, Hungary
Professor Meliha Handzic, Sarajevo School of Science and Technology, Bosnia and Herzegovina
Bernard Marr, Cranfield School of Management, UK
Dr Fergal McGrath, University of Limerick, Ireland
Professor Dan Remenyi, Trinity College Dublin, Ireland
Professor Jose Viedma Marti, Polytechnic University of Catalonia, Spain
Dr Roy Williams, University of Portsmouth, UK

The conference programme committee consists of key people in the Knowledge Management and IS community. The following people have confirmed their participation: Fathalla Alghatas (Bradford University School of Management, UK); Derek Asoh (Southern Illinois University, Carbondale, USA); Gabriela Avram (University of Limerick, Ireland); Joan Ballantine (Queen's University Belfast, UK); Frank Bannister (Trinity College Dublin, Ireland); Diane Benjamin (National Health Service, UK); Egon Berghout (Groningen University, Netherlands); Maumita Bhattacharya (Charles Sturt University, Australia); Heather Bircham-Connolly (University of Waikato, New Zealand); Janis Briedis (Riga Business School, Latvia); John Byrne (RMIT, Australia); Sven Carlsson (Lund University, Sweden); Daniele Chauvel (Independent Consultant, France); Satyadhyan Chickerur (Sona College of Technology, India); Bruce Cronin (University of Greenwich Business School, UK); Reet Cronk (Harding University, USA); Farhad Daneshgar (University of New South Wales, Australia); John Deary (Higher Colleges of Technology, UAE); Charles Despres (Conservatoire National des Arts et Métiers, Paris, France); Yanqing Duan (University of Luton, UK); John Edwards (Aston University, UK); Jamal El Den (American University of Beirut, Lebanon); Tiit Elenurm (Estonian Business School, Estonia); Mercy Escalante-Ludeña (São Paulo University, Brazil); András Gábor (Budapest University of Economic Sciences and Public Administration, Hungary); Leslie Gadman (International Business School, Isle of Man, UK); Andrew Goh (International Management Journals, Singapore); David Gurteen (Gurteen Associates, UK); Khalid Hafeez (University of Bradford, UK); Matthew Hall (Aston University, UK); Meliha Handzic (Sarajevo School of Science and Technology, Bosnia and Herzegovina); Ali Hessami (Atkins Global, UK); Andrew Kok (University of Johannesburg, South Africa); Ilidio Lopes (Polytechnic Institute of Santarém, Portugal); Farhi Marir (London Metropolitan University, UK); Olivera Marjanovic (University of New South Wales, Australia); Fergal McGrath (University of Limerick, Ireland); Sandra Moffett (University of Ulster, Ireland); Arthur Money (Henley Management College, UK); Kehal Mounir (International University of Monaco, Monaco); David O'Donnell (Intellectual Capital Research Institute of Ireland); Gary Oliver (University of Sydney, Australia); Patricia Ordóñez de Pablos (University of Oviedo, Spain); Kaushik Pandya (University of Luton, UK); Ann Peng (Providence University, Taiwan); José de Jesus Pérez-Alcázar (University of São Paulo, Brazil); Selwyn Piramuthu (University of Florida, USA); John Politis (Higher Colleges of Technology, UAE); Aino Pöyhönen (Lappeenranta University of Technology, Finland); Thurasamy Ramayah (Universiti Sains Malaysia); Dan Remenyi (Trinity College Dublin, Ireland); Carol Royal (University of New South Wales, Australia); John Saee (IESEG School of Management, Lille, France); Sarosa Samiajii (Atma Jaya Yogyakarta University, Indonesia); Dave Snowden (The Cynefin Centre, UK); Siva Sockalingam (Glasgow Caledonian University, Scotland); Edward Truch (Lancaster University Management School, UK); José María Viedma (Polytechnic University of Catalonia, Spain); Roy Williams (University of Portsmouth, UK); Les Worrall (Wolverhampton University Business School, UK).


Biographies of Conference Chairs, Programme Chair and Keynote Speaker

Dr. András Gábor is a professor and head of the Department of Information Systems. He earned a diploma (MSc) in Economics (1974) and in Computer Science (1979), a dr. univ. oec. (1976), a C.Sc. (PhD) (1983), CISA certification (1999) and a Habil. dr. (2002). His expertise covers systems analysis, information management and intelligent systems. His international experience includes: “Delivering Information Services”, Harvard Business School, USA (executive course), 1995; Imperial College of Science and Technology, Department of Management Science, UK, 5 months (knowledge-based systems), 1986; DePaul University, Department of Computer Science and Information Systems, Chicago, USA, 1 month (executive information systems, AI applications), 1985; Imperial Chemical Industries, Pharmaceutical Division, Alderley Park, Macclesfield, UK, 2 months (sales/distribution management simulation), 1975. He is a member of the John von Neumann Society of Computer Science and the ISACA Hungarian Chapter. He has participated in and managed several national and international projects: PRAXIS, ACTS programme, AC301 project (1998-2000); “Hospital Evaluation and Analysis”, Research project No. 622/191, Soros Foundation Healthcare Management Programme (1998-99); “Knowledge intensive technologies in the development of global information infrastructure”, Ministry of Education R&D programme, No. 1355/1997 (1997-99); “Decision Support Methods for Hospital Evaluation”, Swiss National Science Foundation, Project #7UNPJ048553, in cooperation with the Ingenieur Schule, Wädenswil (1996-98); Knowledge-based Systems, Knowledge Engineering and Frame-based Expert Systems, University of Amsterdam (1992-95); “Curriculum Development in Business Information Systems”, PHARE/TEMPUS project (1990-92).

Dr. Péter Fehér is a “Depozit” research fellow at the Corvinus University of Budapest, Department of Information Systems. He earned a diploma (MSc) in Economics in 2000 and defended his PhD thesis in 2005. His international experience includes Universität Paderborn (Germany, 1999), HEC (France, 2001) and Kingston Business School (United Kingdom, 2003-2004). He leads courses on Knowledge Management, Project Management and Management Information Systems. His current research area is Knowledge Management, especially the problems and difficulties of KM practices, but he also participates in IT Service Management, Software Process Improvement, IT Audit and Organisation Development projects, both as a researcher and as a consultant.

Dr. Péter Racskó is currently the Director of the Knowledge Management Centre at T-Com Hungary. He earned a diploma (MSc) in mathematics at Lomonosov University, Moscow, and a PhD in mathematics (1977); he also earned a PhD in applied informatics. He spent a semester at Cornell University, a month at Brown University and two months at the University of Indianapolis as a visiting researcher. He holds System Analyst and Smart Cards Expert certifications, and he is also a Certified Information Systems Auditor. He has been a member of the IT Committee of the Hungarian University Union, the IT Committee of the Hungarian Government, the management board of the Smart Card Forum and the IT Committee of the National Development and Research Program, President of the board of the CARDNET Co., IT advisor to the Hungarian Higher Education and Research Board, and a member of the Jury of the National R&D Program. He has worked for Matáv (the Hungarian subsidiary of Deutsche Telekom) since 2000, first as Head of the IT Strategy Department and later as Business Intelligence and Documentation Director; the company has since changed its name to T-Com Hungary.


Biographies of contributing authors (in alphabetical order)

Maurizio Agelli (48) is head of the Digital Media Systems group at CRS4 (www.crs4.it). He is currently working on research projects focused on personalized content delivery. His research interests also extend to KM, in particular the use of ontologies for resource classification. He has a degree in Electronic Engineering from the University of Pisa (1985).

Nekane Aramburu holds a PhD in Economics and Business Administration and is a faculty member of the University of Deusto (Spain). She also participates in the Cluster of Knowledge of the Basque Region (Spain) and in SOL-Basque Country (the fractal of the Society of Organizational Learning in the Basque region). Her research currently focuses on Organizational Learning and Knowledge Management.

Pierre Barbaroux has a PhD in economics and is a research assistant at the Research Centre of the French Air Force Academy. His research interests are organization theory and IT, organizational learning and cognitive approaches of the firm. His empirical research focuses on Defense organizations and the impact of IT on the military.

Irfan Bashir holds a BSc Hons in Chemical Engineering with Chemistry from the University of Huddersfield. He subsequently worked with Johnson & Johnson, gaining experience as a Project Scientist. Currently, Irfan is undertaking PhD research with dual supervision from the Department of Chemical & Process Engineering and the Department of Information Studies, both at the University of Sheffield. This project is a CASE Scholarship fully funded by the EPSRC.

Peter Bednar is a Senior Lecturer at the University of Portsmouth, UK and is affiliated with the Department of Informatics at Lund University, Sweden. His research covers contextual analysis, organisational change and information systems development; apart from academic teaching and research he also has an industrial background.

Marco Bettoni is Director of Research & Consulting at the Swiss Distance University of Applied Sciences, focusing on knowledge cooperation as a way of managing research activities. From 1977 to 2005 he worked as a researcher, engineer and lecturer with industrial and academic organisations in the domains of machine design, engineering education, IT development, knowledge engineering, knowledge management and knowledge-oriented cooperation.

Claudia Bitencourt is Assistant Professor at the Universidade do Vale do Rio dos Sinos, Brazil. She obtained her PhD in Business Administration from the Universidade Federal do Rio Grande do Sul, Brazil, and was a research fellow at the University of Queensland, Australia. She has published two books on Human Resource Management and on Competences Management & Organizational Learning.

György Boda holds a PhD and is a professor at the Corvinus University of Budapest. His main expertise is controlling and knowledge management. Péter Szlávik is a management consultant dealing with the same topics. Both authors are co-partners of their joint consultancy firm, Boda & Partners.

Karsten Böhm (35) studied Computer Science with a specialization in Automatic Language Processing and Intelligent Systems at the Universities of Leipzig (Germany) and London (UK). After obtaining his degree in Computer Science (MSc) he worked for several years in industry, mostly for technology-oriented companies in the area of applied text-mining and information extraction technologies, and applied them in several industrial Knowledge Management projects. He returned to Leipzig University in 2003 to work as a researcher on several funded research projects, mostly focused on business process oriented Knowledge Management. Since 2006 he has held a Research Professorship for Business Informatics at the University of Applied Sciences FH KufsteinTirol (Austria). His main research interests are in the area of IT-based Knowledge Management and the development of new software systems for the operational support of knowledge processes in enterprises.


Ettore Bolisani (L Eng, PhD) is Associate Professor of Business Information Management at the Faculty of Engineering of the University of Padova. He previously worked as Assistant Professor at the Universities of Trieste and Padova. In 1997 he was a visiting research fellow at PREST (University of Manchester), where he conducted a research project funded by the European Commission on the development of Electronic Commerce in the UK and Italy. His current research centres on the economic and organisational issues of Information and Communication Technology applied to Knowledge Management. On these topics he has taken part in various research projects and has written several publications at international level. He is a member of the MCFA (Marie Curie Fellowship Association).

Marion Brivot worked as a consultant from 1999 to 2005, first as a senior consultant at KPMG Consulting France, and more recently in knowledge management at the European headquarters of KPMG Tax. She is currently completing her PhD thesis on the cognitive rationalization of professional services at HEC Paris.

Maria do Rosário Cabrita is an assistant professor at the Lisbon Institute of Banking Management. She holds graduate and master's degrees in business administration and international economics. She is currently a PhD candidate at the Institute of Economics and Business Administration, Lisbon Technical University. She also has several years' experience in executive positions in international banks.

Nuria Calvo Babío has worked for a decade as a consultant, involved in projects related to innovation and knowledge exchange. She currently combines her professional activities with academic research in the knowledge management area. She has published several articles and has contributed papers to Spanish and American conferences over the last two years.

Annick Castiaux is a Professor at the University of Namur. After a PhD in Physics, she worked as a consultant in KM for 5 years. For the last 4 years she has been back at the university, sharing her experience with students in management sciences and performing research on innovation and knowledge management.

Chun-Yu Chen is a lecturer at the Department of Business Administration of the Meiho Institute of Technology, Taiwan. She is also a PhD candidate at the Department of Industrial and Technology Education of National Kaohsiung Normal University, Taiwan. Her research interests are knowledge management, technology innovation and e-Learning.

Jordi Colobrans Delgado (Barcelona, 1965) is a Doctor of Sociology of Organizations and a graduate in Cultural Anthropology. He is a lecturer in qualitative methodology at Ramon Llull University (URL), the University of Barcelona (UB) and the Open University of Catalonia (UOC), and a researcher at the Institute of Governance and Public Policies (IGOP, Universitat Autònoma de Barcelona) for the EURODITE project. He also works as a consultant in the private sector. His main field of interest is the organization of meaning systems and their practical uses in public and private institutions.

Roberta Cuel holds a PhD in “Organization and Management of the Firms” (2003). She was a post-doc fellow at the computer science department of the University of Verona. Currently, she is a senior researcher in the Net Economy Group at the Department of Computer and Management Science of the University of Trento. Her research interests include the study of KM and knowledge representation systems (such as ontologies and classifications) as mechanisms for knowledge reification processes. She has contributed to a number of publications and has served as a PC member for a number of interdisciplinary conferences.


Monica De Carolis is a contract professor of Business Management at the Faculty of Engineering of the University of Calabria. She holds a PhD and a master's degree in Management Engineering. She has published in national and international conferences and in national reviews. Her research interests are knowledge management, communities of practice and organization design.

Zoe Dann is currently a Senior Lecturer in the Technology Management Group at Liverpool John Moores University. Prior to this she was the New Business Generation Programme Manager in the Merseyside SME Development Centre at Liverpool John Moores. From 1994 to 1997 she was a Senior Research Fellow in the Department of Industrial Studies at the University of Liverpool. She has extensive industrial experience in Technical Sales and as a Project Engineer. She has a particular interest in new business generation management systems.

Yanqing Duan, PhD, is a Reader in Information Systems at the University of Luton. Her principal research interest is the development and use of advanced information and communication technologies (ICTs) in, and their impact on, business and management. She is particularly interested in ICT-based knowledge management and transfer, and the use of e-learning in enhancing knowledge and skills in SMEs. She has published about eighty papers in journals, books and conference proceedings.

John Edwards is Professor of Operational Research and Systems at Aston Business School, Birmingham, UK. His principal research interests include integrating knowledge-based systems with simulation models, investigating knowledge management strategy and its implementation, and the relevance of technology to knowledge management. He has published more than 40 articles in refereed journals, and is editor of the journal Knowledge Management Research & Practice.

Tiit Elenurm holds the professorship in entrepreneurship at the Estonian Business School. He earned his PhD in 1980 for the dissertation “Management of the Process of Implementation of New Organizational Structures”. His vision is to develop synergy between management training, consulting and research activities. His research interests include knowledge management, change management, international networking and the transfer of management knowledge.

Scott Erickson is Associate Professor of Marketing and International Business at Ithaca College, USA. He holds a PhD from Lehigh University, USA. He has published extensively on KM development and protection issues, including his book with Helen Rothberg, From Knowledge to Intelligence, published by Elsevier Butterworth-Heinemann.

Dr. Péter Fehér is a “Depozit” research fellow at the Corvinus University of Budapest, Department of Information Systems. He earned a diploma (MSc) in Economics in 2000 and defended his PhD thesis in 2005. His international experience includes Universität Paderborn (Germany, 1999), HEC (France, 2001) and Kingston Business School (United Kingdom, 2003-2004). He leads courses on Knowledge Management, Project Management and Management Information Systems. His current research area is Knowledge Management, especially the problems and difficulties of KM practices, but he also participates in IT Service Management, Software Process Improvement, IT Audit and Organisation Development projects, both as a researcher and as a consultant.

Gabriele Frankl is a researcher at the Institute for Media and Communication Studies (Research Unit: New Media – Technology – Culture) at the University of Klagenfurt in Austria. Her main topics of interest are knowledge management and e-Learning. She has been developing Knowledge Management Systems since 2002 and e-Learning systems since 2004.

Elli Georgiadou is a Principal Lecturer in Software Engineering and Curriculum Leader for Postgraduate Courses in Business Information Systems at Middlesex University, London. Her teaching includes Knowledge Management, Software Metrics, Methodologies, CASE and Project Management. She co-ordinates the European Affairs and International Exchanges of her School. She is engaged in research in Knowledge Management, Software Measurement for Product and Process Improvement, Methodologies, Metamodelling, Cultural Issues and Software Quality Management. She is a member of the University's Global Campus project (developing and offering ODL). She has extensive experience in academia and industry, and has been active in organising and chairing conferences and workshops under the auspices of the British Computer Society, the ACM British Chapter and various European programmes for Technology Transfer and the development of joint curricula. She established a Distance Mode Initiative between a UK university and a Hong Kong institute, developing and offering technology-based learning. She has designed and carried out evaluations of various ODL initiatives in the UK, Greece, Spain, Finland, Hong Kong and Cyprus.


Cécile Godé-Sanchez holds a PhD in economics. She is a researcher at the Research Centre of the French Air Force Academy and a professor in economics and management. Her research concentrates on organizational culture and on the influence of ICT use on organizational structures and changes. Her application cases are Defence organizations and weapon systems.

Waltraud Grillitsch is a researcher and member of the Knowledge Management project group of the department for eBusiness/Business Technologies (biztec) and a lecturer at the department for Organizational, Human Resources and Management Development at the Alpen-Adria University of Klagenfurt. She is writing her PhD thesis in the field of Knowledge Management in corporate networks and is also studying journalism at the University of Klagenfurt. She is working on specific company projects in the region.

Khalid Hafeez is a Reader in Operations and Information Management, and Director of the Centre for Ethnic Entrepreneurship and Management at the Bradford University School of Management. He has been a regular member of the organising committee of ECKM since 2003, and is an editor and conference organizer for the IEEE Conferences on Machine Learning. He has presented at many conferences and has been an invited speaker on Knowledge Management and core competence on many occasions. He is supervising many projects on Knowledge Management, core competence, e-Commerce and e-Governance, and has published on these subjects in prestigious journals such as the Journal of the Operational Research Society (JORS), IJPE, IEEE Engineering Management, IEEE Potentials, Computers & Operations Research, and TQM and Business Excellence. He was on the book-reviewing panel for Knowledge Board Publishers and for Financial Times/Prentice Hall/Pearson. He is a visiting Professor at NIMBAS and eTQM Dubai, and an external examiner for the University of Leeds, the University of Central England and the University of Hong Kong.

Leila Halawi (BS and MBA, Lebanese American University (formerly Beirut University College), Beirut; DBA, Nova Southeastern University) is the operations manager for Avis in Brandon, Florida. Prior to this, Dr. Halawi was a lecturer at the Lebanese American University. She has authored several journal articles and contributed to the new edition of the textbook Business Intelligence and Decision Support Systems. Her current research interests include knowledge management, data mining, information systems success and strategy, ethical impacts of information technology, and software piracy.

Deogratias Harorimana graduated with a BA (Hons) in Business with Information Management prior to embarking on a PhD in Management with a focus on Knowledge Transfer at Southampton Solent University. His most recent paper was “Indigenous Knowledge: Shifting Boundaries?”. Deogratias is the Chair Elect of the Geographical Society of Great Britain with the Institute of British Geographers Postgraduates Forum Research Group.

Milan Hasznics holds an MSc degree in technical informatics from the Budapest University of Technology and Economics. Since 2005 he has been a PhD student at the Budapest University of Technology and Economics, Department of Information and Knowledge Management; his research topic is IT support of knowledge management. In 2005 he also obtained a specialized engineer's degree in Banking Informatics at the Budapest University of Technology and Economics.


Julie Hermans is a researcher at the Information Management Research Unit of Namur University. She is developing her interest in University-Industry Relationships and Knowledge Transfer by undertaking her PhD programme at Namur University.

Ali Hessami is Chief Engineer and Head of the Advanced Technology Group at Atkins Global and Director for Transportation Surety (Safety & Security) at Project Development Group. He is an expert in systems assurance and safety/security assessment methodologies and has a background in the design and development of advanced control systems for critical industrial applications. Ali has established an international reputation in the field of system safety assurance, has published many papers and has lectured throughout the world, from Europe to the Middle and Far East and North America.

Jianzhong Hong, PhD (Psych.), is a senior researcher in knowledge management at Lappeenranta University of Technology (2001 to date). He was a postdoctoral researcher at the University of Helsinki, leading the research project Learning, Expertise and Innovation of Chinese and International Work Organizations Undergoing Transformation (1997-2000). His articles have appeared in several international journals in the areas of Psychology, Education and Management.

Dr Vasilios Katos is a Senior Lecturer at the University of Portsmouth. His research areas include systems security. Among other responsibilities he is also course leader for a new MSc in Forensic IT.

Mounir Kehal is Professor in Computing Science and Knowledge Management at the International University of Monaco. He is a member of the European Conference on Knowledge Management Organizing Committee, a member of the Association for Information Systems, Co-Director of Teaching Links at the Informing Science Institute, a member of the IEEE Computer Society, and a member of the Artificial Intelligence Group, University of Surrey, UK. He serves on the Editorial Review Boards of the Interdisciplinary Journal of Knowledge and Learning Objects (IJKLO), the International Journal of Mobile Learning and Organization (IJMLO), the Interdisciplinary Journal of Information, Knowledge and Management (IJIKM) and the International Journal of Internet Technology and Secured Transactions (IJITST).

Isabelle Kern graduated from the University of Zurich in English, German and History in 1993. Afterwards she worked for four years in the multimedia industry as an author of computer-based training programs. In 1998 she began her studies in business administration and IT, again at the University of Zurich, and graduated in 2004. Her research topic concerns computer-supported systems that support stakeholder management and knowledge management.

Per Kirkebak graduated as a Chemical Engineer from NTNU, the Norwegian University of Science and Technology, Department of Chemistry, in 1975 and earned a Master of Pulp and Paper Technology from NTNU in the same year. In 2000 he was awarded a PhD from NTNU, Institute of Industrial Economics and Technology Management, on innovation, with the title “Better implementation of innovation processes: A study of methods and praxises in one company within the Norwegian pulp and paper process industry”. Since 2003 he has been responsible for the curriculum of the Bachelor of Technology, Innovation and Entrepreneurship study at Ostfold University College. Earlier positions: 2001-2003, R&D manager at Ostfold Regional Research Institute; 1991-2001, Process and Product Development Manager at Peterson Paper and Board AS; 1989-1991, senior researcher at Ostfold Regional Research Institute; 1984-1989, manager of a company he founded (with 12 employees when he sold his share and left); 1981-1984, process responsible for a new processing plant at Borregaard CemCell; 1977-1981, Senior Process Engineer at Aker Engineering, today Aker Kværner Oil & Gas (stationed mainly in Scotland and Brazil); 1975-1977, Senior Process Engineer at Borregaard Cellulose Factory.


Gábor Klimkó earned his PhD in 2003; his interests are information strategy and knowledge management. He is a regular participant of ECKM.

Dr Andrea Kő graduated from ELTE (Eötvös Loránd University of Budapest) in 1988 (MSc in mathematics and physics). She has a university doctoral degree in computer science (1992). She is an associate professor at the Corvinus University of Budapest, where she earned her PhD in 2005. Her research fields include systems design, intelligent systems, knowledge management, the management and design of ontologies, and IT audit.

Dr Gita Kumta has a Masters in Statistics from the Indian Statistical Institute, Calcutta, and a PhD from the University of Mumbai, India. She has around 30 years' experience in Information Systems and IT consulting. Presently Dr Kumta is Professor at the Narsee Monjee Institute of Management and Higher Studies (NMIMS Deemed University), Mumbai, India, and specialises in Information Systems and Knowledge Management.

Harri Laihonen is a PhD student who studies knowledge-intensive organisations from the viewpoint of knowledge flows. In an ongoing research project, he focuses on health care organisations and their renewal capability. He currently works as a researcher at the Institute of Business Information Management at the Tampere University of Technology.

Valerie Martin. With an MSc in Information Management (Strathclyde University, UK) and a PhD in Computer Integrated Manufacturing (Cranfield University, UK), Dr Valerie Martin has had substantial research and consulting experience in manufacturing regeneration and change, supply chain management, information management and knowledge management. She presently works as a lecturer at the University of Waikato, New Zealand. She has published in a wide range of international conferences and journals.

Aboubakr Moteleb. Throughout his education and work experience, Aboubakr has combined engineering and IT with social and organisational sciences, both in academia and in business. His MSc in Business Information Technology is applied to helping the growth and integration of organizations in the digital economy through e-Business, e-Commerce and e-Marketing. He has worked for the American University in Cairo (AUC), the Industry Modernization Center (IMC) with the European Commission, the Japan International Cooperation Agency (JICA), the Regional Information Technology Institute (RITI) and the Regional Information Technology and Software Engineering Center (RITSEC) in Egypt. He has also provided consultancy for private sector, governmental and non-governmental organizations such as the UNDP, the International Advertising Association, the League of Arab States, Trenta Real Estate, Panasonic, the Ministry of Administrative Development, the Information and Decision Support Center, and the Central Bank of Egypt. His current PhD research in knowledge management systems development focuses on its value to organizations.

Theresia Olsson Neve (PhLic) is a PhD student in Computer and Systems Science. Her main interest concerns individual and organisational learning. She will defend her thesis in September 2006 at Stockholm University, Department of Computer and Systems Sciences.

Axel-Cyrille Ngonga Ngomo was born in 1983 and received his Masters in 2004. He worked on government-funded projects on Information Retrieval (Pi-AVIDA) and Knowledge Management (PreBIS). He is now a PhD student at the graduate school of the University of Leipzig. His topics of interest include Information Retrieval, Ontology Extraction and Intelligent Systems.

Margit Noll is concerned with research management at Arsenal Research and is responsible for its knowledge management process. Her work is based on her prior scientific research and consulting activities in the fields of knowledge management, innovation and technology management. She studied physics and completed an MBA in general management.


Patrick Onions. Formerly director and principal consultant of the South African knowledge management and information management consultancy The Knowledge Studio, Patrick Onions has taken a sabbatical and is currently a PhD student and lecturer at Leeds Metropolitan University. Patrick has a background in information systems and technology development and project management, and holds an MSc and an MBA.

Dr Paul Otterson is Programmes Manager for the Technology Management subject area, and a Principal Lecturer and the Programme Leader of the e-Business Technology and Management programme at Liverpool John Moores University. He has a particular interest in e-Business applications in SMEs. His doctoral thesis was on the use of innovation and creativity tools within SMEs.

Rosane Pagano is a Senior Lecturer in Management Information Systems at Manchester Metropolitan University Business School, United Kingdom. She has long experience in developing Information Systems for organisations, in particular in requirements elicitation. Rosane holds a PhD in Applied Sciences (University of Louvain, Belgium), a BSc in Systems Engineering and a Postgraduate Diploma in Education.

Robert Parent holds a PhD in human and organizational systems from the Fielding Graduate University in Santa Barbara, California, and conducts his teaching and research as a full professor of strategy with the Faculty of Administration at the Université de Sherbrooke (UdeS) in Québec. He also serves as Director of the Knowledge Transfer Research Laboratory at the University. As a researcher with the Centre d'étude en organisation du travail at the UdeS, he conducts research on how effective knowledge transfer contributes to a system's competitive advantage. Prior to his academic career, Mr. Parent was president of an international consulting firm specializing in strategic management.

Alberto Paucar-Caceres, BSc, MBA, MA, MPhil. Alberto's research interest is in the application of Systems Thinking and systems ideas to organisational problematical situations, in particular the use of systems methodologies (Soft Systems Methodology, Critical Management Sciences and Soft Operational Research) in organisations. He has published in Systems, Operational Research and Management Education journals.

Mirva Peltoniemi received a Master of Science degree in industrial engineering and management from Tampere University of Technology, where she is currently working as a researcher at the Institute of Business Information Management while pursuing her doctoral studies. Her research interests include the development mechanisms of the Finnish games industry from the viewpoint of complexity and evolutionary economics.

Dávid Peto is an assistant professor at the Corvinus University of Budapest, Department of Information Systems. He graduated as an MSc of Business Administration in 2000 and has been employed by the university ever since. He recently submitted his PhD thesis, Daring Measures: Risk Assessment Metrics in IT Auditing. In addition to teaching, he has worked on several research and consultancy projects in his main research topics: IT auditing, risk management and ERP systems connectivity.

Stavros Ponis graduated from the School of Mechanical Engineering of the National Technical University of Athens (NTUA) in 1996 and holds a PhD from the same institute. Since December 2001 he has held the position of Teaching and Research Associate in the Section of Industrial Management and Operational Research of the National Technical University of Athens. He is an expert reviewer for the European Union and, as of November 2005, has been elected a Lecturer in the scientific area of Supply Chain Management; he currently awaits his official appointment by the Ministry of Education.


Lecturer in the scientific area of Supply Chain Management and he currently waits for his official appointment by the Ministry of Education. Aino Pöyhönen, Ph.D. (Econ.) is Professor of Knowledge Management at the Department of Business Administration, Lappeenranta University of Technology, Finland. Her research interests include, for example, the knowledge-based view of the firm, intellectual capital, organizational renewal and innovation, creativity and imagination, and social capital. Agnieta Pretorius is a doctoral student and a lecturer in information and communication technology at Tshwane University of Technology, South Africa. Prior to this she was a software developer. Her current domains of research and teaching include technical programming, knowledge management and decision support systems. Camille Rosenthal-Sabroux is a graduate of PHD Paris VI, (1971). Since 1989, She is full Professor at Dauphine University, and advises some large companies about Information Systems, Knowledge Management, and Decision Science. She is the founder of the SIGECAD Group, which domain topics are Information System, Knowledge Management and Decision Aid. She published several books and articles. Enrico Scarso is Associate Professor of Engineering Management at the University of Padova. His research interests are in the area of the economics and the management of technology, with particular reference to the use of ICTs in Knowledge Management. He has published in several journals and contributed to various books. He is member of IAMOT and IEEE. Christian-Andreas Schumann, born 1957 in Chemnitz, studied Industrial Engineering in the Technical University of Chemnitz, first doctor’s degree in 1984 and second doctor’s degree in 1987. In 1988 he was appointed as associated professor for plant planning and information processes in the TU of Chemnitz. In 1994 he became professor for business and engineering information systems in the University of Applied Sciences Zwickau. Since March 2003 he has been dean of the faculty of business and management sciences in the University of Applied Sciences Zwickau. He is director of the Institute for New Kinds of Education, Vice-Chairman of the Central German Academy for Further Education and the Institute for Knowledge Management. Margaret Shannon With a Ph.D. from the University of Chicago, Margaret Shannon writes on circus as discourse and studies the nexus of embodied knowledge, meaning, and dynamic social change. Since presenting in Greece on success and composition as well as exploring the relationship between writing and complexity theory, Dr. Shannon believes that inter-disciplinary approaches to language are critical for social change. Evangelia Siachou is a PhD candidate at the Department of Business Administration, Athens University of Economics and Business. She holds a Bachelor's degree in International and European Studies from the Panteion University of Athens and an MSc in Industrial Relations and Personnel Management from the London School of Economics. She possesses work experience in the Strategic Planning Department of ATHOC 2004 (Olympic Games). Her current research interests include Knowledge Management and Strategic Human Resources Management. Kerstin Siakas is an Associate Professor at the Alexander Technical Educational Institute of Thessaloniki in the department of Informatics. 
She comes from a multicultural and multidisciplinary background and has extensive experience of software development at different organisational levels in several countries. Her research interests are in the "soft" issues of Information Systems development, including cultural issues influencing software quality management, knowledge management (knowledge creation and transfer), as well as educational issues such as Open and Distance Learning.


Joanna Sinclair is currently working on her Ph.D. at Hanken, the Swedish School of Economics and Business Administration. Before joining the academic world she worked as a publicist and PR consultant, which greatly influenced her choice of research topics: building and upholding relational capital through storytelling, with a particular interest in the gatekeeper role of business journalists. Pernille Dissing Sørensen is a PhD student in the Department of Marketing and Management at the University of Southern Denmark. She holds an MSc in Business Administration (International Management). Her research interests are organizational learning, knowledge management, knowledge management strategies, and organization design. Her PhD project focuses on knowledge management and organization design. Kaisa Still has been involved with both the professional and academic worlds of Knowledge Management for 10 years. She is currently working as a Knowledge Management Specialist for Beijing DT Electronic Company in China. In addition, she is pursuing a Ph.D. at the University of Oulu, Finland, in the Department of Information Science. Her research interests include knowledge processes, communities, and communication technologies. Zoltan Szabo is an associate professor at the Corvinus University of Budapest, Department of Information Systems. He graduated from the Budapest University of Economic Sciences in 1994, specialized in Information Management, and earned his Ph.D. in 2001. His research fields include strategic information system planning, management of IT services, and IT governance. Eduardo Tome: I am Portuguese, hold a PhD in Economics, and am interested in IC, KM and Social Policies. I was present at ECKM 2002, 2003 and 2005, and have published in the Journal of Intellectual Capital and in the Journal of European Industrial Training. Ananya Upadhyaya: I am doing a doctorate at one of the leading management institutes of India. My area of interest is knowledge sharing and transfer issues in organizations. Prior to this I did my Masters in Computer Applications. My long-term goal is to join academia. Réka Vas is an economist who graduated from the University of Szeged. Since 2002 she has been a researcher at the Department of Information Systems of the Corvinus University of Budapest. At the same time she is a Ph.D. candidate at the Faculty of Business Administration of the University. Vincent Wade FTCD is a Senior Lecturer in the Department of Computer Science, Trinity College Dublin and Research Director of the Knowledge and Data Engineering Group. Vincent is also founder and Director of the Centre for Learning Technology in TCD; the Centre is responsible for supporting Trinity College's academic staff in the area of Technology Enhanced Learning. Having graduated from University College Dublin with a BSc (Hons) in Computer Science in 1988, he completed his postgraduate studies in Trinity College Dublin before taking the position of Lecturer in the Computer Science Department in 1991. As Research Director of the Knowledge and Data Engineering Group in the Computer Science Department, Vincent leads a team comprising seven academic staff and twenty-five research staff (including research fellows, associates and postgraduates). The group focuses on knowledge and data management research and has established an international reputation in three related research areas, namely Telecommunications and Network Management, e-Learning and e-Business Services, and Health Telematics.
He was awarded a Fellowship of Trinity College for his contribution to research. Vincent is co-chair of the ACM-sponsored Adaptive Hypermedia and Adaptive Web Systems International Conference (AH2006), is guest editor of an upcoming special issue of IEEE Internet Computing, and serves on many prestigious international conference and journal series such as IEEE Communications, IEEE Networks, the W3C's WWW Conference Series, IEEE IM and NOMS, and AACE's E-Learn and EdMedia.


Christine Welch is a Senior Lecturer at the Portsmouth Business School. Her research interests include systemic thinking, organisational analysis, learning and change. Her academic responsibilities include being the course leader for an MSc in Knowledge Management. George White is an IS Consultant providing Business Analysis and Project Management services, primarily within the Utilities and Manufacturing sectors. George is currently working on the final stage of an MSc in Professional Computing at Staffordshire University. Dr Roy Williams develops and manages e-learning and knowledge management for the University of Portsmouth, and for w.w associates in Reading, both in the UK. His research includes the practical design and development of knowledge and e-learning systems, the theory of knowledge and knowledge management, discourse analysis and semiotics, and the application of discourse analysis and complex adaptive systems theories to management and development issues. He is actively involved in the European Conferences on e-Learning and Knowledge Management, and edits the Electronic Journal of e-Learning. He has held posts of Visiting Professor of Education, Professor and Chair of Communication, and Executive Board Member and CEO of the South African Broadcast Regulator. He has also worked extensively in international development in literacy, distance education, media, HIV/AIDS, and national and international policy. Diana Woolis is a founding Partner of Knowledge in the Public Interest, the first company devoted exclusively to KM for Social Innovation. She has published in professional and popular journals on access and equity, organizational learning, and most recently the power of the Internet to revolutionize social program and policy development. Her Doctorate and Masters degrees are from Columbia University, USA.


An Ontology-Based Content Management System for a Dynamic Operating Context: Issues and Prototype Evaluation Maurizio Agelli and Felice Colucci Parco Scientifico e Tecnologico della Sardegna, Pula Italy [email protected] [email protected] Abstract: Content management systems are very useful tools for organizing and sharing information resources and may considerably benefit from using ontology-based description schemes. Ontologies set a common ground for resource acquisition, enabling different users to share a common view of a knowledge domain, and may considerably enhance the search paradigms by exploiting semantic relationships between concepts. However, ontologies may evolve since they reflect knowledge schemes that are by nature dynamic. Moreover this evolution should be the result of a collaborative process of ontology maintenance. These and other issues are addressed in the present work and some practical solutions are proposed. Also, a very simple prototype implementation of an ontology-based content management system is described. Finally, the results of a short experimentation of this prototype within a small community are presented. Keywords: Content management systems, ontologies, knowledge sharing, ontology evolution, and communities of practice

1. Introduction Nowadays, the availability of reliable information sources is a key factor for any decision making process in private and public organizations. However, the amount of information sources that are available on-line and off-line is overwhelming and this abundance of content may become virtually useless without the right tools that allow retrieving the useful information items. Search engines, although restricted to on-line resources, play an important role, but the results they provide are quite often imprecise and narrowing down the search may be a very complex and time-consuming task. Content management systems (CMS) provide an integrated framework, based on metadata and workflows, for organizing and sharing information resources within a community of users. Many description schemes have been standardized, defining metadata sets and vocabularies and providing a good level of interoperability among applications. However, metadata vocabularies are generally too loosely related to the operating context of the end users and finding the most suitable tags for describing a resource can lead to inaccurate or ambiguous results. Information should be contextualized using a description that makes sense in the domain where the information will be used. What distinguishes knowledge management systems from content management systems is that they deal with information plus semantics, not with information alone (Riedl 2002). Ontologies, adding the required semantics to vocabularies of concepts, may considerably improve the classification and search paradigms. It is much easier to identify the right description tags exploiting semantic relationships between ontology concepts, rather than looking up in a long list of keywords provided by a metadata vocabulary. Also, ontologies may integrate different views of a knowledge domain, enabling different users to reach the same concept going through different semantic paths. An ontology, intended as a set of structured terms that describe some domain, provides the skeletal structure for building a knowledge base (Swartout et al. 1996). The centralized technological infrastructure obtained by combining a CMS with an ontology is still weak from an organizational perspective because it has to face a distributed social form made by communities that operate in different and dynamically changing contexts. Knowledge should be autonomously managed where it is created and used. Each community should formalize its own context and then create a mapping with the contexts of the other communities through social interaction (Maier 2002). The process shall lead to the definition of a common ontology that is



understood and accepted across the different contexts and can be used to classify the knowledge repository.

2. Framework overview A methodological and technological framework was set up with the aim of encouraging the sharing of information resources within research teams working in the domain of digital media systems and applications. The objective was to provide an online toolset that could help to keep track of information sources considered worthwhile for subsequent reuse. A CMS provided the basic infrastructure for collecting, annotating and retrieving information resources. A set of ontology tools provided support for ontology construction, resource classification and semantic search. The ontology described a very dynamic knowledge domain that, like many other information technology topics, was far from being consolidated. A collaborative environment was provided for building and maintaining ontologies through an evolutionary process that involved the participation of all teams.

3. Methodology 3.1 Approach in terms of knowledge management strategies Knowledge management practices are based on two fundamental strategies: the codification strategy, consisting of making explicit the knowledge that is tacitly held by people, codifying it in documents and making it available for subsequent reuse, and the personalization strategy, involving the direct transfer of tacit knowledge among people through socialization and personal interaction (Hansen et al. 1999). The knowledge management approach based on "communities of practice" represents a recent evolution of the personalization strategy and is based on the consideration that knowledge cannot be separated from the communities that create it, use it and transform it. In any type of knowledge work people are required to personally interact and exchange experiences with other people, even where technology provides considerable support (Allee 2000). Communities of practice contribute to creating a common language and context that can be shared by community members. They also contribute to developing taxonomies within the common repositories managed by community members, where individuals can submit knowledge artefacts that can be reused by others (Lesser et al. 2001). The proposed framework is based on a mix of the two knowledge management strategies. The people-to-document approach of the codification strategy is adopted in the CMS, which acts as a common knowledge repository. The people-to-people approach of the personalization strategy is adopted by setting up a collaborative process, based on communities of practice, aimed at defining an ontology matching the different operating contexts. Figure 1 describes a case where two teams, working in different operating contexts, interact in order to define a common ontology that is used to contextualize the repository of information resources.



Figure 1: interacting with the knowledge repository from different operating contexts (two operating contexts, A and B, are shared within a community of practice and codified into a common ontology, which is then used to classify the resources in the knowledge repository)

3.2 The metadata schema The choice of a suitable metadata schema is a fundamental step towards the specification of a knowledge repository. Many metadata schemas exist, addressing specific application requirements; however, the Dublin Core (DC) metadata set, defined by the Dublin Core Metadata Initiative (DCMI), has been chosen for the present work for several reasons:
- It is an open standard, adopted by W3C, and provides full interoperability with other applications;
- Its metadata attributes can be easily embedded in HTML/XHTML, allowing easy detection by search agents (Powell 2003);
- Depending on specific needs, extensions and element refinements may be added to the standard metadata set defined by DC.
DC also provides a set of encoding schemas that may be used to identify, e.g. through vocabularies of terms, the possible values that metadata may assume. In the present work an ontology-based encoding schema for the DC:subject metadata field is proposed.
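To make the embedding option more concrete, the short Python sketch below renders a DC record as XHTML meta and link elements following the general convention described by Powell (2003). The record values, the concept URI used for DC.subject and the helper function are purely illustrative and do not come from the prototype described in this paper.

```python
# Minimal sketch: serialize a Dublin Core record as XHTML meta/link elements.
# The record contents and the ontology concept URI are invented for illustration.
from html import escape

def dc_meta_tags(record):
    lines = ['<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />']
    for element, value in record.items():
        lines.append(f'<meta name="DC.{element}" content="{escape(value, quote=True)}" />')
    return "\n".join(lines)

record = {
    "title": "MPEG-21 overview",                                       # free-text element
    "creator": "Example Author",
    "date": "2006-05-04",
    "subject": "http://example.org/ontology#DigitalRightsManagement",  # ontology-based encoding of DC:subject
}
print(dc_meta_tags(record))
```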

3.3 The knowledge model The metadata set should not only annotate information resources, but must also place them in the context of a knowledge scheme. A knowledge scheme can be effectively represented by an ontology, i.e. by a formal and agreed description of a knowledge domain in terms of concepts and relationships. Therefore, mapping information items to ontology concepts provides a homogeneous view over the repository of information sources and allows identifying, through the ontology network, new relations among resources and concepts. An ontology can be seen as a collection of concepts (also called classes) and properties (also called slots). Properties may describe relationships to other concepts. In particular, an inheritance relationship (IsA or SubclassOf) allows the building of a hierarchical view of the ontology (i.e. a taxonomy) that is particularly suitable to the knowledge classification task that is at the basis of the present work. The DC:subject metadata field maps information resources to domain concepts. Taken together, the information resources and the domain concepts are nothing but a knowledge base. Creating a knowledge base involves adopting a knowledge model in order to achieve interoperability with other knowledge representation systems and to enable knowledge sharing and reuse. Many ontology representation languages exist; however, in the scope of the present work, the RDF knowledge model has been considered. RDF (Resource Description Framework) is a knowledge representation language defined by W3C with the aim of enabling software agents to directly process web resources (W3C 2006). The RDF knowledge model is based on a simple predicate logic that defines relationships between resources. The RDF Schema (RDFS), which is an integral part of the RDF recommendation, allows defining application-specific vocabularies of concepts (classes) that can be used by RDF to describe the knowledge items (instances). RDFS classes can be hierarchically organized using the rdfs:subClassOf relationship. Moreover, since a class may have more than one parent, a concept can be reached going through different inheritance paths. This is an important aspect, because it allows integrating in a unique ontology different views of the same knowledge domain. The main concern with RDFS is the impossibility of modelling axioms. However, this potentially serious limitation of RDF can be overcome either by adding extra application layers on top of RDFS or by modelling axioms as RDF objects (Staab et al. 2002).
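As an illustration of this knowledge model, the following sketch uses the rdflib Python library (our choice for illustration; the paper does not prescribe a specific RDF toolkit) to build a two-concept taxonomy, classify one resource through dc:subject, and expand a search over the rdfs:subClassOf hierarchy so that a query for a broad concept also returns resources classified under its subconcepts. The namespace, the concepts and the resource are hypothetical.

```python
# Minimal sketch (assuming rdflib): an RDFS taxonomy, a resource classified via
# dc:subject, and a search expanded over the subclass hierarchy.
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import RDF, RDFS, DC

EX = Namespace("http://example.org/ontology#")   # hypothetical ontology namespace
g = Graph()

# Tiny illustrative taxonomy: VideoCoding is a subclass of DigitalMedia.
g.add((EX.DigitalMedia, RDF.type, RDFS.Class))
g.add((EX.VideoCoding, RDF.type, RDFS.Class))
g.add((EX.VideoCoding, RDFS.subClassOf, EX.DigitalMedia))

# One knowledge item, classified against the more specific concept.
item = URIRef("http://example.org/repository/item-42")
g.add((item, DC.title, Literal("Notes on H.264 profiles")))
g.add((item, DC.subject, EX.VideoCoding))

def descendants(graph, concept):
    """Return the concept plus all of its (transitive) subclasses."""
    found, frontier = {concept}, [concept]
    while frontier:
        current = frontier.pop()
        for sub in graph.subjects(RDFS.subClassOf, current):
            if sub not in found:
                found.add(sub)
                frontier.append(sub)
    return found

def semantic_search(graph, concept):
    """Resources tagged with the concept or with any concept below it."""
    concepts = descendants(graph, concept)
    return [s for s, _, o in graph.triples((None, DC.subject, None)) if o in concepts]

# Searching for the broader concept also retrieves the item tagged with VideoCoding.
print(semantic_search(g, EX.DigitalMedia))
```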

3.4 User roles and workflows Creating and using an ontology-based repository of information resources involves different kinds of activities, which are described hereafter.

3.4.1 Ontology editing Creating an ontology that encodes a given knowledge domain is the first activity required to build the repository. It is quite a long and laborious task even for domain experts and it should rely on a specific tool (ontology editor) that facilitates the design, the construction and the consistency check of the ontology. Also, using a version control system is highly recommended in order to keep track of the whole ontology development roadmap.

3.4.2 Knowledge acquisition As soon as a first version of the ontology is available, the knowledge acquisition process can start. Knowledge acquisition is essentially a collaborative task carried out by a community of users that submit relevant information items to the repository, classifying them on the basis of the available ontology concepts.

3.4.3 Knowledge validation All submitted information items should be validated by trusted users prior to making them available for knowledge retrieval. The validation involves both an evaluation of the resource itself and a full review of the associated metadata.

3.4.4 Knowledge retrieval Knowledge retrieval encompasses all activities concerning the utilization of the repository, such as semantic search, automatic generation of reports, etc.

3.4.5 Ontology maintenance Ontology maintenance is assumed to go on throughout the entire lifetime of the knowledge repository and relies on the contribution of all users. In the case that the users operate in heterogeneous contexts, the community of practice paradigm may be useful to encourage knowledge sharing leading to the creation of a common ontology. The ontology maintenance process does not involve direct modifications of the ontology; instead it produces feedback directed to the people in charge of editing the ontology. Ontology maintenance should not prevent the normal usage of the repository. A minimal set of user classes has been identified and is reported in table 1. Figure 2 depicts the interaction between the users and the main functional blocks.

Table 1: User classes and roles

User Class    | knowledge acquisition | knowledge retrieval | knowledge validation | ontology maintenance | ontology editing
Domain Expert | √                     | √                   | √                    | √                    | √
Trusted User  | √                     | √                   | √                    | √                    |
Generic User  | √                     | √                   |                      |                      |


Figure 2: User classes and functional blocks (domain experts maintain the domain ontology through the ontology editor, while trusted and generic users in the different teams, acting as a community of practice, carry out knowledge acquisition, validation and retrieval against the knowledge repository)

Workflow management, which is supported by most CMSs, should be tailored according to the possible states a knowledge item may assume, as shown in figure 3. Any knowledge retrieval activity may only access knowledge items that are in the "valid" state.

Figure 3: Possible states of a knowledge item (New, Under Review, Valid, Deleted, with transitions for submitting, reviewing, validating, modifying metadata, publishing/retrieving and removing)
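The following sketch is one possible way of encoding the workflow of figure 3 together with the role assignments of table 1. It is our reading of the figure, expressed in Python for illustration, and not the actual Plone workflow definition used in the prototype; the transition names and role labels are therefore assumptions.

```python
# Hedged sketch of the knowledge-item workflow: states, transitions and the
# user classes allowed to trigger them (our interpretation of figure 3 / table 1).
from enum import Enum

class State(Enum):
    NEW = "new"
    UNDER_REVIEW = "under review"
    VALID = "valid"
    DELETED = "deleted"

# transition -> (from-state, to-state, roles allowed to perform it)
TRANSITIONS = {
    "submit":          (None,               State.NEW,          {"generic", "trusted", "expert"}),
    "validate":        (State.NEW,          State.VALID,        {"trusted", "expert"}),
    "review":          (State.VALID,        State.UNDER_REVIEW, {"trusted", "expert"}),
    "modify_metadata": (State.UNDER_REVIEW, State.UNDER_REVIEW, {"trusted", "expert"}),
    "revalidate":      (State.UNDER_REVIEW, State.VALID,        {"trusted", "expert"}),
    "remove":          (State.UNDER_REVIEW, State.DELETED,      {"trusted", "expert"}),
}

def apply(state, transition, role):
    """Return the new state, or raise if the transition is not permitted."""
    source, target, roles = TRANSITIONS[transition]
    if source is not None and state is not source:
        raise ValueError(f"'{transition}' is not allowed from state {state}")
    if role not in roles:
        raise PermissionError(f"role '{role}' may not perform '{transition}'")
    return target

def retrievable(state):
    # Knowledge retrieval only sees items in the "valid" state.
    return state is State.VALID

state = apply(None, "submit", "generic")     # a generic user submits a new resource
state = apply(state, "validate", "trusted")  # a trusted user validates it
print(retrievable(state))                    # True
```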

4. Key issues 4.1 Encouraging knowledge acquisition The intrinsic value of the repository is strictly correlated to the amount and quality of the information resources it contains. The possibility to submit new resources in a quick and easy way, as soon as the users evaluate them, is vital. If the knowledge acquisition phase involves laborious and unfriendly tasks, users will be reluctant to add new resources and the repository will never reach its critical mass in a reasonable time. Therefore, only a minimal set of DC metadata elements should be required when a new resource is added to the repository, while the remaining metadata may be added later on. The user should always be able to switch to a full metadata entry form, containing all metadata fields. The use of automatic metadata detection tools is also recommended. However, in order to avoid affecting the performance of knowledge acquisition, these tools should operate in the background.



Finally, it is important that the choice of the DC:subject metadata term is made using techniques that allow easy identification of closely related concepts (e.g. using a taxonomy navigation tool).

4.2 Dealing with ontology evolution Ontologies represent a point of reference in the attempt to structure information resources within an agreed knowledge scheme. However, ontologies are not necessarily static and may evolve for different reasons, e.g. because they reflect knowledge schemes that are by nature dynamic, or because the more skilled the domain experts become, the more thoroughly ontologies are detailed. A big challenge in the design of an ontology-based CMS is the ability to cope with ontology changes that may occur during the lifetime of the system (Stojanovic et al. 2002). The main issue in managing ontology changes is to avoid invalidating any resources already present in the repository. Another concern is how to update the knowledge base after some parts of the ontology have been improved. The possible operations that involve ontology modifications are addressed by the following cases:

4.2.1 Adding a new class. Adding new classes is a common practice that occurs during ontology maintenance, e.g. because some concepts need a more detailed classification or because the scope of the domain has slightly changed. Adding a new class also involves specifying relations with other classes, in particular its position in the taxonomy tree (i.e. identifying the superclasses the new class inherits from). After a new class has been added to the ontology, there is a possibility that knowledge instances related to parents or siblings of that class may be better reclassified. These instances remain valid but should be marked as “pending for review”, in order to highlight them in a future revision process.

4.2.2 Removing a class. Removing a class is a potentially dangerous operation that may cause inconsistencies in the knowledge base. For example, removing a class that is referred to by some knowledge instances results in an inconsistent status of the knowledge base. Therefore, removing a class of the ontology can be allowed only if that class is no longer referred to by any knowledge instance. In most cases it is preferable to mark the class as obsolete.

4.2.3 Making a class obsolete. An obsolete class cannot be used anymore to index new knowledge instances. Existing instances that are related to obsolete classes are marked as ‘pending for review’.

4.2.4 Modifying semantics. Modifications in the semantics (e.g. altering the taxonomy tree) can be allowed at any time, since they do not cause inconsistencies in existing instances. However, the same considerations made for adding a new class shall be applicable in this case too.

4.2.5 Renaming a class. Renaming a class should not cause particular problems as long as it maintains a unique identifier in the context of the knowledge base. The design of a strategy for managing the ontology evolution should consider the above described cases. As a general rule, every time a new version of the ontology is produced it should be verified whether the new ontology version causes incongruences in the knowledge base. If so, an explanatory report should be produced in order to allow domain experts to build a correct version of the ontology.
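The check described above can be sketched as a simple comparison between two ontology versions and the classes referenced by the knowledge base. The sketch below is our interpretation of that rule, not the ORKO implementation; the class names and instance identifiers are invented.

```python
# Minimal sketch of the consistency check run when a new ontology version arrives:
# removed-but-referenced classes block the upgrade, while instances pointing at
# obsolete classes are flagged as "pending for review".
def check_new_version(old_classes, new_classes, obsolete, references):
    """
    old_classes / new_classes: sets of class identifiers in the two versions.
    obsolete:   classes flagged as obsolete in the new version.
    references: mapping instance id -> class id used in its DC:subject field.
    """
    removed = old_classes - new_classes
    report = {"blocking": [], "pending_review": []}
    for instance, cls in references.items():
        if cls in removed:
            # A still-referenced class may not be removed: the new version cannot
            # be adopted as it stands (explanatory report for the domain experts).
            report["blocking"].append((instance, cls))
        elif cls in obsolete:
            # Obsolete classes remain valid, but their instances need a revision.
            report["pending_review"].append((instance, cls))
    return report

report = check_new_version(
    old_classes={"Streaming", "Codecs", "DRM"},
    new_classes={"Streaming", "Codecs", "VideoCodecs", "AudioCodecs"},  # DRM removed, Codecs refined
    obsolete={"Codecs"},
    references={"item-7": "DRM", "item-12": "Codecs", "item-31": "Streaming"},
)
print(report["blocking"])        # [('item-7', 'DRM')]
print(report["pending_review"])  # [('item-12', 'Codecs')]
```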



5. Prototype application framework 5.1 General architecture A prototype application framework, named ORKO (Ontology-based Repository of Knowledge Objects), was developed with the aim of (1) experimenting with and validating the solutions proposed in the present work; (2) highlighting unexpected issues; (3) laying the foundations of the prototype evolution. The entire prototype, whose building blocks are outlined in figure 4, was developed entirely using open source components. In particular, Plone (Plone 2006), a CMS based on the Zope application server, was chosen because it provided full support for workflow management, creation of custom content types and web publishing. Protégé (Protégé 2006), a knowledge-base framework written in Java, was adopted as ontology editor. Ontologies were exported to an ontology server and made available remotely through XML-RPC web services. Although Protégé already includes all the tools required to build a knowledge base, it was originally conceived as a centralized application. A web-based distributed paradigm was definitely more appropriate for the ORKO prototype, whose primary purpose was to serve communities. Therefore it was preferred to rely on a traditional CMS for storing, searching and publishing knowledge items, using Protégé only for the tasks related to ontology creation and maintenance. The ontology server was designed to provide access to different versions of the ontology. In fact, it may happen that the ontology version in use by the CMS is not the latest available, owing to inconsistencies that have arisen during the evolution of the ontology itself. The trusted users that are in charge of validating the knowledge repository shall be able to switch to a new version of the ontology only upon evaluating the actions that such an upgrade involves. The ontology maintenance process was supported by Zwiki (Zwiki 2006), a wiki engine based on Zope and fully interoperable with Plone, which provided an informal discussion tool within the community of practice for sharing ideas about ontology evolution, enhancements and defect fixing.
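A hedged sketch of how a CMS-side component might retrieve a specific ontology version from the ontology server over XML-RPC is given below. The endpoint URL and the method names (list_versions, get_ontology) are hypothetical: the actual remote interface of the ORKO ontology server is not documented in the paper.

```python
# Illustrative XML-RPC client; the server URL and method names are assumptions.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://ontology-server.example.org:8000/")

versions = server.list_versions()           # e.g. ["1.0", "1.1", "1.2"]
# The CMS may deliberately stay on an older version until the trusted users have
# evaluated the consequences of upgrading (see the discussion of versioning above).
rdfs_document = server.get_ontology("1.1")  # an RDFS serialization as a string
print(len(rdfs_document))
```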

Figure 4: General architecture of the ORKO prototype (users interact with the Plone-based CMS and the wiki; the CMS obtains the RDFS ontology from an ontology server written in Java via XML-RPC; domain experts edit the ontology with Protégé, and ontology versions are kept in a CVS version control database)

A sample screenshot of the ORKO prototype, related to the knowledge acquisition phase, is shown in figure 5.



Figure 5: sample screenshot of the prototype application

6. Evaluation and implication for future work The ORKO prototype was tested for a 10-month period by two small teams working on a joint research project related to multimedia content delivery. At the end of this period, about 1200 resource items were available in the repository and about 550 classes made up the ontology. Approximately 85% of the repository items were online resources. During the evaluation period many improvements were made to the prototype upon users' suggestions. Although the ORKO prototype proved to be a profitable tool for sharing knowledge within a small research community, a number of problems were identified. Firstly, in spite of the enhancements made during the test phase, usability was still an important issue and users were not encouraged to actively participate in the knowledge acquisition process. In particular, the necessity to navigate through a taxonomy tree in order to find the required description tags (rather than entering some text directly) was reported as a disadvantage by the most skilled people. Secondly, the ontology maintenance process lacked a real involvement of users, who preferred in many cases to produce an imprecise classification of the resources rather than undertake a discussion within the community of practice about the required ontology enhancements. Finally, the need to validate every resource item by a trusted user could raise scalability issues as the number of resources increases. Future work should look into a different approach to overcoming the above-described issues. An interesting possibility could be adopting a collaborative classification system based on freely chosen tags. The ontology would no longer be visible to the users; however, it would still play an important role in the background, mapping the many user-generated tags to a formal representation of a common knowledge domain. Moreover, since these tags can also contribute to improving and extending the ontology, all users would be implicitly involved in the ontology maintenance process.
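The direction sketched in the last paragraph could, for example, start from a simple label-matching step between free tags and concept labels, with unmatched tags collected as feedback for ontology maintenance. The fragment below is only an illustration of that idea; the concept URIs, labels and tags are invented.

```python
# Illustrative sketch: map freely chosen tags to ontology concepts in the
# background and collect unmatched tags as candidate input for ontology editing.
CONCEPT_LABELS = {
    "http://example.org/ontology#VideoCoding": {"video coding", "codec", "h264"},
    "http://example.org/ontology#Streaming":   {"streaming", "rtsp"},
}

def map_tags(tags):
    mapped, unmatched = {}, []
    for tag in tags:
        key = tag.strip().lower()
        hits = [uri for uri, labels in CONCEPT_LABELS.items() if key in labels]
        if hits:
            mapped[tag] = hits
        else:
            unmatched.append(tag)   # feedback for the people in charge of editing the ontology
    return mapped, unmatched

mapped, unmatched = map_tags(["H264", "rtsp", "mpeg-21"])
print(mapped)      # tags silently resolved to formal concepts
print(unmatched)   # ['mpeg-21'] -> candidate new concept
```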

Acknowledgements (1) “Driving Innovative Exploits for the Sardinian Information Society” (DIESIS) project funded by the European Regional Development Fund (ERDF); (2) "Distributed Architecture for Semantic Search and Personalized Content Delivery” project funded by the Italian Ministry of Education, University and Research (MIUR).



References
Allee, V. (2000) "Knowledge Networks and Communities of Practice", OD Practitioner, Vol. 32, No. 4, [online], http://www.odnetwork.org/odponline/vol32n4/knowledgenets.html
DCMI (2006) "Dublin Core Metadata Initiative Organization", [online], Dublin Core Metadata Initiative, http://dublincore.org/
Hansen, M.T., Nohria, N., and Tierney, T. (1999) "What's your strategy for managing knowledge?", Harvard Business Review, Vol. 77, No. 2, pp106-116
Lesser, E.L., and Storck, J. (2001) "Communities of Practice and organizational performance", IBM Systems Journal, Vol. 40, No. 4, pp831-841
Maier, R. (2002) "State-of-Practice of Knowledge Management Systems: Results of an Empirical Study", Revue des Organisations Suisses d'Informatique, No. 1, pp14-29
Powell, A. (2003) "Expressing Dublin Core in HTML/XHTML meta and link elements", [online], DCMI, http://dublincore.org/documents/dcq-html/
Plone (2006) "Plone: A user-friendly and powerful open source Content Management System", [online], Plone Foundation, http://plone.org
Protégé (2006) "The Protégé Project", [online], Stanford Medical Informatics, http://protege.stanford.edu
Riedl, R. (2002) "Some Critical Remarks in Favour of IT-based Knowledge Management", Revue des Organisations Suisses d'Informatique, No. 1, pp44-49
Staab, S., Erdmann, M., Maedche, A., and Decker, S. (2000) "An extensible approach for Modeling Ontologies in RDF(S)", AIFB, University of Karlsruhe, [online], http://www.aifb.uni-karlsruhe.de/WBS/Publ/2001/AEAfM_sstetal_2001.pdf
Stojanovic, L., Stojanovic, N., and Handschuh, S. (2002) "Evolution of the Metadata in the Ontology-based Knowledge Management Systems", [online], University of Karlsruhe, http://www.aifb.uni-karlsruhe.de/WBS/sha/papers/evoMeta.pdf
Swartout, B., Patil, R., Knight, K., and Russ, T. (1996) "Toward distributed use of large-scale ontologies", Proceedings of the Tenth Knowledge Acquisition for Knowledge-Based Systems Workshop, pp32.1-32.19
W3C (2006) The World Wide Web Consortium Organisation, [online], The World Wide Web Consortium, http://www.w3.org
Zwiki (2006) "A powerful Zope 2-based wiki engine", [online], Joyful Systems, http://zwiki.org


Building up Knowledge like a "Rubik Cube" Marko Anzelak, Gabriele Frankl and Wolfgang Ebner Alpen-Adria University of Klagenfurt, Austria [email protected] [email protected] [email protected] Abstract: As the amount and the complexity of available information steadily rise, the user's demand for information gets more and more specific. In most cases, user queries require not only a subset of the stored data: very often, complex relationships between different data, sometimes derived from different applications, are needed. At the same time it is very important that users should not be confronted with several front ends, to avoid the waste of cognitive resources. To focus the user's attention on important content it is necessary to provide uniform user interfaces, which are adjusted to their requirements. Complex search algorithms and data queries, which operate on the databases of different applications, should not bother the user. Additionally, it is necessary that important data and information are continuously collected and stored in the system, while the entry masks hide the complexity of the background processes. To satisfy these requirements it is essential to have a system architecture that enables the user to retrieve the data easily and at any time, without much effort and with exact results. Using the experience that we were able to gain through the implementation of knowledge management systems in the paper industry in former projects, we elaborated a concept, which we shall introduce in this paper. The conceptual architecture is based on the "Rubik cube" and allows the user to gain different and flexible views of the same data. In the field of knowledge management and e-Learning, these perspectives correspond to individual user requirements like knowledge collection, knowledge transfer, knowledge retrieval and also knowledge acquisition and learning.

Keywords: Knowledge management, e-Learning, rubik cube, software-architecture, information overload, user requirements.

1. Preface For some years now there has been concern about an overflow of information in different areas, for example in media and communication studies, economics, IT, brain research, sociology, pedagogy and psychology (Jochum 1999, Welsch 2000, Borchers 1998). What we often forget is that we are always confronted with an overflow of information: if we were aware of and attempted to analyse all the information surrounding us, it would certainly be problematic. For example, working on a computer, we would have to look at the monitor but we would also be distracted by things happening directly outside the office, noticing, for example, whether a fly is walking across the window-pane, or in which direction someone might be passing. We would become aware of every noise in our office, such as the electrical buzz from lights, the noise we make when typing, the sound of passing cars, people chatting in rooms close by or outside the building. We would notice the smells in the air, the contact of our fingertips with the keyboard, the sensation of clothes on our skin, the feeling of our forearms on the table, or the physical contact of our thighs on the chair and our feet on the floor. Our brains would have to evaluate which temperature had been reached, which kind of light source influences us and to what degree, whether we are hungry or thirsty, what our bodies feel like, whether our muscles are flexed, and so on. At the same time we would have to concentrate on our proper work, which, considering the processing capacity of our brains, is almost impossible. We do not get lost in this overflow of information because we have found ways of concentrating on a particular section of reality if required, so-called selective perception (Anderson 1985/2001). This makes it possible to deal with a large amount of information, which has always been important to survival and still is. What is needed are strategies to store this information meaningfully, to supervise it, to interrogate it methodically and to manage it skilfully so that it can be used efficiently. Strategies evolved in the natural world can thus be used in other areas. In the electronic domain this negative competence (Röll 2003) is missing, because the mass of information always increases. So it is important to find further strategies which will help us find our way through the thicket of information. One of our proven strategies will be introduced in this paper.



2. Specific requirements of production companies to a knowledge management system As in other areas, the production field investigated is marked by an increase in complexity. Changing markets, short product life-cycles, and more specific as well as higher customer demands on products are some of the factors that have led to more variables being considered in the production processes, which thereby become more complicated. A difficulty arises for the individual user of the machines in the production process: on the one hand the operator has to provide more knowledge and consider more information, on the other hand he has to make decisions in a limited amount of time. The correct information must be readily and freely available if there are incidents with the machine, so that the employee can act promptly and efficiently. In addition to basic knowledge, a user needs access to external and collectively shared information, and advice on how to use it. The flow of information among peers enables an exchange of experiences, and the passing on of these experiences widens and updates a shared knowledge base. Informal discourse is an ideal method for sharing knowledge, but there are few opportunities for staff interaction, especially in manufacturing companies where staffing levels are often low. The opportunity for informal, interpersonal exchange is made more difficult when there is shift work, because the exchange of information must occur during the changeover, when the following shift must be informed about incidents during the preceding shift. This is problematic because the employees have to stay on after working hours to summarise their work for the colleagues who are starting their shift. It is possible then that knowledge is lost when the entire course of the shift cannot be made explicit. A situation can become critical when information is passed from one shift to the other without personal contact between the workers of the early morning shift and the employees of the night shift. Precious explicit knowledge, which is needed for production to run smoothly, is thus lost forever. In this way the burden on the individual worker increases, as well as on the company, in the sense of an organisation learning to produce efficiently and remain competitive in order to survive in the face of global competition.

2.1 Source, complexity and combination of data versus user requirements In the company we analysed, the data come from different data sources, as explained in chapter 3.2. There is a data warehouse and a process control system, which records the working history of the different machines. In a text-based database – the AS400 – different maintenance data are stored and supervised. The third kind of data source consists of shared drives where files, created and administered by the employees, are stored. These files are almost exclusively Microsoft Office files.

Figure 1: Schematic presentation of the data sources
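One common way of presenting such heterogeneous sources behind a single front end is an adapter (facade) layer in which every source exposes the same small search interface. The sketch below is purely illustrative of that pattern and is not the implementation used at Mondi Packaging Frantschach GmbH; the source classes and the returned fields are invented.

```python
# Illustrative adapter/facade sketch: each data source is wrapped behind the
# same minimal interface so the user-facing search does not depend on whether
# a hit comes from the process control system, the AS400 or a shared drive.
from abc import ABC, abstractmethod

class DataSource(ABC):
    @abstractmethod
    def search(self, query):
        """Return hits as dicts with at least 'title' and 'source' keys."""

class ProcessControlSource(DataSource):
    def search(self, query):
        # placeholder: in reality this would query the data warehouse
        return [{"title": f"Machine history matching {query!r}", "source": "process control"}]

class MaintenanceSource(DataSource):
    def search(self, query):
        # placeholder: in reality this would query the AS400 maintenance database
        return [{"title": f"Maintenance record matching {query!r}", "source": "AS400"}]

class SharedDriveSource(DataSource):
    def search(self, query):
        # placeholder: in reality this would scan indexed Office documents
        return [{"title": f"Shift notes mentioning {query!r}", "source": "shared drive"}]

def federated_search(query, sources):
    hits = []
    for source in sources:
        hits.extend(source.search(query))
    return hits

sources = [ProcessControlSource(), MaintenanceSource(), SharedDriveSource()]
print(federated_search("felt wear", sources))
```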



The development that takes place in the field of Knowledge Management Systems (e.g. the e-learning component) should help to prepare relevant knowledge for a production company. The shared drives are important devices and give the employees of different departments access to data and information fundamental to their work. Furthermore, there is the potential for interchanging data with other locations of this globally operating company. A problem is that these various data sources may involve various data types from various database tables, and "real" users, with their specific requirements and individual computer literacy, are confronted with these data types and data sources. Obviously the use of different systems requires that the user become familiar with different user interfaces and different operating instructions and data structures. All too often such user requirements are not taken seriously. As a result, expensive and laborious systems do not meet the actual user requirements. This leads to a lack of acceptance on the part of the employees, or to difficult handling and consequently reduced usage of the system. Most operators in manufacturing companies have low computer literacy; they are not familiar with touch systems and are afraid of applications with high complexity. This fear is strengthened if these systems are based on an abstract, technical logic which does not correspond with the user's habitual way of thinking. Thus, these systems are not transparent, but are irreproducible and unpredictable to the user, and as a result the motivation for use is diminished. On the one hand the amount of information an individual operator has to process and consider rises, but on the other hand the skills to manage and use it profitably are missing. First-generation document management systems and information systems that merely offer access to information or documents are not able to remedy this. Mostly marketed as knowledge management systems, they are easily seen to be merely slightly extended versions of document management systems, which only offer features such as data storage, data management and simple data output. These static systems, where data has to be treated directly, were not able to deliver what is required of Knowledge Management. Beyond this, user interfaces are created without taking special user requirements into account. The existence of data or information pools is not adequate; the information pools must have a reasonable structure, because with a logical structure it is much easier for end-users to work with the data. To bridge the gap between the system and the user requirements, it is necessary to integrate future users into the development of the system. The way in which we tried to integrate the users is described in the next chapter.

2.2 Method and realisation During each step of the development of our knowledge management system and the extended e-learning module for Mondi Packaging Frantschach GmbH, employees were largely involved in the process of making important decisions, because they were the future users of the system. Their requirements were analysed in detail with regard to the special conditions of employment in manufacturing companies. At the beginning of our project, these user requirements were collected with the help of different questionnaires. After that, we conducted interviews and had intensive talks with end-users and stakeholders, and as a result we implemented a draft prototype. This prototype came up for discussion and, during an iterative process, it was improved using the results of these discussions as input. During the preparatory training with the application, there were several feedback units to give users a chance to play a part in the system design. At the same time the users' behaviour and active handling were observed from a distance in order to note discrepancies with the interviews mentioned above. After the system was set up at the company, we gave employees the opportunity to make suggestions for improvement. During the whole development process the users' doubts were taken



seriously; we integrated their proposals for improvement into the system, or they received an explanatory statement as to why it was not advantageous or possible to implement a requirement. This process might be costly but it proves worthwhile. Mostly, the end-users do not know the possibilities or limitations of knowledge management and e-learning systems and therefore do not know what is or is not feasible. This may make it difficult for them to articulate their requirements and wishes. This was, however, made possible with the help of the prototype. The prototype was used in discussions and helped us to find new ideas as well as to work out what was necessary or useless, and end-users could see that their point of view was taken seriously. End-users thus are not forced to use a new system that was developed by management without their own requirements being considered. The user is a part of the development process of the application and is willing to use the system because individual requirements are being met and the system is adapted to individual skills and knowledge. This is perhaps the most important precondition for managing knowledge and activating learning processes. Users are motivated because they work with an application that is partly their own creation. Also, the content of the system is not abstract and inapplicable but rather derived from a theory based on facts, which are practical and immediately applicable. Importantly, the opportunity to introduce users' own expertise, to share their own knowledge with that of colleagues and to improve the system encourages the acceptance of knowledge management and e-learning. This, in turn, is associated with employees' positive feelings and pride in the system (Frankl 2006). According to Mandl and Huber (1983), emotions affect the content of cognitive processes and the capacity to remember. Negative emotions such as indifference, boredom or disinterest have a detrimental impact on cognitive ability and performance, whereas positive emotions such as fun or a sense of achievement improve the performance of the user.

2.3 Conclusion To find a balance between a continuously growing and heterogeneous information pool and the need to lighten the cognitive load of the user, the use of homogenous front ends is essential – in terms of both navigation and operation logic. The vigilance of the user can thus be directed toward important contents. The closer the user is integrated into the process of system development, the more successful its implementation will be.

3. System architecture and Rubik cube 3.1 Philosophy of the Rubik cube Erno Rubik was a lecturer in the Department of Interior Design at the Academy of Applied Arts and Crafts in Budapest. He was fascinated by geometry, by construction and by combinations of forms and materials when he developed the first working prototype of his Cube in 1974. The idea was to improve the spatial imagination of his students, and in this he succeeded with the Rubik Cube. Rubik developed a three-dimensional colourful cube whose sides each show a different colour in the starting position. Each side consists of nine smaller cubes, all of which can be rotated. If one "layer" of the cube is moved, several sides change, because all pieces of the cube are connected through an ingenious mechanism. The individual layers of the cube can be manipulated vertically and horizontally, so that 43,252,003,274,489,856,000 transformations of the Rubik Cube are possible (Puzzle-Shop online 2006). This is an order of magnitude that outstripped all other puzzles on the market. Although the cube is mentally extremely demanding, Rubik's students and their friends were very interested and the Cube was popular. This popularity led Erno Rubik to take out a patent for the cube in 1975. In 1978, without promotion or publicity, the Cube slowly gained wider popularity with youths, in homes, playgrounds and schools (Rubik online 2006). In June 1979 David Singmaster, an English mathematician, wrote an article in Scientific American, with a cover picture by Douglas Hofstadter, an acknowledged authority in the field of Recreational Mathematics. This article brought the Cube to the attention of academic circles worldwide and represented a milestone in its history.



In September 1979 a deal was signed with Ideal Toys to bring the Magic Cube to the West. The cube appeared at the toy fairs of London, Paris, Nuremberg and New York in January/February 1980 and became "Toy of the Year" in England. Demand for the cube then outstripped supply. In 1981 the Rubik Cube was included in the design collection of the Museum of Modern Art in New York (Rubik online 2006).
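As an aside, the figure of 43,252,003,274,489,856,000 configurations quoted above follows from the standard counting argument for the 3x3x3 cube; the two-line check below is general Rubik's Cube arithmetic and is not taken from the paper or its sources.

```python
# Corner arrangements (8! positions x 3^7 orientations) times edge arrangements
# (12! positions x 2^11 orientations), halved for the permutation parity constraint.
from math import factorial

configurations = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(configurations)   # 43252003274489856000
```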

3.2 System design of "Knowledge for Production" Our Knowledge Management System "Knowledge for Production", which consists of the modules described below, is a "coherence-network". This "coherence-network" is composed of variable data that are shown to the end user in a consistent and intuitive way. The network is a three-dimensional combination of data, and every user can retrieve the specific and customised information needed for learning. Schematically, the structure of our system is similar to a Rubik Cube because there are user-defined views on data and information; rearranging the parts of the cube can change these views. The generation of views and the combination of data can only be achieved through a constant matching of meta-information. The user builds up this meta-information partly automatically and partly manually. This means that the user can take control of the actual data. Most important is the manually entered meta-information: when recording an event, the user can save the information under a chosen name, and this information would not be captured by pure data storage as in the process control system. Another way of combining data is achieved by three possible data allocations - a plant-view, a process-view and a profile-view. Our approach predominantly uses a vertical combination of data inside the modules. A horizontal combination of data is built up across the different modules and the departments of the company. The primary sorting of data is done by the plant-view. The plant-view is displayed in a navigation tree and is a copy of the real working environment. This is important because the user recognises the structure of the plant in the system, which makes it easier to use. However, the two other views, the process-view and the profile-view, can also be used to sort the data. The specific dimensions of the cube are only partly visible to end-users because the front end abstracts from the structures and processes that run in the background. The end-user is able to gain active access to the first and the second dimension of the cube (see Figure 2) because it is possible to navigate, through the user interface, to the components and subject areas that are managed by the first and the second dimension. As can be seen from Figure 2, the first dimension of the "InfoCube" corresponds to the different departments of the company that work with the knowledge management system. In large part this is information from the departments quality assurance, paper mill and maintenance, because they are closely connected with each other. The second dimension shows the three modules of the system that can be used and managed through a front end. These modules are named Electronic Shift Protocol (ESP), Electronic Shift Support (ESS) and Extended Search (E-Search), together with an e-learning module that is already under way. The third dimension comprises the three data sources and databases - IBM AS400, an Oracle database and a data warehouse. The third dimension is the one that cannot be manipulated directly by the user because, depending on user interactions, the system itself chooses the adequate data source and allocates the information. Figure 2 also shows that there is a direct connection between the several parts of the cube. From this we get cooperation and interaction among the several parts and the whole system.
With only a few mouse-clicks, each user may arrange his or her own view of the "InfoCube" by choosing the relevant and important parameters. The user has the opportunity to look for a concrete goal or "knowledge object", or simply to "play" with the system. These options are discussed in more detail in chapter 3.3.2, "Search strategies, playful learning and serendipity".
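As a purely illustrative sketch of this three-dimensional arrangement (and not of the Mondi data model), knowledge objects can be thought of as indexed along the three dimensions named above, with a view obtained by fixing some coordinates and leaving the others free; the departments, modules and example items below are invented stand-ins.

```python
# Hedged sketch of the "InfoCube" idea: items indexed by department, module and
# data source; a view is a slice obtained by fixing any subset of the dimensions.
from dataclasses import dataclass

@dataclass
class KnowledgeObject:
    title: str
    department: str   # dimension 1, e.g. "paper mill", "maintenance", "quality assurance"
    module: str       # dimension 2, e.g. "ESP", "ESS", "E-Search"
    source: str       # dimension 3, e.g. "data warehouse", "AS400", "shared drive"

CUBE = [
    KnowledgeObject("Shift report 2006-05-04", "paper mill", "ESP", "Oracle"),
    KnowledgeObject("Felt change best practice", "paper mill", "ESS", "shared drive"),
    KnowledgeObject("Pump overhaul record", "maintenance", "ESP", "AS400"),
]

def view(cube, **fixed):
    """Slice the cube by fixing any subset of the three dimensions."""
    return [obj for obj in cube
            if all(getattr(obj, dim) == value for dim, value in fixed.items())]

# A non-linear combination in the sense of section 3.3.1: ESP entries of the
# maintenance department placed next to ESS entries of the paper mill.
combined = (view(CUBE, department="maintenance", module="ESP")
            + view(CUBE, department="paper mill", module="ESS"))
print([obj.title for obj in combined])
```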



Figure 2: Schematic presentation of a "Knowledge-Rubik Cube" (ESP = Electronic Shift Protocol, ESS = Electronic Shift Support, E-Search = Extended Search, Q-Assurance = Quality Assurance)

Clearly our cube consists of a larger number of parts (see Figure 2) because there are more departments in the company and the several parts of the cube are therefore split up into smaller cubes. This means that a user not only has access to all the information of the department "paper mill" but also to information and knowledge of a special area inside the "paper mill". These subdivisions are very detailed because the user should, if necessary, be able to retrieve only the relevant information for a small field, without information that is not of interest or is already known. An "overkill" of information should thus be avoided. The subdivisions are so small that the user is able to get information even for a single component of a machine. This division has several advantages but also a disadvantage: if a user creates a new event in the system, it is necessary to make an exact allocation to a part of a machine. The more subdivisions there are, the more time is needed to make this allocation, but with the rising experience of the user this disadvantage will diminish.

3.3 Characteristics of the Rubik cube and their applicability to knowledge management systems

3.3.1 Flexibility The aim of the Rubik Cube is to return the cube to its original state, which means six unicoloured sides, and there are 43,252,003,274,489,856,000 ways in which this can be achieved (Puzzle-Shop online 2006). There are other aims in Knowledge Management and e-learning. The pieces of the cube can be arranged according to need in such a way that there exist 43,252,003,274,489,856,000 flexible views of the data pool. The flexibility of this architecture is multifaceted and powerful, and this is precisely what is needed for applications that aim to make such complex constructions as Knowledge Management and learning processes easier, or even make them possible. One possibility for allowing different views on a linked data stock is the use of reports. These have limitations because they are defined statically and are not readily adaptable. For some queries these reports are appropriate, such as for daily production, setup times, changes of settings, times of idling, or deviations from nominal values. They do not, however, offer flexibility. A certain degree of flexibility is offered by the well-known OLAP cube – probably a reason for the success of this application, though the possible data combinations are pre-defined. In addition, the OLAP cube is constructed in such a way that only linear data combinations are possible and make sense. As can be seen from Figure 3, a linear comparison of the data makes sense, for example comparing the sales of Root Beer in the first quarter in Chicago and in Dallas. A non-linear comparison, such as the sales of Root Beer in the first quarter in Chicago against those of Lemon Line in the second quarter in New York, would not be useful.

Figure 3: Schematic presentation of an OLAP cube (IBM 2006)

The cube model underlying the architecture of the Knowledge Management System "Knowledge for Production" is different. The individual pieces of the cube – referring to the philosophy of the Rubik Cube (cf. chapter 3.1) – can be turned and combined individually. It should be noted that the data combinations are pre-defined only insofar as the system architecture is pre-defined, that is, determined. Nonetheless, all variations of the cube model are in principle possible and make sense. This is the reason why non-linear data combinations make sense – for example, it can be quite informative to compare the data of the Electronic Shift Protocol (ESP) of the maintenance department with data of the Electronic Shift Support (ESS) of the paper mill.



The Best Practice descriptions of the ESS could be optimised through information from maintenance, which offers conclusions about optimisation possibilities in the production process. Another form of flexibility lies in user queries that were not planned at the conception stage of the system. This is a powerful way to access and combine information, which opens up enormous new knowledge potential. However, in order to use this potential, appropriate strategies must be in place, and these will be described in the next chapter.

3.3.2 Search strategies, playful learning and serendipity

There are several ways of retrieving information from an information system. In the first scenario, the user knows the aim pursued and the knowledge object searched for, as well as the specific area of the searched information. The search strategy is target-directed and tries to locate the information as quickly as possible. The way to the information is relatively well known. In the second scenario of information retrieval, the user does not know exactly what he or she is searching for. An exact definition or description of the knowledge object is not possible; if he or she sees it, it will be recognised and the search will be terminated. The search strategy is led by hypotheses about where the searched information might be. These presumptions are either confirmed or corrected until the search is successful. In the third scenario, the finding of information happens incidentally, which comes close to "serendipity". It is this unforeseeable, incidental, pleasant but unexpected information acquisition that we would like to direct our attention to – we would like to thank Mr DI Harald Semmelrock for the inspiration in a discourse on 4 May 2006. As the exact aim of the information search, in the form of a knowledge object, is not known, a conscious strategy cannot be developed and followed. Here, knowledge acquisition is more like floating, a playful approach that is fun and rewarding if useful information is found unexpectedly. This reward motivates engagement with the system and with information search – the best preconditions for learning and knowledge acquisition are thus provided (Huizinga 1938/1997). In addition, Toms (2000) points to studies that explore the effect of serendipity. For the first two types of information retrieval the aim of the search is more or less known, which can lead to a certain narrow-mindedness. By contrast, serendipity opens up all directions. That means the user considers themes that he or she has not considered before, or at least not deliberately. This allows synergies to arise; it brings new ideas and "discoveries" that can lead to important improvements in a company, and these should be supported and promoted by the architecture of our Knowledge Management System. Evidence for its utility and success comes from current experience protocols of users, as well as from employees of the paper machine and even the management, who appreciate the new access to information and the resulting "aha" experiences.

3.3.3 Illumination of coherence

The Rubik Cube is a perfect example for recognising coherence and continuity (cf. chapter 3.1). The Knowledge Management System, which builds on the analogy of the Rubik Cube, can be used to impart valuable knowledge about such continuities to its users. This is very important for production processes, where a smooth course is only possible if the machine operator is familiar with the processes that take place before and after his or her own, and knows how they interact. The employees of Mondi Packaging Frantschach GmbH mentioned in the interviews that their main concern is to build up this kind of continuity knowledge. "Knowledge for Production" is ideally suited to fulfil this requirement: every Best Practice article and each learning unit is provided with metadata for "Topics in connection with".


Moreover, basic knowledge and interactions, as well as continuities with preceding and subsequent processes, are illuminated. This was a first step that the end users welcomed and that was transferred into their workday routines.
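The "Topics in connection with" metadata can be pictured as simple cross-references between content items. The sketch below is only a schematic illustration of such a structure; the class and field names are invented here and do not describe the actual implementation of "Knowledge for Production".

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A Best Practice article or learning unit with continuity metadata."""
    item_id: str
    title: str
    process_step: str                      # where in the production chain it belongs
    topics_in_connection_with: list[str] = field(default_factory=list)

# Hypothetical items along the production chain
items = {
    "bp-017": ContentItem("bp-017", "Wet-end chemistry check", "stock preparation",
                          topics_in_connection_with=["bp-023"]),
    "bp-023": ContentItem("bp-023", "Press section felt care", "press section",
                          topics_in_connection_with=["bp-017", "lu-005"]),
    "lu-005": ContentItem("lu-005", "Drying section basics", "drying section"),
}

def related(item_id: str) -> list[ContentItem]:
    """Follow the continuity links of one item to its connected topics."""
    return [items[i] for i in items[item_id].topics_in_connection_with if i in items]

for linked in related("bp-023"):
    print(f"bp-023 is connected with {linked.item_id}: {linked.title} ({linked.process_step})")
```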

4. Summary and future directions

The above provided an introduction to a system design that was derived from the Rubik Cube and was introduced at Mondi Packaging Frantschach GmbH with great success. The determining factor for that success was the appropriate description of connections in paper production, together with the linking of theoretical knowledge and relevant practical knowledge gained from user feedback. At present, the extension of the Knowledge Management System through an e-Learning component is in progress. For the near future, a stronger integration of communicative tools is planned: for example, to foster the exchange of experiences between older and younger employees, there should be opportunities for communication and for discourse about the machines. In addition, more data should be integrated into the system, for example from the company's own data warehouse. This would improve the homogeneity of the operating system and of the end-user interface. There is, furthermore, the opportunity to exchange available explicit knowledge among the various company sites. The challenge will thus be to extend the Rubik knowledge cube to include a new level with a very interesting cultural dimension. The aim of these expansions, already started and further planned, is in line with the philosophy of Erno Rubik and includes further development, improvement and adaptation to new demands, in brief, the development of the Knowledge Management System to include new cube levels. This will enable and open up new dimensions of experience.

References

Anderson, J. R. (1985/2001) Kognitive Psychologie, Heidelberg et al.: Spektrum Akademischer Verlag; original: Cognitive Psychology and its Implications, 2nd ed., New York: Freeman, 1985.
Borchers, A., Herlocker, J., Konstan, J. and Riedl, J. (1998) "Ganging up on Information Overload", Computer, Vol 31, No 4, April 1998, pp 106-108.
Frankl, G. (2006) eUsability. Emergente Usability und Accessibility im Knowledge Management und eLearning, in print.
Huizinga, J. (1938/1997) Homo ludens: vom Ursprung der Kultur im Spiel, reprint, Reinbek bei Hamburg: Rowohlt.
Jochum, M. (1999) Wissenschaftswissen und Medienwissen oder: die Virtualität des Wissens, Innsbruck, Wien: Studien-Verlag (Hrsg. vom Senatsarbeitskreis für Wissenschaft und Verantwortlichkeit).
Mandl, H. and Huber, G. L. (1983) Emotion und Kognition, München et al.: Urban & Schwarzenberg.
Röll, F. J. (2003) Pädagogik der Navigation: selbstgesteuertes Lernen durch Neue Medien, München: kopaed-Verlag.
Toms, E. (2000) "Serendipitous Information Retrieval", Proceedings of the First DELOS Network of Excellence Workshop on Information Seeking, Searching and Querying in Digital Libraries, Zurich, Switzerland: European Research Consortium for Informatics and Mathematics.
Welsch, J. (2000) Globalisierung, neue Technologien und regionale Qualifizierungspolitik: welche Regionen sind die "Gewinner der Informationsgesellschaft?", Marburg: Metropolis-Verlag für Ökonomie, Gesellschaft und Politik.

Online sources

Puzzle-Shop online (2006) http://www.puzzle-shop.de/loesung.html, May 2006.
Rubik online (2006) www.rubiks.com, May 2006.
IBM (2006) www306.ibm.com/software/data/db2/db2olap/docs/v82docs/miner82/omugtfrm.htm?howitworks.htm, May 2006.


Innovation Focus and Middle-up-down Management Model: Empirical Evidence Nekane Aramburu, Josune Sáenz and Olga Rivera ESTE School of Management, University of Deusto, San Sebastián, Spain [email protected] [email protected] [email protected] Abstract: This paper is aimed at measuring to what extent manufacturing companies from the Basque Region (Spain) which place a greater emphasis on innovation have adapted their management context consistently, and in accordance with the middle-up-down management model put forward by Nonaka and Takeuchi (1995), and Nonaka, Toyama, and Byosière (2003). Moreover, the paper explores whether the degree of adoption of the aforementioned model is influenced by factors such as company size or technological level. Keywords: Knowledge creation; knowledge sharing; innovation; management systems; manufacturing firms; Spain

1. Conceptual framework

1.1 Catalysing factors in knowledge generation and innovation processes

Assuming that the capacity of organizations to innovate lies in their capacity to generate new knowledge (Nonaka and Takeuchi, 1995; Nonaka, Toyama, and Byosière, 2003), social interaction among individuals and groups constitutes a key element in this process. In accordance with the aforementioned authors, organizational knowledge is generated dynamically by means of social interaction among individuals and groups, both those belonging to the same organization and those belonging to different ones. To manage this process effectively, conventional top-down and bottom-up management models are not valid (Nonaka and Takeuchi, 1995), due to the fact that such models make it difficult for individuals and groups from different organizational levels to interact with each other, thus preventing the exchange of both tacit and explicit knowledge, as well as that of knowledge assets accumulated in the organization.

In order to create an organizational context which may facilitate interaction among individuals and groups and, therefore, the dynamic process of organizational knowledge creation, Nonaka and Takeuchi (1995) and Nonaka, Toyama, and Byosière (2003) put forward a new management model: the so-called middle-up-down model. This model considers all members of the organization to be major players in knowledge generation, interacting through both vertical and horizontal relations (Nonaka, 1988, 1991; Nonaka and Takeuchi, 1995). Specifically, the model is characterized by the wide scope of relations involving cooperation and exchange among top level, middle, and lower level managers, with middle managers being the ones who play a key role as links between the upper and lower organizational levels. Three essential aspects characterize this middle-up-down management model: the existence of a knowledge vision; the construction and dynamization of "BAs" (or areas in which knowledge is shared, created, and used); and the exchange of knowledge assets.

1.2 Relationship between the middle-up-down model and specific aspects of the management system

In the study carried out, two crucial aspects are analysed, which shape the middle-up-down model: (1) the construction and dynamization of "BAs", and (2) the exchange of knowledge assets. For the purposes of this research, the existence of a knowledge vision is considered as forming part of the organizational "BAs".


Indeed, the aforementioned vision may be seen as a mental space for knowledge exchange. Regarding the construction and dynamization of "BAs", as said earlier, these are the areas in which knowledge is shared, created and used; in other words, spaces - physical or virtual - where socialization, externalization, combination, and internalization processes could take place (Nonaka, Reinmoeller, and Senoo, 1998). Along these lines, Nonaka and Takeuchi (1995) and Nonaka, Toyama, and Byosière (2003) put forward a specific type of organizational structure - the so-called Hypertext Organization, which combines adhocracy with hierarchy - as the one which to a greater extent favours the existence of areas for sharing knowledge or "BAs".

On the other hand, and in addition to the type of organizational structure implemented (which may foster or, conversely, hinder the generation of areas for creating, sharing, and using knowledge), the strategy formulation process itself is considered to be a critical element in this field. The fact of carrying out a process of systematic strategic reflection implies that top management generates time and space – in other words, "BAs" – for sharing knowledge about what has occurred in the organization, what has functioned and what has not, and about what has happened and may be expected to happen in the environment. The purpose of all this is none other than to generate new "organizational theories in use" (Argyris and Schön, 1978) or new improved guidelines for action – or radically different ones – or to reaffirm those already embarked on. Moreover, such reflection processes involve dialogue and interaction not only among the members of the organization, but also sometimes with different stakeholders which may be affected by the company's action. In short, systematic reflection or strategy formulation processes foster the processes of knowledge generation and organizational learning (De Geus, 1988; Pedler, Boydell, and Burgoyne, 1991; Mayo and Lank, 1994; Drew and Smith, 1995) and, consequently, act as catalysers in innovation processes.

On the other hand, the degree of decentralization of the strategy formulation process is also an essential factor. This decentralization fosters the incorporation of a greater number of people and organizational levels into the process and, consequently, the creation of broader "BAs" for the exchange of knowledge, with organizational learning processes (Swieringa and Wierdsma, 1992) and innovation being favoured to a greater extent.

As far as the existence of a vision of knowledge is concerned, this is identified in the study with the existence of a global strategic framework as a reference point for the organization: that is, with a formal definition of the mission, vision, values, and policy of the company. To create value by means of knowledge generation activities, the organization needs a vision which may gear it all towards the type of knowledge it has to acquire, and which may foster spontaneous bonding on the part of individuals and groups involved in knowledge creation and sharing. In this sense, the formally defined mission, vision, values, and policy of the company delimit a shared mental space - or virtual "BA" - which determines the way knowledge is generated and distributed among organizational members.

Finally, another aspect considered in the survey as forming part of the existing organizational "BAs" is the one referring to the improvement of internal communication processes. Such processes may act as catalysers or, on the contrary, as obstacles for knowledge sharing, depending on their nature. If a relevant effort is made by the organization to improve its internal communication processes - making them more fluid and flexible - a good context for knowledge distribution is created, expanding organizational "BAs".


Moving on now to the exchange of knowledge assets (and also linked to the previous point), the configuration of the control and compensation systems, as well as the fostering of teamwork, may make a major contribution to this purpose. In the area of control, the use of tools such as the balanced scorecard and annual management plans (together with the associated information system) may to a great extent promote the exchange of knowledge. In particular, the balanced scorecard is a tool which fosters the interaction and information redundancy advocated by Nonaka, Toyama, and Byosière (2003). Indeed, as Simons (1995, 2000) points out, the balanced scorecard may be used perfectly well as an interactive control tool, promoting continued dialogue and discussion about those indicators which are related to elements of strategic uncertainty, as occurs with everything surrounding innovation. Additionally, the balanced scorecard enables a synthetic and overall vision to be obtained of what occurs in the company, and of the impact which each part has on the whole, thanks to the strategic maps that accompany it and to the cause-effect relationships reflected in them. In this way, the spreading of the information contained in the balanced scorecard enables each individual to go beyond the limits of their job, thus fostering mutual understanding and knowledge generation.

As regards annual management plans (in the preparation of which middle managers carry a lot of weight), these translate strategic directives of a general nature into more specific, quantified actions. In other words, the annual management plans in some way enable the role given by Nonaka and Takeuchi (1995) and Nonaka, Toyama, and Byosière (2003) to middle managers in their middle-up-down management model to be made concrete, as these plans make possible the materialization of the values, vision, and strategy generated by top management, by means of concepts and images which may guide their implementation more effectively.

On the other hand, the conscious promotion of teamwork by top management throughout the company may help to reinforce the exchange of knowledge among people at all organizational levels. In the same way, the establishment of incentives linked to teamwork may help to strengthen this aspect.

2. Research method

2.1 Research hypotheses

According to the aforementioned conceptual framework, the following hypotheses have been tested:

H1: The innovation focus of companies is positively related to the implementation of management systems which are in accordance with the so-called middle-up-down management model. This means that:

H1(a): The innovation focus of companies is positively related to the implementation of management systems which foster the creation and dynamization of "BAs", or areas in which knowledge is shared, created, and used.

H1(b): The innovation focus of companies is positively related to the implementation of management systems which foster the exchange of knowledge assets.

H2: The degree of adoption of management systems which are in accordance with the so-called middle-up-down management model is stronger in the case of medium- and large-sized companies, which have a greater innovation focus.

H3: The degree of adoption of management systems which are in accordance with the so-called middle-up-down management model is stronger in the case of those companies which belong to medium-high and high technology industries, and which have a greater innovation focus.

2.2 Target population and sample of companies surveyed

The population subject to study is limited to the set of manufacturing companies from the Spanish region of the Basque Country, excluding so-called micro-companies.


Once the companies making up the target population had been identified (thanks to the use of the SABI database, which contains the registered annual accounts of over 190,000 Spanish companies), and in order to carry out a statistical study which could allow minimally significant conclusions to be obtained, a representative sample of the configuration of the manufacturing industry of the Basque Region was selected, comprising a total of 199 companies. This general sample and the subsequent subsamples considered (see Table 1) meet the minimum size requirements established for the application of Structural Equation Modelling (SEM) based on Partial Least Squares (PLS). The minimum size is determined by the most complex regression analysis to be carried out (ten times the number of variables involved in that regression); in this case, this corresponds to 50 companies.

Table 1: Number of companies surveyed

  According to size:             Medium- and large-sized companies = 89;  Small companies = 110
  According to technology level: Medium-high or high technology companies = 68;  Low or medium technology companies = 121
  Total = 199

On the other hand, and in order to assess the different variables subject to study in the research, an ad hoc questionnaire was designed and addressed to the 199 Chief Executive Officers (CEOs) of the companies selected in the sample. The questionnaire was subjected to prior trials in order to check the extent to which it was understood by the CEOs concerned, the time required to respond to it, and the extent to which the answer options matched the real situations faced by the companies.

2.3 Statistical analysis

The statistical analysis has comprised two types of studies:

2.3.1 Descriptive analysis

First, a descriptive study has been carried out, aimed at measuring the innovation focus developed by manufacturing companies from the Basque Region as a consequence of the changes they experienced during the period 2000-2003. To this end, the degree of introduction of explicit mechanisms which attempt to strengthen the capacity for organizational innovation has been analysed: the systematic allocation of part of the annual budget to the development of new challenges; linking the compensation policy to the generation of new ideas; and the creation of teams systematically devoted to the generation of new initiatives. In particular, the innovation focus is considered to be low where none of the aforementioned mechanisms has been implemented, medium where just one of them has been implemented, and high where two or more have been put in place. On the other hand, in the case of those companies showing the greatest innovation focus, the degree of presence of each component of the management system that supports a middle-up-down management model has been identified, both for the general sample and for the different subsamples considered.
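Read literally, the classification just described is a simple counting rule over the three mechanisms. The following sketch is merely a restatement of that rule in code; the function and argument names are our own, and the reading of "medium" as exactly one mechanism is our interpretation of the text.

```python
def innovation_focus(budget_for_new_challenges: bool,
                     compensation_linked_to_ideas: bool,
                     teams_for_new_initiatives: bool) -> str:
    """Classify a company's innovation focus from the three mechanisms
    described in the descriptive analysis (low / medium / high)."""
    count = sum([budget_for_new_challenges,
                 compensation_linked_to_ideas,
                 teams_for_new_initiatives])
    if count == 0:
        return "low"
    if count == 1:
        return "medium"
    return "high"          # two or more mechanisms in place

# Example: a firm with a dedicated budget and innovation teams, but no
# idea-linked compensation, would be classed as having a high focus.
print(innovation_focus(True, False, True))   # -> "high"
```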

2.3.2 Multivariate analysis

For the hypotheses of the research to be tested, Structural Equation Modelling (SEM) based on Partial Least Squares (PLS) has been applied. For this purpose, PLS-Graph software (version 3.00) has been used (Chinn and Frye, 2003). The multivariate analysis has comprised two phases:

2.3.3 Phase one

In accordance with the conceptual framework previously explained, the proposed model for the analysis carried out in this phase encompasses three constructs:


• Innovation focus: This is the exogenous construct of the model and is made up of three dummy indicators: (1) the systematic allocation of part of the annual budget to the development of new challenges (no/yes); (2) linking compensation policy to the generation of new ideas (no/yes); and (3) the creation of teams systematically devoted to the generation of new initiatives (no/yes). The aforementioned indicators are reflective in nature: that is, they "reflect" how strong the innovation focus of the company is.

• Generation and dynamization of "BAs": This is one of the two endogenous constructs of the model and is made up of five indicators: (1) the existence of a formal definition of company mission, vision, values, and policy (no/yes); (2) the existence of a systematic strategy formulation process (no/yes); (3) the existence of a decentralized strategy formulation process (no/yes); (4) the type of organizational structure in place (learning supportive or not learning supportive); and (5) the effort made to improve internal communication processes (measured on a scale of 1 to 5). These indicators are formative in nature: that is, they give rise to the existence and dynamization of organizational "BAs".

• Exchange of knowledge assets: This is the second endogenous construct of the proposed model and is made up of four indicators: (1) the degree of use of annual management plans (on a scale of 1 to 5); (2) the degree of use of the balanced scorecard (on a scale of 1 to 5); (3) the effort made in order to foster teamwork in the company (on a scale of 1 to 5); and (4) the relevance of teamwork in compensation policies (on a scale of 1 to 5). Again, these indicators are formative in nature: that is, they favour the exchange of knowledge assets within the company.

In order to assess the reliability and validity of the measurement model, several tests have been carried out. In the case of the construct made up of reflective indicators ("Innovation focus"), individual item reliability, construct reliability, convergent validity, and discriminant validity have been checked. In the case of the constructs made up of formative indicators ("Generation and dynamization of BAs", and "Exchange of knowledge assets"), potential collinearity has been explored. Finally, the validity of the structural model and its predictive power have been tested, by means of bootstrapping techniques and the Stone-Geisser test, respectively.

2.3.4 Phase two

In order to verify the existence or non-existence of significant differences between the diverse groups of companies considered, two multigroup analyses have been carried out: medium- and large-sized firms versus small companies, and medium-high or high technology level firms versus low and medium technology level companies.

3. Findings of the research

3.1 Results of the descriptive analysis

The descriptive analysis has shown that significant differences exist in the innovation focus of firms, according to their size and technology level (see Table 2). On the other hand, Tables 3 and 4 show the degree of presence of each component of the middle-up-down management model in the case of those companies with a higher innovation focus, both in the general sample and in the different subsamples considered.


Table 2: Innovation focus of companies

                      According to size                                        According to technology level
            General   Medium- and     Small       Difference       Medium-high or     Low and medium   Difference
                      large-sized     companies   significance     high technology    technology       significance
  Low       31.15%    22.47%          38.18%      p < 0.001        20.59%             39.67%           p < 0.1
  Medium    28.14%    20.22%          34.54%                       32.35%             28.10%
  High      40.70%    57.30%          27.27%                       47.06%             40.50%

Table 3: Degree of presence of the variables linked to the generation and dynamization of BAs in the case of those companies with the highest innovation focus

                                                                               General   Medium- and    Small       Medium-high or    Low and medium
                                                                                         large-sized    companies   high technology   technology
  Formal definition of mission, vision, values, and business policy           91.4%     94.1%          86.7%       93.8%             89.8%
  Systematic strategy formulation process                                      65.4%     70.6%          56.7%       65.6%             65.3%
  Decentralized strategy formulation process                                   53.1%     54.9%          50.0%       53.1%             53.1%
  Learning supportive organizational structure                                 27.2%     21.6%          36.7%       15.6%             34.7%
  Effort made to improve internal communication processes (scale of 1 to 5)    3.85      3.82           3.90        3.84              3.86

As far as the "generation and dynamization of BAs" is concerned, the main component of the management system implemented in firms is the formal definition of mission, vision, values, and business policy. The elements related to the strategy formulation process and to internal communication processes are also quite important. However, learning supportive organizational structures (process or project structures, and matrix structures which contemplate a project axis – hypertext organization – or a process axis) are not very frequent. Nevertheless, in the case of small companies and of low and medium technology level firms, these types of organizational structures have a stronger presence.

Table 4: Degree of presence of the variables linked to the exchange of knowledge assets in the case of those companies with the highest innovation focus (on a scale of 1 to 5)

                                                      General   Medium- and    Small       Low and medium   Medium-high or
                                                                large-sized    companies   technology       high technology
  Use of annual management plans                      4.26      4.31           4.17        4.41             4.16
  Use of the balanced scorecard                       3.73      3.84           3.53        3.78             3.69
  Effort made in order to foster teamwork             4.23      4.27           4.17        4.25             4.22
  Relevance of teamwork in compensation policies      3.42      3.29           3.63        3.50             3.37

As regards the "exchange of knowledge assets", the main aspect of the management system is the use of annual management plans. The second position belongs to the fostering of teamwork, the third to the use of the balanced scorecard, and the worst positioned element is the relevance of teamwork as a part of the compensation system. The degree of relative importance of each element is quite homogeneous in all categories of firms considered.

3.2 Results of the multivariate analysis

3.2.1 Results of phase one

Referring to the validity and reliability of the measurement model, the following can be observed:

3.2.2 Reflective construct

In the case of the reflective construct ("Innovation focus"), four aspects have been studied: individual item reliability (the degree of reliability of each one of the indicators making up the construct); construct reliability (the degree of reliability of the set of indicators which form the construct as a whole); convergent validity (the percentage of the variance of the construct that is explained by the indicators which make it up); and discriminant validity (whether the construct shares a higher variance with its own indicators than with other constructs). As regards the evaluation of individual item reliability, the results obtained are shown in Table 5.

Table 5: Innovation focus – Individual item reliability (loadings)

                                                                      General   Medium- and    Small       Medium-high or    Low and medium
                                                                                large-sized    companies   high technology   technology
  Systematic allocation of part of the annual budget to new
  challenges                                                          0.804     0.869          0.714       0.835             0.799
  Linking compensation policy to the generation of new ideas          0.672     0.640          0.702       0.468             0.771
  Creation of teams systematically devoted to new initiatives         0.778     0.772          0.764       0.722             0.786

According to Carmines and Zeller (1979), the individual reliability of an item is acceptable when its loading with respect to the construct it belongs to is higher than 0.7. However, loadings of 0.5 or 0.6 may be acceptable in early stages of scale development (Chinn, 1998), as is the case in this research. Therefore, the individual reliability of the items making up the "Innovation focus" construct can be said to be acceptable for almost all the items considered in the research (they are higher than 0.6). Only one item infringes the aforementioned criterion: "linking compensation policy to the generation of new ideas", in the case of medium-high or high technology level firms (loading = 0.468). Nevertheless, and considering that this value is very close to 0.5, it has finally been retained as a valid indicator. Secondly, as far as construct reliability is concerned, the results obtained are the following (see Table 6).

Table 6: Innovation focus – Construct reliability (composite reliability)

                           General   Medium- and    Small       Medium-high or    Low and medium
                                     large-sized    companies   high technology   technology
  Composite reliability    0.797     0.808          0.771       0.724             0.829

In order to consider construct reliability acceptable, composite reliability should be higher than 0.8. Nevertheless, in early stages of research (as is the case here) values that at least exceed the threshold of 0.7 are suitable (Nunnally, 1978). Hence, in this research construct reliability is acceptable, since it is higher than 0.7 (0.797 for the general sample).


Thirdly, convergent validity has been examined, as shown in Table 7.

Table 7: Innovation focus – Convergent validity (AVE)

          General   Medium- and    Small       Medium-high or    Low and medium
                    large-sized    companies   high technology   technology
  AVE     0.568     0.587          0.529       0.479             0.617

The average variance extracted should be higher than 0.5 (Fornell and Larcker, 1981). Therefore, in this research the AVE is acceptable (> 0.5) in almost all the groups of firms considered, except in the case of medium-high or high technology level firms. However, and considering that the value of the AVE in this case is only slightly lower than 0.5 (0.479), convergent validity has also been deemed appropriate here. Finally, discriminant validity has been analysed in order to verify that, for all groups of companies, the variance that the "Innovation focus" construct shares with its own indicators is higher than the variance it shares with other constructs of the model (squared correlations between constructs). The result of the discriminant validity tests is positive (for space reasons the tables have been omitted).
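As a check on these figures, composite reliability and AVE can be recomputed directly from the loadings reported in Table 5, assuming the standard Fornell-Larcker formulas (AVE as the mean of the squared loadings; composite reliability as the squared sum of the loadings over that quantity plus the summed error variances). The sketch below does this for the general-sample loadings and reproduces the 0.797 and 0.568 reported in Tables 6 and 7.

```python
def composite_reliability(loadings):
    """Composite reliability: (sum of loadings)^2 / ((sum)^2 + sum of error variances),
    with the error variance of each indicator taken as 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Loadings of the three "Innovation focus" indicators, general sample (Table 5)
general_loadings = [0.804, 0.672, 0.778]

print(round(composite_reliability(general_loadings), 3))        # 0.797 (Table 6)
print(round(average_variance_extracted(general_loadings), 3))   # 0.568 (Table 7)
```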

3.2.3 Formative constructs

In the case of the two formative constructs ("Generation and dynamization of BAs" and "Exchange of knowledge assets"), a test geared towards exploring the potential collinearity between the indicators which form each construct has been carried out. As a result of this, the absence of collinearity has been proven.

As regards the assessment of the structural model, the following has been checked. First, the strength of the hypotheses formulated in the research has been measured (analysis of path coefficients). The results obtained are shown in Table 9.

Table 9: Path coefficients

                                  General   Medium- and    Small       Low and medium   Medium-high or
                                            large-sized    companies   technology       high technology
  Innovation focus – BAs          0.403     0.383          0.420       0.415            0.445
  Innovation focus – Exchange     0.487     0.508          0.449       0.476            0.528

In order to verify the degree of significance of the path coefficients, a "bootstrapping" technique has been applied (for 500 subsamples), in order to obtain the "t" statistics for all groups of firms. In all cases, the "t" value is highly significant, with a critical probability lower than 0.001 (Chinn, 1998). Hence, it can be stated that hypotheses H1(a) and H1(b) are proven. Secondly, the variance explained regarding the endogenous constructs and the prediction power of the model have been analysed (see Table 10).

Table 10: Variance explained and prediction power

                General            Medium- and        Small              Medium-high or     Low and medium
                                   large-sized        companies          high technology    technology
                R2       Q2        R2       Q2        R2       Q2        R2       Q2        R2       Q2
  BAs           0.162    -0.077    0.147    -0.085    0.176    -0.071    0.172    -0.064    0.198    0.120
  Exchange      0.238    -0.010    0.258     0.001    0.202    -0.053    0.227    -0.044    0.279    0.026

As regards the variance explained, it could be said that this is quite high, in the sense that a single exogenous construct ("Innovation focus") explains 16.2% of the variance in the case of the "Generation and dynamization of BAs", and 23.8% of the variance in the case of the "Exchange of knowledge assets" (for the whole sample).


Therefore, it can be concluded that the implementation of a middle-up-down management model is explained to a large extent by the emphasis the firms put on innovation. On the other hand, as far as the prediction power of the model is concerned, the "cross-validated redundancy" (Q2) parameter has been analysed. This should be higher than 0 for the model to be considered to have predictive validity (Chinn, 1998). In this case, the Q2 corresponding to the whole sample is slightly lower than 0 (-0.077 for the "BAs" construct, and -0.010 for the "Exchange" construct). Nevertheless, the value of this parameter is very close to 0, so it could be considered that the model is at the threshold of having predictive power. In the particular case of low and medium technology level firms, Q2 is higher than 0, as is the case for medium-sized and large companies for the "Exchange" construct. Hence, in both cases, the predictive power of the model is higher than in the general case.

3.3 Results of phase two

In order to verify the existence or non-existence of significant differences between the diverse categories of companies, a multigroup analysis has been carried out. This has been done for the groups of firms differentiated according to their size and their technology level. As a result of the multigroup analysis, a "t" parameter is obtained that follows a Student's t distribution with m+n-2 degrees of freedom. To obtain the value of "t", the following formula has been applied, where:

  m = size of sample 1 (first group)
  n = size of sample 2 (second group)
  SE = standard error (from the bootstrap)

In all cases, the value of "t" is not significant (p > 0.1). Therefore, it can be concluded that there are no statistically significant differences between the firms belonging to each group, according to their size and technology level. Hence, hypotheses H2 and H3 are not proven.
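The formula itself, given as an equation image in the original, is not reproduced in this text. The sketch below therefore implements the parametric multigroup comparison commonly used with PLS (a pooled-standard-error t statistic with m+n-2 degrees of freedom), which matches the quantities described above; whether this is exactly the expression the authors applied is an assumption. The standard errors in the example call are invented for illustration and are not taken from the paper.

```python
import math

def multigroup_t(path1: float, se1: float, m: int,
                 path2: float, se2: float, n: int) -> float:
    """Parametric comparison of one path coefficient across two groups:
    pooled-standard-error t statistic with m + n - 2 degrees of freedom.
    path1/path2 are the group path coefficients, se1/se2 their bootstrap
    standard errors, m/n the group sample sizes."""
    pooled = math.sqrt(((m - 1) ** 2 / (m + n - 2)) * se1 ** 2
                       + ((n - 1) ** 2 / (m + n - 2)) * se2 ** 2)
    return (path1 - path2) / (pooled * math.sqrt(1 / m + 1 / n))

# Illustrative call only: the standard errors (0.09, 0.08) are hypothetical.
# Path coefficients and group sizes are those reported for "Innovation focus - BAs"
# (medium- and large-sized firms, m = 89, versus small firms, n = 110).
t = multigroup_t(0.383, 0.09, 89, 0.420, 0.08, 110)
print(round(t, 3), "with", 89 + 110 - 2, "degrees of freedom")
```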

4. Conclusions

The most important conclusion to be drawn is that the first hypothesis of the research - H1: the innovation focus of companies is positively related to the implementation of management systems which are in accordance with the so-called middle-up-down management model - and its associated sub-hypotheses, H1(a) and H1(b), are proven. In other words, companies with a higher innovation focus tend to implement a management system whose characteristics are those of the middle-up-down management model (Nonaka and Takeuchi, 1995; Nonaka, Toyama, and Byosière, 2003). On the other hand, it is proven that significant differences exist between firms belonging to different categories according to size and technology level, in terms of the emphasis they put on innovation. In this sense, medium-sized and large firms and medium-high or high technology level companies are those with a higher innovation focus. However, in the case of companies with a higher innovation focus, there are no significant differences with regard to the degree of presence of the different components of the management system that support the middle-up-down management model, depending on their size and technology level (hypotheses H2 and H3 are rejected).


5. References

Argyris, C. and Schön, D. (1978), Organizational Learning: A Theory of Action Perspective, Addison-Wesley, Boston, Massachusetts.
Bagozzi, R. P. (1994), "Structural Equation Models in Marketing Research: Basic Principles", in Bagozzi, R. P. (Ed.), Principles of Marketing Research, pp. 317-385, Blackwell, Oxford.
Carmines, E. G. and Zeller, R. A. (1979), Reliability and Validity Assessment, Sage University paper series on quantitative applications in the social sciences, N. 07-017, Sage, Beverly Hills, CA.
Chinn, W. W. and Gopal, A. (1995), "Adoption Intention in GSS: Relative Importance of Beliefs", Database, 26, pp. 42-64.
Chinn, W. W. (1998), "The Partial Least Squares Approach to Structural Equation Modelling", in Marcoulides, G. A. (Ed.), Modern Methods for Business Research, pp. 295-336, Lawrence Erlbaum Associates, Mahwah, NJ.
Chinn, W. W. and Frye, T. (2003), PLS-Graph Version 3.00, Build 1017, University of Houston, Houston.
Diamantopoulos, A. and Winklhofer, H. M. (2001), "Index Construction with Formative Indicators: an Alternative to Scale Development", Journal of Marketing Research, 38, pp. 269-277.
Drew, S. and Smith, P. (1995), "The Learning Organization: Change Proofing and Strategy", The Learning Organization, vol. 2, nº 1, pp. 4-14.
Fornell, C. (1982), "A Second Generation of Multivariate Analysis: an Overview", in Fornell, C. (Ed.), A Second Generation of Multivariate Analysis, Vol. 1, pp. 1-21, Praeger Publishers, New York.
Fornell, C. and Larcker, D. F. (1981), "Evaluating Structural Equation Models with Unobservable Variables and Measurement Error", Journal of Marketing Research, 18, pp. 39-50.
Geus, A. P. De (1988), "Planning as Learning", Harvard Business Review, March-April, pp. 70-74.
Kaplan, R. S. and Norton, D. P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, Massachusetts.
Kleinbaum, D. G., Kupper, L. L. and Muller, K. E. (1988), Applied Regression Analysis and Other Multivariate Analysis Methods, PWS-Kent Publishing Company, Boston.
Mayo, A. and Lank, E. (1994), The Power of Learning, Institute of Personnel and Development, London.
Nonaka, I. (1988), "Creating Organizational Order out of Chaos: Self-Renewal in Japanese Firms", California Management Review, Spring, pp. 57-93.
Nonaka, I. (1991), "The Knowledge-Creating Company", Harvard Business Review, vol. 69, nº 6, pp. 96-104.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, Oxford.
Nonaka, I., Konno, N. and Toyama, R. (1998), "Leading Knowledge Creation: a New Framework for Dynamic Knowledge Management", paper presented at the Second Annual Knowledge Management Conference, Haas School of Business, University of California, Berkeley.
Nonaka, I., Reinmoeller, P. and Senoo, D. (1998), "The Art of Knowledge: Systems to Capitalize on Market Knowledge", European Management Journal, vol. 16, nº 6.
Nonaka, I., Toyama, R. and Byosière, P. (2003), "A Theory of Organizational Knowledge Creation: Understanding the Dynamic Process of Creating Knowledge", in Dierkes, M., Berthoin, A., Child, J. and Nonaka, I. (Eds.), Handbook of Organizational Learning and Knowledge, Oxford University Press, Oxford.
Nunnally, J. C. (1978), Psychometric Theory, McGraw-Hill, New York.
Pedler, M., Boydell, T. and Burgoyne, J. (1991), The Learning Company, McGraw-Hill, London.
Simons, R. (1995), Levers of Control: How Managers Use Innovative Control Systems to Drive Strategic Renewal, Harvard Business School Press, Boston, Massachusetts.
Simons, R. (2000), Performance Measurement and Control Systems for Implementing Strategy, Prentice Hall, Upper Saddle River, NJ.
Swieringa, J. and Wierdsma, A. F. (1992), Becoming a Learning Organization, Addison-Wesley, Amsterdam.
Wang, C. L. and Ahmed, P. K. (2003), "Organizational learning: a critical review", The Learning Organization, vol. 10, nº 1, pp. 8-17.


Four Valued Logic: Supporting Complexity in Knowledge Sharing Processes Peter Bednar, Christine Welch and Vasilios Katos University of Portsmouth, Buckingham Building, Portsmouth, UK [email protected] [email protected] [email protected] Abstract: An essential problem of ‘knowledge management’ is the impossibility of codifying ‘knowledge’ which is embedded in human agents. It can never be straightforward for members of an organization to share what they know with one another. Such a process might be facilitated, but would be difficult to ‘manage’. In recognition of this, organizations have sought ways to support knowledge sharing processes, ranging from document-based repositories to on-going mentor/trainee relationships. From day to day, all individuals will need to make choices relating to their organizational roles. A need to recognize the element of choice and judgment available to an individual requires an ability to distinguish and discriminate between different categories of argument or assertion. When attempting to deal with problems, people are capable of using multi-valued logic in a process of creating assertions. It follows, therefore, that any support mechanism based only on bi-valued logic might serve to constrain and inhibit exercise of judgment. Using four-valued logic, it is possible to codify, not knowledge, but categories of argument/assertion. By this means, improved support may be provided for a knowledge-sharing environment, i.e. with a purpose to support knowledge management processes. In this paper, the authors draw on previous research in contextual analysis, complex methods of inquiry and paraconsistent logic in order to develop these ideas. A model of four-valued logic is described and applied for the purpose of categorising arguments. Keywords: Contextual analysis, complex methods, multi-valued logic, knowledge sharing.

1. Introduction

The authors' position is that knowledge is essentially embedded in people, who have created it for themselves through learning experiences. For this reason, the authors prefer to discuss 'knowing' as a subjective process, rather than 'knowledge' in an objective sense (Bednar et al, 2005). It is suggested that the processes through which people create and recreate their knowing are at once deeply personal, contextual and social (Bateson, 1972). It follows therefore that complex methods of inquiry are appropriate (Bednar, 2000). We recognize complexity in organisational knowledge sharing processes. Because of this complexity, human analysts would benefit from technological support in their practice of inquiry. We therefore need to look for ways to support the development of technological aids for complex methods, which can deal with the inherent contradictions and paradoxes in knowledge sharing processes.

When it comes to process support for complex methods of inquiry, we see great possibilities in drawing upon the potential of paraconsistent logic (Anderson, 2002). We see paraconsistent logic as a variant of multivalued logic which is tolerant of contradictions and paradoxes. Multivalued and paraconsistent logic have a long history; see, for instance, the discussion of the work of Charles Sanders Peirce, who extended the truth table method to three-valued logic as early as 1909 (Recher, 1969).

It is possible to envisage the development of tools which may act as intelligent agents. Intelligent strategies may include software that is able to keep up with human abilities to categorise and create resolutions that are inherently self-contradictory. They may also incorporate language software tools: for example, software that can store and interpret grammatical rules and could potentially be used to analyse text to determine logical conclusions from statements made. Human analysts could then review such conclusions for meaningfulness in context. In addition to this, further developments in voice recognition software can be envisaged. Systems that could 'listen' to discussion and transform it into coherent discourse, and then analyse (and categorise) the results, may become possible. Such systems would not at one time have been considered feasible, due to the huge processing power that they would demand. However, alongside developments in application software, progress is now being made in personal computing, using multiprocessor hardware and associated operating systems capable of multitasking. For example, Justin Rattner of Intel®, addressing a developers' forum in 2005, suggested that intelligent language processing agents will be feasible by 2015.


This could enable sophisticated software tools to be made available, and affordable, for everyday use by managers without investing in specialist hardware. Even with today's technologies we can foresee a time when software support may become available to users for the systematic and logical analysis of resolutions. Human analysts could then be helped either to categorize resolutions and use them (inclusive of their inconsistencies), or to isolate unexpected inconsistencies and attempt to reframe those resolutions. While all these things may be possible in the future, the main problem is not necessarily to develop the paths of existing technology to higher precision or correctness. The real difficulty is to find ways to apply that excellence; that is, we are not looking for innovation in technology but for innovation in the application of technology. In this we include not only algorithmic progress, but also the development of the application of algorithms. The problem with human knowledge sharing is not only a matter of innovation or excellence in technology, but of developing patterns of application supporting human innovation and excellence. Therefore, a model of four-valued logic is not, to the authors of this paper, an issue of mathematics; it is about applying logic for the purpose of human knowledge sharing processes.

Organizational knowledge, the authors believe, cannot be seen as a commodity. Knowledge, as opposed to data or information, is related to an individual person's ability to act. It is embedded in real human beings, and even they may not always be aware that they have it. It is clear that procedural knowledge cannot easily be recorded or passed on: as an illustration of this, any person who knows how to drive may try to imagine giving driving lessons to a novice in the form of written instructions (Polyani, 1967; Nonaka, 1991). If knowledge is to be exploited in an organizational context, people must be empowered and enabled to interact in 'sharing' ways. They must be enabled to act in order to render their knowledge useful to an organization. This embedded quality of human knowledge makes sharing problematic and makes any attempt at codification extremely challenging. Many would argue that 'knowledge management' is for all practical purposes impossible, and that managers can only attempt to facilitate, support and encourage knowledge creation and sharing (Wilson, 2002).

In this paper, the authors draw on previous research in contextual analysis, complex methods of inquiry and paraconsistent logic (Bednar et al, 2005). A model of four-valued logic is described and applied for the purpose of categorising arguments. The following section presents examples of four-valued logic as an application of human categorisation of arguments, including uncertainty, which may or may not incorporate contradictions. Then, in section three, a model of paraconsistent logic is explored. This model has a pattern of resolutions that we suggest is sufficiently parallel with human application of four-valued logic to be applied successfully. Finally, in the last section of the paper, the authors outline their main conclusions.

2. Human reasoning

Research in contextual analysis shows that people are capable of exercising choice and judgement, and are able to distinguish and discriminate between categories of argument or assertion. Resolutions may include contradictions, and in everyday life complexity is regularly acted upon (e.g. not bypassed or regarded as a problem in familiar environments). The authors of this paper suggest that it is our common experience in everyday life that, when posing a question to someone, we might receive the answer 'it depends'. Here, an individual gives an answer conditional on obtaining further data about the context of the question. We infer from this that people might be comfortable with multi-valued logic when dealing with everyday problems. Things are not necessarily assessed on a scale of 'truth' or 'falsity' (bi-valued logic). If 'true' or 'false' is inadequate, we might consider a three-valued model - 'yes', 'no' or 'it depends'. However, this still does not include the possibility that the person interrogated is baffled. She might say 'I have no idea', which is the essence of uncertainty.

Whilst there may be occasions when it is beneficial to break problems down and simplify them, this need not be done as a matter of course. In our view, an approach involving routine and systematic attempts to simplify inquiry is reductionist. Attempting to identify every aspect of a problem situation separately, in isolation from its context, in order to establish the 'truth' or 'falsity' of certain key parameters ignores emergence. Instead, we would advocate 'complexification' of inquiry, creating a multi-valued assessment and categorization through elaboration upon individual expressions of 'it depends'. We need a model which we can use as an analogy to the kind of human reasoning we have described. Figure 1 illustrates this in relation to knowledge sharing.


Figure 1: Categories of arguments (the figure arranges four categories of assertion between the poles of certainty and uncertainty and between negative and positive belief: assertion of a negative alternative, assertion of a positive alternative, assertion of a possible alternative, and assertion of ignorance of a possible alternative)
The authors see emphasis on the use of bi-valued logic as restrictive of individual choice. In everyday life, human beings are confronted with the need to make choices, and it is important to examine the element of choice and judgement available to individuals. The categorization presented in Figure 1 outlines a phenomenon: decision makers are able to keep in mind that they are asserting beliefs about truths rather than truths themselves - exercising judgement. All four alternatives can be seen to be variants of the answer 'it depends' (the main difference lies in the character and degree of the espoused certainty). The logic also implies that choices need to be made for each individual alternative. Any assertions made, even if assumed to be generally valid, are not obviously valid under all conditions and out of context. Each assertion requires a decision, and each decision is chosen as a result of an assessment of the risk of being 'wrong', where the fit between assumptions of context and generalization is taken into consideration. This phenomenon is the result of a strategy for dealing with uncertainty in context. It happens as an aspect of negotiation, when people (analysts) try to make sense of their own, and each other's, narratives regarding their understandings and definitions of a problem space (see the example in Figure 2 below).

Example: Let us assume that you and I are in Portsmouth, on the south coast of England.

Assertion of positive alternative: If you asked me 'Is it possible to get to Southampton this afternoon?' I would answer 'Yes, I believe so. It is twenty miles by road or rail, and there are plenty of services.'

Assertion of negative alternative: If you asked me 'Is it possible to get to Buenos Aires this afternoon?' I would answer 'No, I doubt it. Even if there was a flight from the local airport today, the distance is so great that you would not arrive until tomorrow.'

Assertion of possible alternative: If you asked me 'Can I get to Paris this afternoon?' I would answer 'I expect so. It could be possible if there is an afternoon flight from the local airport. Assuming seats are available and you can afford the fare, then perhaps you can.'

Assertion of ignorance of possible alternative: If you asked me 'Can I get to Timbuktu this afternoon?' I would say 'I have no idea. I am not sure where it is or even which continent it is in. I do not know whether there are services from Portsmouth or even direct flights from the UK.'

Figure 2: Example of four-valued logic

A major aspect of this kind of sense making and negotiation is the effort made both to create individual understanding and to categorize narratives. This negotiation surrounds agreement on the categorisation of narratives regarding problem re-definition and solutions. Complex methods of inquiry are by their nature time consuming and demanding to manage, and it would be advantageous to be able to make use of software support for managing both the process of analysis and the resulting data. If we wish to develop suitable systems to provide such support, a model of four-valued logic may be helpful. NB: we have considered fuzzy logic in this context; however, this is an approach used to structure uncertainty into certainty.


When using complex methods we wish to maintain uncertainty within a problem space but transform it into a more structured form, i.e. ambiguity. Our purpose is to enrich understandings of a problem space rather than to search for an immediate solution. It should be noted, in relation to the model discussed below, that the mathematical underpinning of its logic need not map onto the logic of what a human observer would recognize as a real-world inquiry. In establishing four values, we can allocate to them any classification that is useful in a particular observer's real-world context. The particular variant of a truth table chosen for any specific situation is influenced by context and by the beliefs of the participating stakeholders. For example, we would not expect a truth table produced from the beliefs of a group of Christian stakeholders to be the same as one produced from the beliefs of a group of Buddhists.

3. Logical model

Although it could be argued that Aristotle's Law of Non-Contradiction may guide human reasoning, human behaviour often exhibits true contradictions (dialetheism), that is, cases where both a proposition P and its negation ¬P are true. Societal approaches and aspects of culture encourage these conflicting modes. Most legal systems, for example, are constructed on the assumption that a person is either guilty or not guilty. In practice, however, uncertainty is catered for in many systems by application of the principle of being deemed innocent until proven guilty. Probabilities of events and facts appear in some legal systems, such as that of the UK, due to the "beyond reasonable doubt" doctrine, as these probabilities may carry a dose of uncertainty. A similar situation may be found in science and scientific research. While historically founded on the 'cause and effect' paradigm, revolutions in the scientific paradigm in the 20th century have resulted in a situation where contemporary science is based not on 'laws' but on probabilities. This incorporates variations of Heisenberg's famous uncertainty principle. Empirical research is geared towards understanding patterns of behaviour, given a set of 'input' conditions and assumptions. The consequence of this uncertainty principle is that 'no causal law can be proved anymore, but only statistical laws' (Einstein, 1990, p59, cited in Monod, 2004, p108).

We acknowledge that systems based on bi-valued logic are adequate in assisting us to perform many tasks, and can provide an effective problem-solving tool. Although this may hold true most of the time, bi-valued logic has received considerably more credit than its real value merits, and has entered realms 'incompatible' with its nature. In our view, human behaviour may not be regarded as deterministic. Each human individual observes phenomena and interprets them from her own unique perspective. Human beings have free will to adapt their behaviour to their perceptions in any feasible way. The greater the experienced complexity of the problem situation, the greater is likely to be the uncertainty experienced by the individuals. In the context of a complex problem space, therefore, it is likely that the behaviour patterns of different individuals will vary widely. In our experience, very many computational models are based on bi-valued logic; historically this has included the models on which research in artificial intelligence was based. Consequently, efforts to create autonomous intelligent machines may encounter the Law of Non-Contradiction as a barrier. Researchers began to realise that a system that is not able to accommodate contradictions will be far from 'intelligent'.

Given the system in Figure 1, we can semantically define it as a four-valued logic system as follows. In this example we use negation (logical NOT), conjunction (logical AND), disjunction (logical OR), equivalence (logical XNOR) and implication (IF THEN). The four values are 0, 1, X, Y, corresponding to FALSE, TRUE, PARADOX and UNKNOWN. More specifically, the first two values correspond to the classic bi-valued logic values. The third value arises when a statement can be both true and false, allowing contradictions and dialetheias. Since we are modelling human reasoning, we need to include a fourth value representing the void, i.e. when a person refuses to commit to an opinion. These four values, together with their truth table definitions, are adequate for modelling human conversational interactions in the context of a proposed system. The truth tables essentially address the different types of combined statements or evidence, which are formally referred to as operations. It is of great importance to develop a logic in which the different operations are understood and provide "meaningful" results.

32

Peter Bednar, Christine Welch and Vasilios Katos

different operations are understood and provide “meaningful” results. In order to demonstrate this, we refer to the example in Figure 2. Table 1: Truth tables of the four valued logic model P

Q

0 0 0 0 1 1 1 1 X X X X Y Y Y Y

0 1 X Y 0 1 X Y 0 1 X Y 0 1 X Y

Negation ¬P 1 1 1 1 0 0 0 0 X X X X Y Y Y Y

Conjunction P ∧Q 0 0 0 0 0 1 X Y 0 X X Y 0 0 Y Y

Disjunction P∨Q 0 1 X Y 1 1 1 1 X 1 X X Y 1 X Y

Equivalence P↔Q 1 0 0 0 0 1 0 0 0 0 X 0 0 0 0 Y

Implication P→Q 1 1 X Y 0 1 X Y 0 1 X Y 0 1 X Y

Consider for example the first question 'Is it possible to get to Southampton this afternoon?' and the given answer P = 'Yes, I believe so' (=1). The negation of P would be the straightforward negative alternative ¬P = 'No, I doubt it' (=0). This would formally be described as:

F('It is possible to get to Southampton from Portsmouth this afternoon') = 1 and
F('It is NOT possible to get to Southampton from Portsmouth this afternoon') = ¬F('It is possible to get to Southampton from Portsmouth this afternoon') = 0

where F() is an evaluating function (i.e. a mapping between the statement and the four-value range). However, in the case of a possible alternative or of ignorance, negation should not affect the outcome:

F('It is possible to get to Buenos Aires from Portsmouth this afternoon') = ¬F('It is NOT possible to get to Buenos Aires from Portsmouth this afternoon') = X.

A similar rule follows for the UNKNOWN value, Y. The problem is of more significance and interest when there are two or more statements and their combination should yield a meaningful outcome, as previously stated. Referring to the Southampton question, suppose that we get different answers from two people:

– 'Is it possible to get to Southampton this afternoon?' P = 'Yes, I believe so' (=1); Q = 'I expect so' (=X)

In the case of conjunction, we expect that all people have to be fundamentally positive about the truth of the statement, therefore P∧Q = 1∧X = X, i.e. an answer like 'I expect so' is adequate: if one agrees that it is possible to get to the destination, then one will not object if the other demonstrates a positive but weaker opinion. On the contrary, in a disjunction any indication of a possible alternative would result in TRUE. In the case of equivalence, we are more interested in whether two people (or statements) agree, but when both claim ignorance the result will also be ignorance (i.e. UNKNOWN), which is the only information we have gained from their answers. A similar logic follows for the implication operation.

The truth tables described in Table 1 represent one of many alternatives (see for example Alnakari et al, 2000; Anderson 2002). When these truth tables are constructed, they need to be constructed for the specific context. For instance, if we are dealing with life-threatening critical systems, then the conjunction should be used in the strict sense, i.e. P∧Q = 1∧X = 0: an answer like 'I expect so' is not good enough to warrant a 'Yes' output. The main design approach is that classic bi-valued logic would still be valid, as it is considered to be a subset. That is, by removing the additional two values, the corresponding truth tables would be identical to the classic bi-valued logic truth tables. In the majority of cases (including critical systems) we would not wish to remove the additional values, since they signify an uncertainty. This is often a telltale sign of a conditioned response specific to particular criteria, circumstances or exceptions. Examination of truth tables based in four-valued logic can show us that the many (16) apparent alternatives actually resolve into only four which are relevant.

In order to support possible processes of human reasoning, we have to escape bi-valued logic and begin to explore higher orders of multi-valued logic allowing paradoxical reasoning to be modelled. The model of four-valued logic described here appears to cover most categories of human reasoning. The technical challenges and problems in transforming such multi-valued logic into supportive systems which can be implemented are naturally amplified. Systems supporting human reasoning may be judged against benchmark criteria such as whether they can cope with 'exclusive disjunction', i.e. the challenge to correctly combine two truth statements to yield a result of false (1 XOR 1 = 0). In many cases researchers have found, for example, that it is difficult to model and train neural networks which will meet this criterion. If four-valued logic is to provide a new domain for creating supporting software then it will be useful to maintain similar benchmark criteria for success and indeed to incorporate further peculiarities or constraints of the X and Y values. The model that we offer in this paper does produce a basis for developing software which could meet this benchmark without sacrificing any benefits of two-valued logic.
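As an illustration only (a sketch of ours in Python, not part of the original paper), the truth tables of Table 1 can be encoded directly as lookup dictionaries; the value names and the helper function below are our own choices.

```python
# Four values: '0' = FALSE, '1' = TRUE, 'X' = PARADOX, 'Y' = UNKNOWN.
VALUES = ['0', '1', 'X', 'Y']

NEG = {'0': '1', '1': '0', 'X': 'X', 'Y': 'Y'}   # negation leaves X and Y unchanged

def table(rows):
    """Build a binary operation from four strings, one row per value of P."""
    return {(p, q): rows[i][j]
            for i, p in enumerate(VALUES)
            for j, q in enumerate(VALUES)}

AND   = table(['0000', '01XY', '0XXY', '00YY'])   # conjunction, transcribed from Table 1
OR    = table(['01XY', '1111', 'X1XX', 'Y1XY'])   # disjunction
EQUIV = table(['1000', '0100', '00X0', '000Y'])   # equivalence
IMPL  = table(['11XY', '01XY', '01XY', '01XY'])   # implication

# The Southampton example: P = 'Yes, I believe so' (1), Q = 'I expect so' (X).
print(AND[('1', 'X')])   # 'X' -- a positive but weaker opinion is adequate
print(OR[('1', 'X')])    # '1' -- any positive indication suffices

# Bi-valued logic is recovered as a subset: restricting operands to {'0', '1'}
# reproduces the classical truth tables.
assert all(AND[(p, q)] == ('1' if p == q == '1' else '0')
           for p in '01' for q in '01')
```

A stricter, safety-critical variant of the kind mentioned above would simply swap in a conjunction table in which 1∧X evaluates to 0.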

4. Conclusion
The authors believe that a focus on bi-valued logic would constrain the normal exercise of human judgement, since people are capable of using multi-valued logic in a process of creating assertions. In recognizing that human reasoning supports contradictions in forming judgements, we perceive a need for any supportive system we create to extend beyond bi-valued logic. Using four-valued logic, improved support might be provided for knowledge sharing environments, since it more closely follows human practice and capability for reasoning. In this paper we have outlined an example of a model which could be used to parallel human reasoning. We offer a model of four-valued logic as a basis for possible future development of software. We have shown with examples how four-valued logic may be mapped against human reasoning. If it becomes possible to follow human reasoning in applying the proposed categories, then complex resolutions can be tolerated in technological support systems. If developers aspire to be able, in the future, to create systems which provide support for knowledge sharing environments, then we need to be able to recognize, codify and trace different categories of resolutions. These categories need to be consistent with the capability of human judgement to tolerate inconsistency.

References
Alnakari, R. and Rine, D. (2000) 'A Four-Valued Logic B(4) of E(9) for Modeling Human Communication', Proceedings of the 30th IEEE International Symposium on Multiple-Valued Logic (ISMVL 2000), p. 285.
Anderson, D. (2002) 'LM4: a classically paraconsistent logic for autonomous intelligent machines', Proceedings of the 6th World Multi-Conference on Systemics, Cybernetics and Informatics (SCI 2002), Florida.
Bateson, G. (1972) Steps to an Ecology of Mind. Chicago: University of Chicago Press.
Bednar, P. M. (2000) 'A Contextual Integration of Individual and Organisational Learning Perspectives as Part of IS Analysis', Informing Science, Vol. 3, No. 3.
Bednar, P. M., Andersen, D. and Welch, C. (2005) 'Knowledge Creation and Sharing – Complex Methods of Inquiry and Inconsistent Theory', Proceedings of ECKM 2005, Limerick, Ireland, 8-9 September.
Einstein, A. (1990) Conceptions Scientifiques, Champs. Flammarion, Paris.
Monod, E. (2004) 'Einstein, Heisenberg, Kant: Methodological Distinction and Condition of Possibilities', Information and Organisation, No. 14, pp. 105-121. Elsevier, Amsterdam.
Nonaka, I. (1991) 'The Knowledge Creating Company', Harvard Business Review, 69, Nov-Dec 1991.
Polanyi, M. (1967) The Tacit Dimension, Garden City, NY: Doubleday.
Rattner, J. (2005) Keynote Address, Intel Developers Forum Spring 2005, http://www.intel.com/technology/techresearch/idf/platform-2015-keynote.htm, accessed 28 April 2005.
Rescher, N. (1969) Many-valued Logic, McGraw-Hill, New York.
Wilson, T. D. (2002) 'The Nonsense of "Knowledge Management"', Information Research, Vol. 8, No. 1, October 2002.


Knowledge Cooperation in Online Communities: A Duality of Participation and Cultivation
Marco Bettoni, Silvio Andenmatten and Ronny Mathieu
Swiss Distance University of Applied Science, Brig, Switzerland
[email protected]
[email protected]
[email protected]
Abstract: This paper is an attempt to answer the question "How to design for engagement in community-oriented knowledge management?" In order to do this we need an approach that has its primary focus on distinguishing, balancing, connecting and negotiating between knowledge in its two fundamental dimensions: individual and social. The concept of "knowledge cooperation", which we have defined as "the participative cultivation of knowledge in a voluntary, informal social group", is our proposal for fulfilling the previously mentioned requirements. After introducing this definition of "knowledge cooperation" with its background in community-oriented knowledge management, we will explain and give reasons for its constitutive elements (participation, cultivation, knowledge, voluntary, informal, social) and their unique combination in our approach. On this basis we will then describe the two coupled learning loops (participation and cultivation) which in our conception characterise the dynamics of knowledge cooperation, and argue for the importance of looking at participation and cultivation as an interacting duality. Our main message is that the duality of participation and cultivation that constitutes our model of knowledge cooperation allows us both to better understand knowledge processes in an online community and to design active, dynamic, healthy communities where cultivating knowledge and participation in cultivating that knowledge mutually activate and sustain each other.
Keywords: Online communities, community-oriented knowledge management, participation, cultivation, knowledge cooperation, communities of practice

1. Introduction
A recent survey report on collaboration in enterprises shows that participation in online communities is growing, that technology for online communities is continuing to improve and that retention of community participants is not a significant problem (Ambrozek & Cothrel 2004). Unfortunately, despite these positive signs, one major obstacle remains: the discipline of creating and managing communities is widely perceived as poorly defined. Both experience and research show that we do not know enough about how something resembling an online community of practice (CoP) can be designed (Barab et al. 2004). Some researchers even claim that enthusiasm about CoP is well beyond empirical evidence (Schwen & Hara 2004).

In fact, many communities lack sustainability: either they fall apart soon after their initial launch or they adopt a short-term, opportunity-driven behaviour which allows them to survive in some way. In both cases, however, they are not able to generate enough energy and synergies for engaging in long-term cooperation. Moreover, their short-term thinking and opportunistic behaviour leads to uncertainty and mistrust between the members and consequently to low quality of shared work results. This is where our concept of "knowledge cooperation" comes into play, as an attempt to convert the promise of social networks and collaborative technologies into the reality of active, dynamic, healthy communities integrating learning and knowledge processes.

This paper is an attempt to contribute to the discipline of creating and managing online communities, especially those with a focus on knowledge and research, by answering the question "How to design for engagement in community-oriented knowledge management?". In order to do this we need an approach that has its primary focus on distinguishing, balancing, connecting and negotiating between knowledge in its two fundamental dimensions: individual and social.

2. What is "knowledge cooperation"?
Knowledge is bound to human action. Knowledge cooperation – the cooperation and collaboration of different domain experts with the aim of stewarding knowledge – is a living process with both tacit and explicit elements, with both individual and social components, a process that constantly changes and further develops through actions and interactions. Knowledge in such processes cannot be completely reduced to an object of managerial actions, but must be treated as a kind of organic entity, bound to persons, to interactions as well as to social contexts (Wenger et al. 2002; Bettoni & Schneider 2003; Bettoni et al. 2004).

On this background the point of view of work psychology becomes more relevant: thanks to its focus on social dynamics, the work psychological approach views knowledge management as the analysis and organization of knowledge-oriented cooperation (Clases, Dick & Wehner 2002; Wehner & Clases 2002). From this perspective one recognizes that human interactions and relationships are of greatest importance for knowledge management, and it thus appears more reasonable to design the management of organizational knowledge processes by resorting to socially oriented approaches and methods, like for instance "Communities of Practice" (Wenger et al. 2002; Huysman et al. 2003).

On this basis, our proposal for fulfilling the previously mentioned requirements is a concept of "knowledge cooperation" inspired by the CoP approach and defined as "the participative cultivation of knowledge in a voluntary, informal social group" (Bettoni 2005). The group is informal in the sense that its members meet within their organization but outside the reporting roles connected to their position in the formal, organizational hierarchy to which they belong.

According to our model, cooperating and collaborating on knowledge consists of two cross-coupled learning loops that activate and sustain one another: "cultivation of knowledge" and "participation in knowledge" (Figure 1). Each individual learning loop is defined in its own terms and is in principle autonomous, meaning that it could function alone, independently from the other. The two loops are thus not mutually exclusive. On the contrary, they must take place together; they are two intrinsic constituents of knowledge cooperation, and only their cross-coupling, represented in the diagram by the lemniscate curve (∞ – the infinity symbol), allows the creation of an interacting, resonating duality with a sufficient activity level. In this duality, what is of interest in relation to engagement in knowledge stewarding is understanding or promoting the interplay and integration of learning and knowledge processes.

Figure 1: Circular processes of knowledge cooperation

The right loop in Figure 1, cultivation of knowledge, is the circular process by which a community stewards its knowledge resources (by processes like acquiring, developing, making transparent, sharing and preserving knowledge), uses them in daily work and then feeds these experiences back into the stewarding process. The left loop in Figure 1, participation in knowledge, is the circular process by which community members build social capital (establish and take care of personal relationships, develop individual and collective identities, etc.), "invest" this social capital in stewarding the knowledge resources of their community and feed these experiences back into the socializing process. These two processes are circular because in both cases the output of one process is transformed by a second process and returns to the first as input. In this model cultivation and participation come as a pair, a dyad, a tandem: they form a unity in their duality. The three processes, or groups of knowledge processes, connected by means of the two learning loops mentioned are (Figure 1):


- Stewarding knowledge – This group of knowledge processes encompasses processes like acquiring, developing, making transparent, sharing and preserving knowledge. They are used for handing down, reproducing and renewing knowledge and experience. What should be noticed here is that these processes are not considered at a cognitive but at a coordinative-cooperative level (see the cooperation model by Wehner et al. 1998): knowledge stewarding therefore does not intervene directly in individual cognitive processes, as too easily alleged by certain critics of Knowledge Management.
- Applying knowledge – This group of knowledge processes collects what happens when knowledge resources are used in business processes. The learning loop of 'cultivation' is established if employees of the formal organization (teams, departments) informally participate at the same time also in communities of practice (Wenger et al. 2002, 18 ff). This multiple membership creates a learning loop which has its focal point in the employee: she gains experiences in her daily work within business processes and can incorporate them in the community of practice, where this knowledge is stewarded collectively and prepared for flowing back to the business processes from where it originated.
- Socializing knowledge – This group of knowledge processes collects what happens in personal and institutional relationships between the people involved in stewarding and applying knowledge. Relevant dimensions to be considered here are for example those which lead to effective knowledge sharing, like meta-knowledge, accessibility, engagement in problem-solving and safety (Cross et al. 2003). Important elements to be considered in this group are: the people involved as individual persons, their ties, their interactions (regularity, frequency and rhythm), the atmosphere, the evolution of individual and collective identities and, last but not least, spaces (physical or virtual) for meeting together. This group is very important because it allows taking into account the social aspects of stewarding knowledge, applying it and learning together.

3. Participation and cultivation as an interacting duality
In our concept of Knowledge Cooperation, the circularity of participation and cultivation and the interaction (cross-coupling) of these loops can be modelled more technically (Figure 2) as consisting of two feedback loops applied as control systems to knowledge stewarding, viewed as a performing system whose performance (stewarded knowledge) must be maintained in line with reference values (organizational performance and culture) in the presence of disturbances. As in physiological or ecological systems, feedback is here the process by which the system's inputs are altered by its output (stewarded knowledge).

But what are the reasons that make this design suitable for better understanding knowledge processes and for designing healthy communities? Our basic idea in developing this model was to focus on the issue of "engagement" as a central design feature. The question is then: how to get a lasting engagement in the community? The most common approach is to look for incentives, for motivation (Bettoni et al. 2003). This may be a useful perspective in many organisational development initiatives, but in the case of knowledge we claim (and will argue for in a future paper) that the incentives view on engagement should be extended by a complementary and at least equally important consideration of the issue of "meaning". In fact our knowledge is of course strongly related to motivation, but probably much more intimately connected with and directly influenced by our experience of meaning. More specifically, our claim is that if we want to get enough engagement for stewarding knowledge in a community of practice, then we need to:
- Better understand the human experience of meaning
- Extend our community design by a design for meaning.
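Purely as an illustration of this feedback view (a toy numerical sketch of our own, not a model proposed by the authors; all parameter values are invented), the cross-coupling can be pictured as two mutually reinforcing update rules:

```python
# Stewarded knowledge is nudged towards a reference level set by organizational
# performance and culture; participation (social capital) both drives the
# cultivation loop and grows when cultivation visibly produces results.

REFERENCE = 1.0      # target level of stewarded knowledge (arbitrary units)
GAIN = 0.3           # how strongly each loop reacts to the remaining gap
DISTURBANCE = -0.05  # e.g. knowledge becoming obsolete in every period

knowledge, participation = 0.1, 0.1
for _ in range(50):
    gap = REFERENCE - knowledge
    knowledge += GAIN * gap * participation + DISTURBANCE   # cultivation loop
    participation += GAIN * gap * knowledge                 # participation loop
    knowledge = max(knowledge, 0.0)

# Knowledge climbs towards the reference as participation accumulates; the
# constant disturbance keeps it somewhat below the target.
print(round(knowledge, 2), round(participation, 2))
```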


Figure 2: Feedback view of knowledge cooperation

A basic aspect of our engagement is that we strive to experience our actions, our practice, as meaningful; we do not simply want to get something done (a report written, an event organized, a request answered, etc.): what counts in what we do is always more than the result, it is the experience of meaning connected with that result. In the end the meaning we produce matters even more than the product or service we deliver. The kind of meaning involved here is an experience of everyday life, the experience that what we did, are doing or plan to do "makes sense" to us. But how do we operate to produce these meanings and to put them in relation to the histories of meanings of which they are part? In his investigation of this issue Wenger (1998, p. 53) introduces the notion of negotiation of meaning as "the process by which we experience the world and our engagement in it as meaningful." This process has the following characteristics:
- It is an active, dynamic, historical process
- It affects the elements which shape it
- The meaning we experience is not imposed; it is produced, but not from scratch
- The meaning we experience is not pre-existing and not simply made up
- The meaning we experience does not exist as an independent entity outside the process
- The meaning we experience exists in the process (in fieri)

Which elements are necessary for constituting a process with these characteristics? Wenger proposes a model which distinguishes two constituent processes: 1) a process embodied in human operators, called participation; 2) a process embodied in an artificial operand (artefact), called reification. The human operators contribute to the negotiation of meaning by their histories of interactions in the practices of a community. The artificial operand contributes to the negotiation of meaning by reflecting aspects of the practice of the community (histories of transformations). Thus the negotiation of meaning takes place as a convergence of two histories, that of the human operators and that of the artificial operands.

In Wenger's model participation is conceived as: a) the social experience of living in the world in terms of membership in social communities; b) active involvement in social enterprises. In the same model reification is seen as the process of giving form to our understandings, experiences and practice by producing objects which express them. Writing down a law, producing a tool or even putting a book back on a shelf are examples of this process. Participation and reification are both distinct and complementary. They cannot be considered in isolation; they come as a pair. They form a unity in their duality (Wenger 1998, p. 62).

According to this model, our experience of meaning is viewed as a duality, as an interplay of participation and reification, with the following implications: a) when you understand one, you should also understand the other; b) when one is given, you should wonder where the other is; c) when you enable one, you should also enable the other; d) one comes about through the other, but they cannot replace each other.

By taking seriously Wenger's theory and appreciating its potential impact on knowledge management, we can now deduce the following main guideline for our design for meaning: if meaning as a constituent of a social theory of learning should be viewed as a duality of participation and reification, then engagement in stewarding knowledge should be implemented as a duality of two corresponding processes, in our case participation in knowledge and cultivation of knowledge. To conceive and implement participation and cultivation as a duality means that they should take place together; they should both require and enable each other. There should not be any cultivation without participation and no participation without cultivation. Participation and cultivation should imply each other. Increasing the level of cultivation should not substitute an equal amount of participation; on the contrary, it should tend to require an increase of participation. Cultivation of knowledge should always rest on participation in knowledge: applying knowledge requires a history of participation as a context for its interpretation. In turn, participation in knowledge should also rest on cultivation, because it always involves words, concepts and artefacts that allow it to proceed. Finally, the processes of participation and cultivation should not be considered just as a distinction between people (human operators) and the explicit knowledge (artificial operands, things) that embodies them. In terms of meaning, people and things cannot be defined independently of each other. On the one hand our sense of ourselves includes the objects of our practice; on the other hand what these objects are depends on the people that shape them through their experiences.

4. Participation and cultivation: An experiment
At the Swiss Distance University of Applied Sciences (FFHS) we are experimenting with this model of Knowledge Cooperation in the realisation of a virtual research networking space called "CoRe Square", implemented in MOODLE (Bettoni et al. 2006). This networking space for research activities is a central issue in an ongoing project that has as its goal the integration of teaching and research by means of the design, launch and cultivation of an online "community of research" (acronym: CoRe) for distributed research cooperation by 3 types of research partners: lecturers, students and research staff.

In the current version the CoRe Square space is divided into the following seven areas that correspond to aspects of community life: Individual Hut, Community Circle, Domain Club, Practice Lab, Connections Room, Leadership Lounge and Technology Corner. Following the design for meaning guideline presented above, we have designed the inner structure of all these seven activity spaces as one or more pairs of tools, each of which should form a unity in its duality. In terms of technology each pair is a dyad constituted by a forum tool and a wiki tool (Figure 3). The forum is a tool for enabling participation in knowledge: creating new discussion threads, reading posts and replying to them supports participation as the social experience of being connected with others and being actively involved in a collective enterprise (stewarding research knowledge). The wiki is a tool for enabling cultivation of knowledge: it preserves the results of conversations (new ideas, insights, best practices, lessons learned, definitions, procedures, etc.) by organizing them in a structured way and independently of time.


Figure 3: Dyad tool of knowledge cooperation

Following this design, in the current version of CoRe Square the seven activity spaces contain, for example, the following dyads:
a) Individual Hut: each member has their own forum ("personal blog") and their own wiki;
b) Community Circle: a forum for talking about experiences with the platform and a wiki for making a systematic overview of these experiences;
c) Domain Club: a wiki for collecting an overview of research methods and a forum for talking about individual methods;
d) Practice Lab: each project has its own forum for talking about project steps and issues and an associated wiki for a systematic overview of project work and outcomes;
e) Leadership Lounge: a wiki where members can sign up for tasks and a forum for talking about engagement for the community.
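To make this pairing concrete, here is a rough sketch of our own (the area names come from the list above, but the data structure and topic strings are illustrative, not the actual MOODLE configuration of CoRe Square):

```python
from dataclasses import dataclass

@dataclass
class Dyad:
    forum_topic: str   # conversation: participation in knowledge
    wiki_topic: str    # structured record: cultivation of knowledge

core_square = {
    "Individual Hut":    Dyad("personal blog", "personal wiki"),
    "Community Circle":  Dyad("experiences with the platform",
                              "systematic overview of these experiences"),
    "Domain Club":       Dyad("talking about individual methods",
                              "overview of research methods"),
    "Practice Lab":      Dyad("project steps and issues",
                              "overview of project work and outcomes"),
    "Leadership Lounge": Dyad("engagement for the community",
                              "sign-up list for tasks"),
}

# Every area couples a forum with a wiki, mirroring the participation/cultivation duality.
for area, dyad in core_square.items():
    print(f"{area}: forum on '{dyad.forum_topic}' <-> wiki for '{dyad.wiki_topic}'")
```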

Figure 4: Practice lab area

As an example of an activity area, the "Practice Lab" is shown in Figure 4. Just below the title bar there is a file named "… about Practice Lab". It explains the primary activity in this area. Further explanations are given in three additional "about" files below it. The Practice Lab is an area for research practices, i.e. working in research projects, writing articles and giving presentations at conferences. Each research project has its own forum for conversations about project steps and issues and an associated wiki for a systematic overview of conversation results, project work and research outcomes. With many projects the topic area would become very long and difficult to navigate. For this reason we have assigned an individual project area (a MOODLE topic) to each project and collected all project names and short descriptions in a table from where a link leads to the associated project area. Below the file with the project table the Project Lab gives access to 4 dyads: Cases, Stories, Publications and Conferences.

5. Conclusion
CoRe Square will be launched during a future search event and opened to all members of CoRe (a community of about 250-300 persons) in June 2006. By then many Forum & Wiki dyads – our fundamental design unit – will be ready for use and further development by the users. At this point we plan to start an empirical investigation (formative evaluation) for assessing the suitability of Knowledge Cooperation and of our dyad tool as a way of fostering and maintaining engagement in community-oriented knowledge management.

References
Ambrozek, J. & Cothrel, J. (2004). Online Communities in Business: Past Progress, Future Directions. http://www.sageway.com/ocib.html.
Barab, S.A., Kling, R. & Gray, J.H. (2004). Designing for Virtual Communities in the Service of Learning. Cambridge University Press, Cambridge, UK.
Bettoni, M., Andenmatten, S. & Mathieu, R. (2006). Research Networking with "CoRe Square". In: Proc. of MApEC 2006, Multimedia Applications in Education Conference, Graz, Austria, Sept. 4-6.
Bettoni, M. & Schneider, S. (2003). The Essence of Knowledge Management: A Constructivist Approach. In: Proc. of the Fifth Intern. Conf. on Enterprise Information Systems, ICEIS 2003, Angers, France, April 22-26, Vol. 2, 191-196.
Bettoni, M. (2005). Wissenskooperation: Die Zukunft des Wissensmanagements. Lernende Organisation, Nr. 25, Mai/Juni.
Bettoni, M., Clases, C. & Wehner, T. (2004). "Communities of Practice" as a Way to a More Human-oriented Knowledge Management. International Conference on HRM in a Knowledge-based Economy, 2.-4.6.2004, Ljubljana, Slovenia.
Bettoni, M., Braun, A. & Weber, W. (2003). "What motivates cooperation and sharing in communities of practice?". In: F. McGrath & D. Remenyi (eds.), Proc. of the 4th Europ. Conference on Knowledge Management (ECKM 2003), Oriel College, Oxford University, UK, Sept. 2003, pp 67-72.
Clases, C., Dick, M. & Wehner, T. (2002). Vom Wissensmanagement zur Analyse und Gestaltung wissensorientierter Kooperation. Journal Arbeit, Herbst 2002.
Cross, R., Parker, A., Prusak, L. & Borgatti, S.P. (2003). Knowing What We Know: Supporting Knowledge Creation and Sharing in Social Networks. In: Cross, R., Parker, A. & Sasson, L. (eds.), Networks in the Knowledge Economy. Oxford University Press, Oxford.
Huysman, M., Wenger, E. & Wulf, V. (eds.) (2003). Communities and Technologies. Proc. 1st Int. Conf. on Communities and Technologies. Dordrecht: Kluwer.
Schwen, T.M. & Hara, N. (2004). Community of Practice: A Metaphor for Online Design? In: Barab, S.A., Kling, R. & Gray, J.H. (2004).
Wehner, T., Clases, C., Endres, E. & Raeithel, A. (1998). Zwischenbetriebliche Kooperation: Zusammenarbeit als Ereignis und Prozess. In: E. Spiess (Hrsg.), Formen der Kooperation (S. 95-124). Göttingen: Verlag für Angewandte Psychologie.
Wehner, T. & Clases, C. (2002). Knowledge Oriented Cooperation: A Work Psychological Approach to Knowledge Management. In: M. Fischer & N. Boreham, Work Process Knowledge and Work-related Learning in Europe. Cedefop, Thessaloniki.
Wenger, E. (1998). Communities of Practice: Learning, Meaning, and Identity. Cambridge University Press, Cambridge, UK.
Wenger, E., McDermott, R. & Snyder, W. (2002). Cultivating Communities of Practice: A Guide to Managing Knowledge. Harvard Business School Press, Cambridge, MA.


Alternative Accounting to Manage Intellectual Capital
György Boda and Peter Szlávik
Corvinus University, Budapest, Hungary
[email protected]
[email protected]
Abstract: The connection of intangible assets to cash flow generation is a major management issue. In addition, the majority of investment is made in intangible capital items. This paper intends to present an approach that supports the continuous measurement of intangible assets and allows an extended value-based management framework that considers both tangible and intangible elements. The value of a company includes significant elements that are not described by the generally accepted accounting methods, such as relationship capital, organizational capital, and the knowledge and competence of employees. These elements can be presented in an expanded balance sheet. We face challenges when we try to quantify the elements for a concrete date or when we intend to capture the exact changes throughout a certain period. Based on generally accepted accounting standards, investment in intellectual assets is mostly handled as cost. This cost is accounted against the revenue of the period, therefore decreasing the period's profit. This approach does not allow the carrying forward of any cost element to future periods, even in the case of a long-term cost. If we reconsider our cost management framework and manage the costs that serve the development or replacement of intellectual capital items as capital expenditures in intellectual capital items, and not as the period's expense, we can build up a ground-up approach to handling intellectual capital items. This approach results in the compilation of two balance sheets and profit & loss accounts that are alternative versions of each other. The visualization of intellectual assets and intellectual capital might significantly change the decision making process and the general thinking of the management. The authors suggest further research in order to support the development of the conceptual framework and the operational rules of practice.
Keywords: Intellectual capital, value-based management, measurement, alternative accounting, intangible balance sheet

1. Introduction
The value of a company includes significant elements that are not described by the generally accepted accounting methods. The customer and supplier relationships, the knowledge related to the organization and the knowledge and competence owned by employees are such elements. Current accounting practices – in general – do not allow for the visualization of these assets (often referred to as intellectual assets or intellectual capital¹) in the company's balance sheet. There is one exception, however: in the case of a company acquisition the buyer is allowed to represent goodwill in its accounts when the book value of the purchased entity is below the purchase price.

The importance of intellectual capital in the value of a company is increasing rapidly. Due to the fact that accounting standards do not support the reporting of these assets, the task of management teams is getting even more complex, because it is extremely difficult to manage something that you cannot properly visualize. Our intention is to provide guidelines and support for the entities that intend to manage these elements by integrating related management information into existing reporting frameworks. By doing this we can help these companies operate more efficiently.

2. The shift among tangible and intangible assets
1. Fewer and fewer companies have operations primarily involving physical assets. Instead, most companies are service providers.
2. Among production factors there is a remarkable shift to the intangible (i.e. intellectual) elements.

¹ Intellectual capital and intellectual assets refer to the same resources and can therefore be used as synonyms, although the "intellectual asset" label refers to the asset side of the balance sheet while "intellectual capital" refers to the ownership of these assets. Throughout the text we use both terms as synonyms.


The above statements are true in most industry sectors worldwide. Research shows that on average 75% of a company's value is not described on its balance sheet². Investigation and research carried out by the authors among Hungarian entities resulted in similar findings. Companies taking action in this field are not limited to certain industry segments. In general, those entities that have a higher "intellectual capital index"³ are more open to putting a larger emphasis on these assets.

There are several reasons why we should concentrate on this process. In general, the value of a company is significantly higher than its book value; therefore we cannot neglect answering the question why this occurs. The increasing company value – in most cases – does not mean that the total book value of assets is growing at the same rhythm; moreover, the proportion of the total book value of assets to company value is getting smaller as the company value grows. In such a situation we should be able to predict the resulting increase in company value. Does the profit making capability of the physical assets grow with an increased marginal productivity, or are there hidden assets behind the ones presented in the balance sheet?

The book value only represents the value of the physical assets and the value of certain – but limited – intangibles (e.g. software and patents). The market value of a company consists of other elements, such as:
- Relationship capital (external structure related value)
- Organizational capital (internal structure related value)
- Knowledge and competence of employees (employee related value)

Relationship capital includes customer, supplier and other relationships. In addition it represents references, public relationship value and the image of the company (and its products). These elements could represent a significant proportion of the company's value. Yet only small portions of such elements (such as patents) are allowed to be indicated in the company accounts.

Organizational capital includes standardized and customized processes, information and administration systems/frameworks and company culture. These elements differentiate the entity from similar organizations (besides relationship capital and individual employees) and allow the organization to be unique in its own way.

Knowledge and competence of employees describes the value assigned to individual employees. This refers to the capability of employees to create material or immaterial assets and properties. These elements are strongly related to the manpower of the company (or are very difficult to harvest without it); therefore we could define them as knowledge capital or intellectual capital.

Based on the previously described factors, the balance sheet describing the capital of a company should be expanded by these newly introduced elements. The expanded balance sheet is described in Figure 1.

² Stefano Zambon: Intellectual assets and value creation: Exploring the "black link"; International Policy Conference, Ferrara, Italy, 20-22 October 2005.
³ Intellectual capital index = (Company Value – Book value of assets) / Company value
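As a quick worked illustration of footnote 3 (our own hypothetical numbers, not figures from the paper): a company with a market value of 400 and a book value of assets of 100 would have

$$\text{intellectual capital index} = \frac{400 - 100}{400} = 0.75,$$

i.e. 75% of its value would be invisible to the conventional balance sheet, in line with the average proportion cited above.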


Figure 1: Structure of expanded balance sheet

3. How can we identify and quantify the intellectual capital of an entity?
Theoretically we could easily identify the elements of intellectual capital. The theoretical identification does not need exact data, and the components will be the same for every company. However, we face challenges when we try to quantify the elements for a concrete date or when we intend to capture the exact changes throughout a certain period. The simplest approach to capturing the amount of intellectual capital is to first determine:
A. the market value of the company, or
B. the quantified strategic plan of the company.

In the latter case we could identify the discounted free cash flow of the entity based on the strategic plan. This enables us to measure the value of the company; therefore version B ultimately equals version A. In the case of listed companies the value of the entity is always available (although not always reliable, for various reasons) – version A – while for unlisted companies it is available only on a case-by-case basis (i.e. when there is an offer from a potential investor). Should there be any problem with the availability of market value, we can always rely on version B to identify the required information.

The basic – and a bit simplified – formula for identifying the total amount of intellectual capital is to subtract the book value of total assets from the market value of the company. Having this data available – unfortunately – does not provide information about the elements of intellectual capital. Since the value of the company comes from both the physical assets and the intellectual assets of the entity, proper value based management requires the management of the intellectual assets, too. The balanced way of managing the value of a company is accomplished by concentrating on both crucial elements.
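A minimal sketch of this simplified formula (our own illustration; the cash flows, discount rate and book value below are invented): estimate the enterprise value as the discounted free cash flow of the strategic plan (version B) and subtract the book value of total assets.

```python
def dcf_enterprise_value(free_cash_flows, discount_rate):
    """Present value of the planned yearly free cash flows (no terminal-value refinements)."""
    return sum(fcf / (1 + discount_rate) ** year
               for year, fcf in enumerate(free_cash_flows, start=1))

planned_fcf = [120, 130, 140, 150, 160]   # strategic plan, in currency units
book_value_of_assets = 250

enterprise_value = dcf_enterprise_value(planned_fcf, discount_rate=0.10)
intellectual_capital = enterprise_value - book_value_of_assets   # overall value only

print(round(enterprise_value, 1), round(intellectual_capital, 1))
```

As the text stresses, this yields only an overall figure; it says nothing yet about how that value splits into relationship, organizational and employee-related capital.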


When managing material assets, we have a lot of information available from the conventional management reports, where we can see the physical assets, the working capital and other elements in detail. The process of identifying intellectual capital that we investigated above only gives us an overall value. This identification process does not give us an understanding of, and therefore the ability to manage, the individual components of intellectual capital. We need to find a proper way to capture this information.

4. Money spent on intellectual capital – Is it cost or investment?
Based on generally accepted accounting standards, investment in intellectual assets is mostly handled as cost. This cost is accounted against the revenue of the period, therefore decreasing the period's profit. This approach does not allow the carrying forward of any cost element to future periods, even in the case of a long-term cost (like the tuition fee of an internal training course). There is one exception that we have already mentioned: in the case of a company acquisition the buyer is allowed to represent goodwill in its accounts if the book value of the purchased entity is below the purchase price.

The generally accepted accounting standards require having the chart of accounts as described in Figure 2. The chart:
- Includes those accounts that describe the book value of the entity's assets, such as tangible assets, current assets, cash and bank, long term financial assets and prepayments
- Includes an intangible account that summarizes those assets that are intangible but allowed to be presented in the balance sheet (e.g. patents, software)
- Includes the most important liabilities that are necessary to determine the company's shareholders' equity
- Describes the creation of assets and the patterns of change; it also describes the balance between assets and liabilities.

These accounting rules are extremely important for our analysis, because the axioms behind this logical framework are the points that we would like to modify slightly in order to have an alternative (updated) accounting structure. The cautious, conservative approach of generally accepted accounting methods is understandable. It only supports those quantifying methods that are fully defendable and reliable, and therefore it provides stability for the measurement process. On the contrary, the measurement of intellectual capital is highly uncertain. This is true, but we believe that this should not mean that we exclude any element just because there is uncertainty assigned to it.

If we declare any intellectual capital related expenditure as a cost of the current period, we do not allow for the visualization of certain assets. As a result, the total assets we present in the company's books will be far below the real value of the company. As intellectual capital is growing at an accelerating rate, the proportion of company value presented in the books is getting smaller and smaller, and value based management is becoming more and more difficult due to the fact that the necessary information for decision making is not visualized, and therefore it cannot be obtained.

5. Alternative chart of accounts
We can capture the value of the company more precisely if we do not distinguish between tangible capital expenditures, intangible capital expenditures that are allowed to be accounted for as assets by accounting standards, and other intangible capital expenditures not handled by accounting standards. Moreover, the volume of the latter group is more significant than that of conventional intangibles. Resources invested in intellectual capital – in our view – are basically capital expenditures and not expenses. Without visualizing and controlling this process we cannot manage our company properly. This approach allows us to better measure the real extent of the profit of every period.


Figure 2: Chart of accounts according to the generally accepted accounting standards

If we reconsider our cost management framework and manage the costs that serve the development or replacement of intellectual capital items as capital expenditures in intellectual capital items, and not as the period's expense, we can build up a ground-up approach to handling intellectual capital items. Of course this requires a significantly modified cost accounting framework. In this framework we should decide for each and every individual cost item which asset category it belongs to, and what proportion of it can be capitalized (based on its efficiency). Besides this, we should determine the depreciation rule and the impairment measurement of each group of assets, too.

This method expands the generally used chart of accounts with alternative accounts (see Figure 3). Figure 3 does not include accounts that are not important for visualizing the creation of intellectual capital; these are not influenced by the newly introduced elements. Neglecting these accounts allows us to simplify the chart, but of course these accounts remain in use. The main difference between the two charts of accounts is that we separated the first expense flow in Figure 2 into two relevant pieces. In Figure 3 we separated those expenses that refer to the creation of intellectual assets (expense flow 1b) and determined the capitalization of these expenses.

This method produces fewer expenses in our profit and loss of the first period compared to the conventional approach, but – in parallel with this – additional depreciation cost should be presented relating to the new categories of assets in each period. This new cost management approach results in profit reallocation among different periods. The total effect of the profit reallocation is zero in the long term (although the costs are redefined as capital expenditures, the newly created assets should be depreciated over the time horizon); therefore, compared to the conventional approach, this approach creates a different foundation for management decisions. This approach could be helpful for management if they see the conventional reports in parallel with the extended reporting.

Companies have to calculate taxes based on their conventional accounting statements; therefore the modification of the chart of accounts should not result in neglecting tax accounting information. In the new structure we should be able to visualize both the conventional and the extended accounting information. The extended accounting approach not only modifies the asset side of the balance sheet but also requires an alternative calculation of shareholders' equity (see Figure 4).
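A small numerical sketch of this reallocation (our own hypothetical example; the amount, the four-year life and straight-line depreciation are assumptions, not rules from the paper):

```python
# A 100-unit expenditure on an intellectual capital item: expensed at once under
# conventional accounting vs. capitalized and depreciated straight-line over 4 years.
COST, YEARS = 100.0, 4

conventional_profit_effect = [-COST] + [0.0] * (YEARS - 1)   # hits year 1 only
alternative_profit_effect = [-COST / YEARS] * YEARS          # yearly depreciation

# Profit is only reallocated between periods; the cumulative effect is identical.
assert sum(conventional_profit_effect) == sum(alternative_profit_effect) == -COST

for year, (conv, alt) in enumerate(zip(conventional_profit_effect,
                                       alternative_profit_effect), start=1):
    print(f"year {year}: conventional {conv:+.1f}, alternative {alt:+.1f}")
```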

Figure 3: Chart of accounts expanded by alternative accounts

Shareholders' equity and profit/loss for the year can be calculated in an alternative way:
- With the help of the wide arrows we can obtain the company's shareholders' equity and profit/loss for the period according to accounting standards. In this case we apply the cost items of the accounting standards, which measure the company's shareholders' equity and the change of shareholders' equity.
- On the side of the dotted arrows we determine the company's intellectual assets and its shareholders' equity and profit/loss for the period as influenced by its intellectual assets. This method results in different values from those calculated based on the conventional accounting standards.


Figure 4: Principle of alternative accounting that supports the management of intellectual capital

This approach results in the compilation of two balance sheets and profit & loss accounts that are alternative versions of each other.
- The balance sheet compiled by accounting standards includes assets and liabilities. Assets consist of intangible assets, tangible assets, long-term financial assets, cash and bank, other current assets and prepayments. Liabilities consist of visible shareholders' equity, provisions, short-term and long-term liabilities and accruals. The balance between the two sides is kept stable.
- The balance sheet that allows us to support the measurement of the company's enterprise value includes additional intellectual assets, such as internal structure related, external structure related and individuals' competence related elements. These additional intellectual assets are balanced by the invisible shareholders' equity on the other side of the balance sheet.

Management that is used to relying solely on conventional accounting information might be afraid of giving up this comfortable framework. The introduction of the extended elements, however, is not contrary to the classic way of reporting; rather, it can provide useful additional information that might be critical for certain management decisions. Besides, the calculation of cash flow related information remains exactly the same in both approaches.

As with conventional groups of assets, the efficiency and obsolescence of intellectual assets is a crucial topic as well. The method of intellectual asset evaluation should be the same as the one used in the case of tangible assets according to IFRS (IAS 36). At the end of each period we have to measure the actual value of our intellectual assets (item by item) and decide upon their impairment loss.
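To picture the expanded balance sheet, here is a rough sketch of our own (all amounts are invented; the grouping follows the two bullet points above, not any prescribed format):

```python
# The additional intellectual assets are balanced by "invisible" shareholders' equity,
# so the expanded balance sheet still balances.
visible_assets = {"intangible": 20, "tangible": 150, "long_term_financial": 30,
                  "cash_and_bank": 25, "other_current_and_prepayments": 25}
intellectual_assets = {"external_structure": 60,    # relationship capital
                       "internal_structure": 45,    # organizational capital
                       "individual_competence": 70} # employees' knowledge and competence

visible_liabilities = {"visible_equity": 120, "provisions": 10,
                       "liabilities": 100, "accruals": 20}
invisible_equity = sum(intellectual_assets.values())   # balancing item

total_assets = sum(visible_assets.values()) + sum(intellectual_assets.values())
total_liabilities_and_equity = sum(visible_liabilities.values()) + invisible_equity
assert total_assets == total_liabilities_and_equity    # 425 == 425 with these numbers
```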

6. Controls related to intellectual capital
The visualization of intellectual capital is not yet a settled procedure; therefore it is difficult to provide exact guidance for the decision-making dilemmas assigned to the topic. Several examples show that if something can be misused, then it will be misused. Accounting standards disallow the application of alternative solutions because they might introduce uncertainties into accounting measures. The figures of one of the largest Hungarian pharmaceutical companies (Figure 5) support these dilemmas.

[Chart: Book value; DCF enterprise value by 10% discount rate & 6% valorisation rate; Market price of assets based on share prices. Values in THUF, for the years 1997-2005.]

Figure 6: Visible and invisible capital of Richter Gedeon Ltd.

Estimations of enterprise value are very sensitive to prospective expectations. This is illustrated by the enterprise value estimated on the basis of share prices. The stability of the enterprise value measured by discounted cash flow (DCF) is deceptive; this is also sensitive to the applied discount rate and weighted average cost of capital (WACC). The growth of book value is significant and stable. The problem is that the book value is not in the neighbourhood of the enterprise value: it is less than half of the company's enterprise value. An accounting approach that continuously undervalues the company's enterprise value to such an extent could be called into question by this market data.

When measuring the company's intellectual capital, the management intends to gather information for its own purposes, thus it does not want to give short weight and cheat itself. Of course, mistakes are always possible. The situation is similar to planning: the planning process is the real value for the management, and not the plan itself. Translating this to company valuation: the base value is the process of company valuation (and the related value based management) and not the enterprise value itself. The good planner knows that planning is indispensable despite the fact that a plan is often built on some level of uncertainty. Risk management and sensitivity analysis should handle uncertainties during the planning process. The person who deals with the measurement of company value has to do something similar: he has to identify the sensitive points of the adopted procedure and work out methods for decreasing the volume of sensitivity. Uncertainties can be managed in different ways:
- The first way can be the methodology of discounted cash flow (DCF) based company valuation. This method calculates enterprise value – the total value of assets operated by the company⁵ – based on the discounted cash flows produced by the company in the foreseeable future. In the case of listed companies this method can be complemented by the share price based company valuation method.
- The other way can be the alternative (expanded) accounting approach presented above.

Alternative accounting could be an important control instrument. It is possible that if we capitalize all types of intellectual capital expenditures which are not allowed to be accounted for as assets by accounting standards, we might get a higher intellectual capital value than the difference between the DCF-based enterprise value and the tangible assets would allow. In such a case the intellectual assets might be under-managed or not efficient, and therefore their value is not appropriate; thus impairment should be applied. However, if the intellectual capital calculated during the bottom-up approach is smaller than the difference between the DCF-based enterprise value and the tangible assets, we have another problem that requires action. In this case we might not have identified some items of intellectual capital, which might lead to the under-management of these assets without proper management focus.

There are some other instruments – scorecards – that support the managing of uncertainties. The Skandia Navigator⁶, the Intangible Asset Monitor by Sveiby⁷ and the Balanced Scorecard⁸ all try to grasp the efficiency of intangible assets from the point of view of the financial result of the entity. During the identification and quantification process of the company's intellectual capital we face significantly larger uncertainties than during the process of identifying the visible (tangible) capital based on the physical assets. In order to support these efforts we need to have scorecards and efficiency indexes. These indexes monitor the company's invisible capital (although they do not measure it). Basically they all operate based on four viewpoints: financial, (customer/client) relationship, organizational and human. The general rule is that if the value of capitalized intellectual assets exceeds the intellectual capital measured by DCF, the values of the scorecard indexes will be unfavorable. This might help a lot in the management of the elements of intellectual capital.

The use of scorecards might be a cost effective way of monitoring intellectual capital elements. Overall, the annual itemized review (audit) of intellectual assets cannot be avoided by the simple use of scorecards. The detailed monitoring of intellectual capital will require a lot of resources (both money and time). A profit-oriented company will undertake these additional tasks only if the additional costs and efforts increase the profit and make cost management more efficient. This method provides a basis for the consistent valuation of a complex asset base that is fundamental for an effective cost management framework. Management can control a company's costs only if it is fully aware of the characteristics of the company's assets (including cost of operation and profit generating capability).

Due to the fact that the majority of the assets are intellectual, we cannot have an effective cost control environment without controlling the management of intellectual assets. The new approach gives additional responsibility to the management; thereafter they need to operate based on extended return indexes (such as "ROCA"⁹) instead of the old indexes (like ROA¹⁰). The former deals with the modified profit figure and the expanded asset base compared to the latter one. This might significantly change the decision making process and the general thinking of the management.
Due to the fact that the majority of the assets are intellectual, we cannot have an effective cost control environment without having controlling the management of intellectual assets. The new approach gives additional responsibility to the management, thereafter they need to operate based on extended return indexes (such as “ROCA” 9 ) instead of the old indexes (like ROA 10 ). The former deals with the modified profit figure and the expanded asset base compared to the latter one. This might significantly change the decision making process and the general thinking of the management. 5



This new approach does not mean that we can increase our intellectual capital endlessly, without limits. Capital expenditure on intellectual capital that is increased routinely and mechanically implies a deteriorating capitalization index (i.e. a worsening marginal utility ratio): only a decreasing share of such expenditure actually adds to the relevant intellectual capital items and to enterprise value, while the rest is depreciated (or impaired) during the period and finally becomes a period cost. The identification of intellectual capital items does not make the classical efforts to reduce costs useless; capitalising expenses as intellectual assets does not allow us to avoid cost-efficiency measures either. In parallel with the calculation of intellectual capital items we have to establish a continuous examination of the efficiency of these new assets. The effect of an intellectual capital structure controlled for efficiency can be the same as the effect of a well thought out cost reduction and cost control procedure.

7. Further steps

Based on the research carried out so far, it is clear that the introduction of alternative charts of accounts and the quantification of intellectual property elements would cause debates among financial experts and academics. The level of uncertainty assigned to the valuation of individual intellectual property items, the depreciation and impairment process related to these elements, and the inclusion of this approach in everyday management decisions are areas where further investigation is required. We would like to open a debate that requires the involvement of both practising management accountants and academic researchers, with the aim of identifying those elements of the approach that need further research. By developing this conceptual framework we expect to work out a general approach for the introduction and continuous use of an alternative chart of accounts.

References
Damodaran, Aswath (2001) The Dark Side of Valuation, Prentice Hall PTR, Upper Saddle River.
Starovic, Danka and Marr, Bernard, CIMA (Chartered Institute of Management Accountants) / Cranfield School of Management, 28 pages.
Davenport, Thomas H. and Probst, Gilbert J. B. (2002) Knowledge Management Case Book: Siemens Best Practices, Publicis Corporate Publishing / John Wiley & Sons.
Deakin, Edward B. and Maher, Michael W. (1987) Cost Accounting, IRWIN, Homewood, Illinois, 1036 pages.
Edvinsson, Leif and Malone, Michael S. (1997) Intellectual Capital, Harper Collins, New York.
Hermanson, Roger H., Edwards, James Don and Maher, Michael W. (1992) Accounting Principles, IRWIN, Homewood, IL / Boston, MA, 1266 pages.
International Accounting Standards 2003, IAS 36, 38.
Kaplan, Robert S. and Atkinson, Anthony A.: Advanced Management Accounting.
Kaplan, Robert S. and Norton, David P. (1996) The Balanced Scorecard, Harvard Business School Press, Boston, Massachusetts.
Kaplan, Robert S. and Norton, David P. (1992) The Balanced Scorecard - Measures That Drive Performance, Harvard Business Review, January-February 1992.
Kaplan, Robert S. and Norton, David P. (2004) Measuring the Strategic Readiness of Intangible Assets, Harvard Business Review, February 2004.
Mills, Roger W. (1998) The Dynamics of Shareholder Value - The Principles and Practice of Strategic Value Analysis, Mars Business Associates Ltd., 256 pages.
Standfield, Ken (2002) Intangible Management, Academic Press, Boston.
Stewart, Thomas A. (2002) The Wealth of Knowledge, Nicholas Brealey Publishing, London.
Sveiby, Karl Erik (1997) The New Organizational Wealth, Berrett-Koehler Publishers, Inc., San Francisco.
Sveiby, Karl Erik (2003) A Knowledge-based Theory of the Firm to Guide Strategy Formulation, paper presented at the ANZAM Conference, Macquarie University, Sydney, February 2003.
Copeland, Tom, Koller, Tim and Murrin, Jack (1995) Measuring and Managing the Value of Companies, McKinsey & Company, Inc. (published in Hungarian by PANEM, 550 pages).
Volkart, Rudolf (1998) Financial Management - A Basic Framework for Corporate Finance, Versus Verlag AG, Zürich, 96 pages.


Systematic Knowledge Organisation for Marketing Research with Assisted Search Employing Domain Models and Integrated Document Storage
Karsten Böhm1, Martin Delp1 and Wolf Engelbach2
1 FH KufsteinTirol – University of Applied Sciences, Kufstein, Austria
2 Fraunhofer Institute for Industrial Engineering IAO, Stuttgart, Germany
[email protected]
[email protected]
[email protected]
Abstract: A new approach is described that combines the advantages of search engines accessible on the Internet and from professional information providers with a flexible organisation system for documents and conceptual structures obtained from research in a specific application domain. It aims at supporting systematic research activities that go beyond a single-step ad hoc retrieval and that require a systematic search strategy, which includes iterative query refinement and the organisation of search results and documents. A domain specific information model is used to organise all items (documents and concepts) according to their domain and task specific relevance. During the research this domain model is used to assist the retrieval process in the research activity. The system is targeted at SMEs and is intended to support systematic research in the domain of marketing and internationalisation for the exploration of new markets for a company.
Keywords: Assisted search, domain models, knowledge organisation, evaluation of knowledge management systems, marketing research

1. Introduction

Systematic knowledge organisation is becoming more important as the amount of information continues to increase rapidly and is relatively easy to access, compared to the possibilities that existed some years ago. A number of powerful search engines are available for searching Internet and intranet resources, and they provide the expert with powerful means to retrieve the needed information in only a few iterative searches. Still, finding the appropriate results will be more difficult for users with less experience in the domain in question. Thus, assistance during the individual search steps would be desirable in order to query appropriate sources with the right questions (assisted search). In order to be able to trace the often iterative searches, a way of documenting the individual research steps would be useful, either to communicate the course of a search process to persons not involved in the research (e.g. clients of a consultancy company) or to document and archive the search activities in order to reuse them later or to share them with colleagues. After the retrieval process itself it is usually necessary to store the relevant information in order to capture the current state of the research for later reporting, and to organise the information in some task-specific way so that it forms a dossier holding all the information relevant to the research carried out. Finally, it should be possible to document the knowledge gained from the research in a more abstract way, on a conceptual and system independent level, which should in turn be usable to organise the information pieces accordingly. Especially small and medium-sized companies (SMEs) need to look for information without investing too many resources in terms of time and money, in particular without employing external consultants. As valuable information about external business factors is readily available on the Web, what is needed is just to explore the web resources properly. On the other hand, the expertise of SMEs in using Internet tools is rather limited, especially if the domain being researched is only partly known. Therefore, there is a tremendous need to provide tools that would simplify the Internet exploration process as a foundation for decision making in subsequent processes such as an internationalisation project.


For this purpose, we have developed a software prototype within the project AMI-SME (Analysis of Marketing Information for Small And Medium-Sized Enterprises) that combines the advantages of existing Internet search engines with modern text analysis functionalities and an intelligent ontology-based storage system for documents and knowledge items. The following two main objectives are addressed:
 The solution aims at supporting systematic research activities that go beyond a single-step ad hoc retrieval and that require a systematic search strategy, which includes iterative query refinement and the organisation of search results and documents.
 A domain specific information model is employed to organise all items (documents and concepts) according to their domain and task specific relevance. During the research this domain model is used to assist the retrieval process.

A systematic research consists of a number of iterative retrieval steps, which have to be carried out in a specific order with a dedicated task specific purpose. The results need to be filtered, sorted and stored for further processing or for documentation purposes. Current search capabilities do not support such a multi-step research activity, nor do they support the organisation of the results in an application or domain dependent structure. The paper is structured as follows: after briefly introducing the business case addressed in this project, we explain our methodology of using ontologies as conceptual structures to support the two main objectives of the IT solution: assisted search and organisation of the research findings in such a conceptual structure. The following section highlights the technical architecture and illustrates core functionalities using different aspects of the user interface. In section 4 we outline the evaluation methodology for the IT solution developed in the AMI-SME project, in section 5 we discuss some related approaches, and we give an outlook in the concluding final section.

2. The approach for systematic knowledge organisation

2.1 Objectives: Considerations on the business case of marketing research for internationalisation projects

Increasing competition and globalisation trends are challenging companies to expand the target markets for their products and services into foreign countries. The process of internationalisation necessitates many decisions. Adequate information (e.g. on relevant products and companies, or the market situation) about the specific industry niche is required to support decision-making and ensure the successful implementation of the internationalisation strategy. The research for the required information – which is often distributed over several websites of companies, research and governmental institutions – usually consists of several individual search steps and requires that the researched information can be tracked at a later stage, too. The business case of marketing research and internationalisation for SMEs represents a challenging application setting for IT-based research tools for the following reasons: Firstly, this type of information research relates both to the business perspective of the company (products, competitors, markets) and to specific branch-related domain information (such as technologies, competitors, trends), thus spanning a heterogeneous field of research questions with a rather broad scope of potentially useful information sources. Secondly, a marketing or internationalisation research is carried out in an SME only occasionally, so research experts are not likely to be found within the enterprise; proper guidance on how to carry out the research is therefore desirable. External consultants who carry out such research on behalf of an SME are familiar with the research methodology and need only limited assistance for the search process itself (e.g. concerning domain specific issues), but they work for a number of clients, possibly even simultaneously, and therefore need functionalities to organise the researches in a project-like manner.


2.2 Methodology: The dual use of conceptual structures for assisted search and for document organisation

As already mentioned and as illustrated in figure 1 below, we use conceptual structures as a central model to support both the individual search processes and the organisation of the findings of the research, which will be referred to as documents throughout the article. In order to realise this functionality we use ontologies to implement the central conceptual structure. An ontology is understood in this article as a common conceptualisation used to structure a domain.

[Figure 1 shows the project ontology (with concepts such as Product, Company, Market, Area and Project, linking questions to answers) at the centre of its usage functions: structuring relevant topics, suggesting search terms, sorting and finding search results, sorting and finding knowledge, and improving and extending the project ontology.]

Figure 1: Project ontology usage overview

The software application distinguishes between system ontologies and project ontologies. System ontologies structure the domain of several projects, e.g. internationalisation, and cover the general methodology for a certain class of researches, such as marketing researches. Project ontologies, however, are specific to one research project, e.g. selling simulation software in France, and should cover the specific facets of that individual research project. When a new research project is defined, the user selects the appropriate system ontology and all modifications are stored in a project specific copy of it. Thus, each project has its own project ontology, extended and instantiated by modifications during the usage of the system. As illustrated in figure 1, the project ontology is the central element supporting the core functionalities:
 Structuring of relevant topics is provided by the ability to label search results manually, which allows them to be found easily within the ontology structure;
 Suggestion of new search terms becomes available as soon as enough results are labelled manually; the ontology is also used to suggest labels for search results automatically. This function further assists in the definition of queries by suggesting topics related to the concepts and instances of the ontology, e.g. legal issues or distinct products, and it also suggests keywords and synonyms as well as relevant Internet pages for concepts and instances;
 Sorting and finding of search results allows saving and retrieving the individually extracted information about relevant knowledge items, e.g. about specific competitors or relevant regions;
 Sorting and finding of gained knowledge is supported by organising the researched information on a conceptual level (e.g. the competitors identified in a marketing research project are represented as instances in the ontology);
 Finally, the improvement and extension of the project ontology itself is an important functionality that lets the user tailor the conceptual structure to the needs of his or her information research.
Since all these activities relate to the same project specific ontology, wherever a new knowledge item (instance), a knowledge type (concept) or a relation is added, it is immediately available for the other purposes as well.
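The following Python fragment is a minimal sketch of this dual structure (concepts with keywords and instances, relations, and document labels); the class names, attributes and example data are our own illustration, not the AMI-SME implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str                                      # e.g. "Company"
    keywords: list = field(default_factory=list)   # suggested search terms / synonyms
    instances: list = field(default_factory=list)  # e.g. concrete competitors

@dataclass
class ProjectOntology:
    concepts: dict = field(default_factory=dict)          # concept name -> Concept
    relations: list = field(default_factory=list)         # (subject, predicate, object) triples
    document_labels: dict = field(default_factory=dict)   # document URL -> set of labels

    def add_instance(self, concept_name, instance):
        self.concepts[concept_name].instances.append(instance)

    def label_document(self, url, label):
        self.document_labels.setdefault(url, set()).add(label)

    def suggest_terms(self, concept_name):
        c = self.concepts[concept_name]
        return [c.name] + c.keywords + c.instances

# A project ontology starts as a copy of a system ontology and is then extended.
proj = ProjectOntology(concepts={
    "Product": Concept("Product", keywords=["simulation software"]),
    "Company": Concept("Company", keywords=["competitor", "vendor"]),
})
proj.add_instance("Company", "ExampleSoft SARL")   # hypothetical competitor
proj.relations.append(("ExampleSoft SARL", "offers", "simulation software"))
proj.label_document("http://example.org/market-report", "Company")
print(proj.suggest_terms("Company"))
```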


Currently one system ontology has already been designed for and integrated in AMI-SME. It describes important concepts in internationalisation and is used to test the system. That ontology has four main top-level concepts, which are crucial for understanding a given situation: product, company, target market and regional area. All of them are further specified with sub-concepts, relations and attributes, but other concepts such as events and associations also exist. In order to obtain valuable search results it can be advisable to augment such an industry independent system ontology with industry or company specific concepts, attributes or relations. This follows the approach in other domains, where generic ontologies exist, e.g. for organisational knowledge, that can be instantiated to a specific situation, e.g. to a specific company (Gualteri, Ruffolo 2005). In addition it is possible to connect the system ontology with project specific ontologies, for example to specify the concepts “product” or “company”. Moreover, it is possible to define other system ontologies related to a different domain, e.g. innovation management, and thus use the system for other purposes. All changes to system ontologies have to be made outside the AMI-SME system, using ontology modelling tools. The sample internationalisation ontology was modelled with the graphically oriented software tool SemTalk (Fillies, Weichhardt, Smith 2005). Existing industry or region specific ontologies can also be imported and used as system ontologies, as long as they follow some formal restrictions regarding the supported relations and constraints. Several independent search projects can exist in parallel or successively. Each user can access the projects that he defined and those to which the project initiator or the system manager assigns him.
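Under the same caveat as before (an illustration of ours, not the shipped ontology), the four top-level concepts described above could be rendered roughly as follows; sub-concepts, attributes and further concepts such as events and associations are omitted.

```python
import copy

# Illustrative rendering of the top level of the internationalisation system ontology.
system_ontology = {
    "Product":       {"sub_concepts": [], "attributes": [], "relations": []},
    "Company":       {"sub_concepts": [], "attributes": [], "relations": []},
    "Target market": {"sub_concepts": [], "attributes": [], "relations": []},
    "Regional area": {"sub_concepts": [], "attributes": [], "relations": []},
}

# Defining a new research project copies this structure into a project ontology,
# which is then extended with project specific concepts, instances and relations.
project_ontology = copy.deepcopy(system_ontology)
project_ontology["Company"]["relations"].append(("sells_into", "Target market"))
```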

3. Description of the technological architecture

The core innovation in AMI-SME is the multi-purpose usage of the ontology, in particular to provide means for organising the local document and knowledge repositories and to support the query definition in order to access the information sources properly. The software system therefore combines domain specific pre-structuring, automatic analysis and manual annotation and structuring functionalities:
 A persistent storage for different search projects and their related queries and results allows working over a long time span on the same project with different users.
 A complex but intuitive and expandable ontology supports the definition of queries and information sources, navigation in researched documents and the organisation of the collected knowledge items.
 The integration of text analysis functionalities for clustering, abstracting, labelling and filtering helps to keep an overview of the growing repository of information pieces.

The software development environment is based on the existing development framework ObjectLedge, which is provided by Warsaw University and allows the reuse of basic components, simplifying the development process (Caltha 2006). The user interface screens are optimised in order to offer a wide range of intelligent functionalities while hiding the complexity of language processing and ontology manipulation from the user. The following five tasks are crucial for the implementation of the AMI-SME specific technological solution, each under the responsibility of one of the project partners:
 Concept, design and integration of the graphical user interface (Fraunhofer IAO, Germany),
 Ontology management and language processing algorithms (GraphiTech, Italy),
 Information sources and user management (CIMNE, Spain),
 Backend functionality such as task execution, storage and persistency (Warsaw University of Technology, Poland),
 Testing of the software solution in terms of acceptance and usability tests from a user's perspective (FH KufsteinTirol, Austria).


The following section describes the usage of ontologies and the implemented interfaces from a user’s point of view. They are structured according to the three main project related views: the assistance of the search-steps during a marketing research, the storage of retrieved documents and the organisation of newly acquired knowledge-items.

3.1 Search management

For each project, separate searches can be defined, executed and saved. There are different support levels for the definition of searches in AMI-SME:
 In ad-hoc search, only keywords and a search name have to be entered, and comments on the search are possible. This makes searching as simple as in other (meta) search engines, but saving and many result processing features are available (see figure 2).
 Assisted search gives easy access to the project ontology, which can be used to build more complex queries and to direct the search to relevant Internet pages, e.g. of an association, a magazine or a company. The query is translated to the format expected by the web services of the selected search engines (currently Yahoo, Google and A9). Search results received from the various search engines are merged and ranked according to relevance (a simple sketch of such a merging step is given below).

[Figure 2 callouts: (1) named searches on different sources; (2) integrated search results; (3) search history; (4) annotation and local storage of documents]

Figure 2: The main search screen for individual searches on various information sources

The results of a search and the search steps are stored locally for later reference. Additionally, result documents can be stored in a local repository and annotated with concepts from the ontology. For the results of all search modes, a clustering functionality structures the results, and a filtering function searches in the results or their metadata. A search history allows the results of existing queries to be viewed and compared, and queries to be re-executed. A session history additionally offers a chronological overview of all searches that were executed or tackled during the current use of the system.
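The merging and ranking step mentioned above could be sketched as follows; the reciprocal-rank scoring and the engine names are our own illustration and do not describe the ranking actually implemented in AMI-SME.

```python
def merge_results(result_lists):
    """Merge ranked result lists from several search engines into one list.

    result_lists: mapping engine name -> ordered list of (url, title) tuples.
    Uses a simple reciprocal-rank score; documents found by several engines
    accumulate a higher score.
    """
    scores, titles = {}, {}
    for engine, results in result_lists.items():
        for rank, (url, title) in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
            titles.setdefault(url, title)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [(url, titles[url], scores[url]) for url in ranked]

merged = merge_results({
    "engine_a": [("http://example.org/a", "Market study"), ("http://example.org/b", "Vendor list")],
    "engine_b": [("http://example.org/b", "Vendor list"), ("http://example.org/c", "Trade fair")],
})
for url, title, score in merged:
    print(f"{score:.2f}  {title}  {url}")
```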

3.2 Organisation of search results in project documents

A database stores all Search Result Documents (SRDs). An SRD represents a document, e.g. a web page, PDF or MS Word file. The same document exists only once in the system, even if it is detected by several searches, so that all labels and annotations attached to it remain available in one place. The view on search results can be limited to new documents.
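As an illustration of this "one document, many annotations" principle (the class and field names below are invented and do not reflect the actual AMI-SME database schema):

```python
class DocumentStore:
    """Illustrative store in which each search result document exists only once,
    keyed by its URL, while labels and comments from all searches accumulate on it."""
    def __init__(self):
        self.documents = {}   # url -> {"title": ..., "labels": set(), "comments": []}

    def add_result(self, url, title):
        return self.documents.setdefault(url, {"title": title, "labels": set(), "comments": []})

    def annotate(self, url, label=None, comment=None):
        doc = self.documents[url]
        if label:
            doc["labels"].add(label)
        if comment:
            doc["comments"].append(comment)

store = DocumentStore()
store.add_result("http://example.org/b", "Vendor list")   # found by search 1
store.add_result("http://example.org/b", "Vendor list")   # found again by search 2 -> no duplicate
store.annotate("http://example.org/b", label="Company", comment="Lists three potential competitors")
print(len(store.documents))   # 1
```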


The results are downloaded upon request only, to save storage resources and to reduce download time. Once downloaded, they are accessible for more detailed analysis, such as automatic abstract generation, or text analysis algorithms including clustering, classification and named entity extraction. A project document screen allows the user to concentrate on the evaluation of the results of all project related searches. Here all results for the whole project are available, but the view can be restricted to the results with a specific ontology label. For each search result, a details pop-up screen is available for annotations, from the search screen as well as from the project document screen (see figure 3). These document details partly contain system suggested values (e.g. from a summariser or language detector, or metadata extracted from the search engine). Here the result document can also be labelled with concepts or instances of the project ontology; in addition, the labelling of all selected documents at once is possible.

[Figure 3 callouts: (1) extracted document metadata; (2) generated information about the document; (3) concepts attached to the result; (4) manual comments on the result]

Figure 3: The document details pop-up screen. It contains extracted metadata provided by the search engine as well as automatically extracted information (e.g. summaries), provides means for entering personal comments, and offers functionality to attach concept labels to the result.

3.3 Arranging the acquired concepts in a project knowledge repository

Relevant knowledge identified from the SRDs can be stored directly in the project specific knowledge base, which is identical to the project ontology: for example, competitors are instances of companies, and they may have values for attributes such as the name of a company's owner. In this way, the user is able to extend the ontology easily and comfortably. A project knowledge screen (see figure 4) allows editing the values of specific instances of the project ontology, e.g. the number of a company's employees, or adding relations, e.g. from a company to its products. From the two other main screens, search and documents, it is possible to open a project knowledge pop-up window to directly view and edit information about identified knowledge items.
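An invented example of how such knowledge items might be captured (the company name, figures and relation names are illustrative only, not project data):

```python
# Each knowledge item is an instance of a concept, carrying attribute values,
# relations to other items, and links to the documents it was extracted from.
knowledge_items = {
    "Simulalia SA": {                      # hypothetical instance of the concept "Company"
        "concept": "Company",
        "attributes": {"number_of_employees": 120, "owner": "unknown"},
        "relations": [("offers", "simulation software"),
                      ("active_in", "France")],
        "documented_in": ["http://example.org/market-report"],
    }
}
print(knowledge_items["Simulalia SA"]["attributes"]["number_of_employees"])
```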


In the project knowledge screen, but also in the project details and project documents screens, the concepts of the project specific ontology are displayed like a folder structure, in the form of a tree in the left column, with the instances of the selected concept listed below. In the knowledge screen only (see figure 4), relations to other knowledge items and relations to labelled documents are additionally presented in two separate lists in the right column, and the related documents are linked. This allows a familiar style of navigation, despite the complex structure of the project ontology itself.

[Figure 4 callouts: (1) details on the selected concept; (2) additional attributes for the concept; (3) concept tree and instance list; (4) concepts and documents related to the selected concept]

Figure 4: The project knowledge pop-up screen, which presents and organises research results on a more abstract (conceptual) level. It contains details and available attributes of selected concepts and shows their relations to other concepts and documents.

4. Evaluation methodology

The evaluation of the software is carried out with a two-step strategy using prototype implementations that will be tested with application partners from different industries. The major goal of the evaluation is to gain insights into how much guidance and organisation in the research is possible using agile semantic technologies without obstructing the creative process of retrieving and organising information from digital sources, as well as into how the retrieval process can be carried out more effectively. The goal of the evaluation method is to translate the two major objectives of the AMI-SME project, assisted search and integrated document storage, into measurable values.

4.1 Objectives of evaluation methodology

The objective of the evaluation method mix is to analyse search software tools with regard to their support of user requirements. An aggregation of the AMI-SME user analysis yields four major user questions:
 Question I: How to deal with heterogeneous information sources?
 Question II: How to design internationally flexible ontologies?
 Question III: How to model the business processes that require information?
 Question IV: How to structure and use the context of information requests?


Transforming these questions into measurable user requirements, we obtained the following set of central criteria.

4.1.1 Requirement I: Handling of information sources
 Does the AMI-SME solution reduce the number of information sources that need to be consulted to obtain the information necessary to answer domain specific questions from marketing research and internationalisation?
 Does AMI-SME provide useful functionality for storing the findings of a search run and thus reduce the number of repositories needed to store relevant information from marketing research projects?

4.1.2 Requirement II: Designing ontologies
Does the design of the General Marketing Ontology reflect the general needs for assisting and structuring the marketing research activity? Measurable indicators are:
 The number of concepts that were used for the assisted search.
 The number of documents that were assigned to concepts in the ontology.
 The distribution of assignments among all concepts of the ontology.
 The number of concepts and relations that were modified in the derived project ontologies for the individual research projects.

4.1.3 Requirement III: Modelling business processes
 The amount of time needed to prepare the results of the marketing information research for subsequent business processes and activities.

4.1.4 Requirement IV: Structure and use of context
Is the structure provided by the ontology useful for organising the research results, and does it provide a suitable contextual framework to support the individual search activities?
 The number of concepts used to support the domain specific search (coverage) for marketing related search projects. This number can be interpreted as an indicator of how well an information request maps onto the conceptual structure of the ontology.

4.2 Testing and evaluation methods

We will have two prototypes with incrementally added functionality and distinguish between the system test and the evaluation of the AMI-SME software. The evaluation will be carried out with the second prototype only (as indicated in the table below). Each of the measures and its application to the AMI-SME software is described briefly in the following:
 Feature Inspection focuses on the feature set of a product. Each feature is analysed for its availability, comprehensibility, and other aspects of usability.
 Usability Evaluation by Question-asking Protocol is a technique where human factors engineers formulate questions about the product based on the kinds of issues of interest.
 Cognitive Walkthrough involves one evaluator or a group of evaluators inspecting a user interface by going through a set of tasks and evaluating its comprehensibility and ease of learning.
 Interviews will be used to ask questions about the product, based on the kinds of issues of interest, after carrying out the evaluation.
 Operational Efficiency is of primary interest for the evaluation and will be based on quantitative as well as qualitative aspects. In order to obtain a measure for the quantitative aspects, we will use the well-known precision/recall measures and do a comparative evaluation of a marketing research with the AMI-SME tool and without the AMI-SME tool, using the traditional information sources (see the illustration below).


Evaluation method                                                1. Design   2. Coding   3. Test & Evaluation   4. Deployment
1 Feature Inspection (Testing)                                       -           -               Y                    -
2 Usability Evaluation by Question-asking Protocol (Evaluation)      -           -               Y                    -
3 Cognitive Walkthroughs (Evaluation)                                Y           -               Y                    -
4 Interviews (Evaluation)                                            -           -               -                    Y
5 Operational Efficiency (Evaluation)                                -           -               -                    Y
(Y = used; - = not used)
Table 1: Overview of the evaluation methods that will be carried out on the first and on the second version of the prototype, assigned to the project phases

Figure 6: Precision and recall measures used to obtain results for the operational efficiency of AMI-SME (source of the figures: http://www.hsl.creighton.edu/hsl/Searching/Recall-Precision.html)

The usual problem of not knowing the optimal information set will be addressed by having the optimal document collection defined by a domain expert (e.g. the known content of a relevant study, or an expert setup of the retrieval task). For the measurement of the qualitative aspects we will concentrate on those features that are not present in common search engines, such as the functions for clustering, classification and abstracting of search results, as well as the means for organising search runs and retrieval results.
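For reference, writing Rel for the set of relevant documents (as fixed by the domain expert) and Ret for the set of retrieved documents, the standard measures referred to above are:

```latex
\mathrm{precision} = \frac{|\mathit{Rel} \cap \mathit{Ret}|}{|\mathit{Ret}|},
\qquad
\mathrm{recall} = \frac{|\mathit{Rel} \cap \mathit{Ret}|}{|\mathit{Rel}|}
```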

5. Related work

Since the advent of the Web, the problem of exploratory search has been addressed in many papers, albeit not sufficiently reflected in practical applications. Its growing importance has recently been confirmed by a number of publications collected in [6], where the issue has been exhaustively revisited.



Moreover, the focus of many approaches has been on the retrieval side of exploratory search and not on the organisation of findings, which was often done in other IT systems such as local file systems, DMS or CMS systems. We have attempted to implement AMI-SME in line with the idea expressed in (Gersh et al. 2006) and, paraphrasing the famous Hamming statement, in the form “the purpose of exploratory search is insight, not data” (Hamming 1997): “In intelligence analysis, as in other domains, that insight comes from the process of exploration, not just from its end result. We are interested in capturing and visually representing analysts’ iterative query processes and insights to help them collect and compare information more effectively, as well as record and share the products of their analytic insights.” To an extent our approach refers to the idea of web farming systems, defined by Hackathorn (Hackathorn 1999) as “…the systematic refining of information resources on the Web for business intelligence”, although the focus of our solution is not tailored to business intelligence but to project specific information research and organisation to fulfil a dedicated information need. On the other hand it should provide means for the quick exploration of new web areas. Until now, there have been very few attempts to implement this in practice, especially at the scale of SME needs. In this sense, AMI-SME is quite a unique system, combining Web exploring functionality with an advanced storage system and text analysis means. One of the main features of AMI-SME is building a repository by a sequence of consecutive queries. A somewhat similar approach has been implemented in the system SenseMaker, presented in (Wang Baldonado and Winograd 1997), which already uses clustering techniques for visualising the search results. Another approach that seems to be close to AMI-SME was INSYDER, reported in (Reiterer et al. 2000). The similarity results from the similar goal of gathering information sources; however, the proposed solutions relied mainly on web agents, whereas in AMI-SME we expect to tap high quality information from the existing web search engines by a simultaneous search covering a number of engines. While there are a number of solutions available for assisting during the individual search steps, there are not many approaches for organising the research findings using a conceptual structure that is also applied to assist the search process. There are a number of solutions for organising web resources in bookmark systems, some of which even follow a collaborative approach (such as del.icio.us, http://del.icio.us), but most of them only keep track of the link to the resource, which is not sufficient for documenting research findings. Newer applications allow clipping parts of web resources (see ClipMarks, http://www.clipmarks.com), but again this web application is implemented as a public collaborative service, which is not appropriate for business professionals in the addressed business case of marketing research and internationalisation.

6. Conclusion and outlook

The realised concept of ontology based search and storage improves the interface between search engines and applied knowledge management: it makes Internet search more flexible and integrates it closely into the users' working context of an information research project, thus bridging the gap between the search activities and the documentation and reporting activities in a research project. Especially for tasks that companies seldom conduct, like internationalising, such a guiding structure helps them not to forget important issues while conducting the research. This is even more important in the case of weakly structured information, as for individual industrial niches, since the reader has to extract relevant content on his own and is now supported in storing the results in a proven way. The main result of the AMI-SME project is the presented prototype. The following lessons were learnt during the development process:
 Users do not only want intelligent search; they are strongly interested in result handling, regarding both the retrieved documents and the extracted knowledge. The ontology approach organises the results closely related to their business context.
 If several companies are supposed to agree on one common ontology, the ontology design needs close cooperation and a clear method, since there are different conceptualisations in the partners' minds, as well as different intentions for using the ontology.


 Each ontology based system faces the question of how much ontology editing it should allow, and what kinds of manipulations should only be done outside the system.
The general idea can be transferred to topics other than internationalisation, e.g. product development or innovation management. Also, the automatic extraction of available content from the Internet into the knowledge base is a research activity that would additionally extend the usage of the software, either by the integration of relevant RDF sources, e.g. the World Factbook, which contains a wide range of useful information about countries (CIA 2006), or with specific wrappers, as in the Piggy Bank project (Huynh et al. 2005).

Acknowledgements

The described IT solution is being developed within the EU-supported CRAFT project “AMI-SME: Analysis of Marketing Information for Small And Medium-Sized Enterprises” (Contract No. 017566). More information on the project can be found at http://www.ami-sme.org.

References
Caltha (2006) “ObjectLedge”, [online], http://objectledge.org
Fillies, C., Weichhardt, F. and Smith, B. (2005) “Semantically Correct Visio Drawings”, Proceedings of the 2nd European Semantic Web Conference, Heraklion, Crete, 29 May – 1 June 2005.
CIA (2006) “The World Factbook”, [online], http://www.cia.gov/cia/publications/factbook
Gersh, J., Lewis, B., Montemayor, J., Piatko, C. and Turner, R. (2006) “Supporting Exploratory Search: Supporting Insight-Based Information Exploration in Intelligence Analysis”, Communications of the ACM, Vol. 49, No. 4.
Gualteri, A. and Ruffolo, M. (2005) “An Ontology-Based Framework for Representing Organizational Knowledge”, Proceedings of I-KNOW '05, Graz, Austria, 29 June – 1 July 2005, pp. 71-78.
Hackathorn, R. (1999) Web Farming for the Data Warehouse, Morgan Kaufmann.
Hamming, R. (1997) The Art of Doing Science and Engineering: Learning to Learn, CRC Press.
Huynh, D., Mazzocchi, S. and Karger, D. (2005) “Piggy Bank: Experience the Semantic Web Inside Your Web Browser”, Proceedings of the International Semantic Web Conference 2005.
Reiterer, H., Müller, G., Mann, T. M. and Handschuh, S. (2000) “INSYDER - An Information Assistant for Business Intelligence”, Proceedings of the 23rd Annual International ACM SIGIR Conference.
Wang Baldonado, M. Q. and Winograd, T. (1997) “SenseMaker: An Information-Exploration Interface Supporting the Contextual Evolution of a User's Interests”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.


Knowledge Intermediation in Regulated Electronic Commerce Environments. The Case Study of NETMA (NATO Eurofighter and Tornado Management Agency)
Ettore Bolisani1 and Roberto Ruaro2
1 Department of Management and Engineering, University of Padova, Italy
2 Italian Air Force, RMS, Villafranca Airbase, Italy
[email protected]
[email protected]
Abstract: Although ecommerce is often seen as a tool for the direct connection between sellers and buyers, the function of “digital intermediation” is critical. While the presence of intermediaries is considered usual in the case of “open electronic markets” - where a proper and trustworthy environment has to be established in order to perform spot transactions - their role can be equally important in the case of regulated ecommerce systems, i.e. in contexts where long-term partners set a strict agreement on the conditions, rules, and standards for electronic transactions. Indeed, ecommerce is not just a matter of "exchanging orders electronically", but implies an articulated exchange of knowledge. In this paper, we argue that the role of intermediaries in ecommerce, as well as their managerial implications, can be better understood if these mediating functions are seen as functions of knowledge intermediation between electronic traders. The paper presents the study of NETMA, an agency whose role is to underpin electronic transactions between parties in the e-procurement system of spare parts for military aircraft within NATO. The case study highlights that even in a highly regulated environment the role of knowledge intermediary between trading parties is essential and involves different complex activities.
Keywords: e-Commerce, knowledge intermediation, regulated electronic markets, knowledge exchanges, defence e-Procurement, case study

1. Introduction

e-Commerce has often been seen as a tool for the direct connection between sellers and buyers. However, the function of “digital intermediation” between parties - i.e. third parties that help the traders to solve the problems associated with electronic transactions - is critical (Sarkar et al., 1995). The presence of intermediaries is generally considered usual (and, somewhat, necessary) in the case of “open electronic markets”, where a proper and trustworthy environment has to be established in order to perform spot transactions between many suppliers and buyers. However, mediating functions prove to be equally important even in the case of regulated ecommerce systems, i.e. systems where partners establish long-term relationships and set strict agreements on conditions, rules, and standards for e-Commerce. This can be explained when considering that ecommerce is not just a matter of "exchanging orders electronically", but implies an articulated exchange of knowledge during the life cycle of the systems. With regard to this, there is still a need for empirical studies to explore the complex mechanisms and activities of knowledge intermediation that are essential in such cases, as well as their managerial and economic implications. This paper presents the study of NETMA (NATO Eurofighter and Tornado Management Agency), an agency whose role is to underpin electronic transactions between parties in the e-procurement system for military aircraft within NATO. In particular, the case focuses on EPS (Enhanced Procurement System), the e-procurement system developed to support the supply of spare parts in the Tornado programme. The paper highlights the peculiarities of knowledge exchange that are involved in the design, operation, and maintenance of such a complex system, and the essential role of NETMA as a Knowledge Intermediary (KMY) between the parties involved.

2. Knowledge management and ecommerce The interplay between knowledge management (KM) and ecommerce is increasingly recognised in the recent literature (Bolisani and Scarso, 2003). Indeed, ecommerce is a way to perform business transactions, which always implies the management and exchange of knowledge between traders.


The particular electronic transmission performed by an ecommerce application thus underpins a communication of knowledge. In other words, any commercial transaction is not simply the transmission of pure data (e.g. quantity and price of products), but implies a transfer of knowledge between trading partners about all the aspects of the transaction and the associated decisions (Holsapple and Singh, 2000). In this process, different types of knowledge can be identified. One fundamental distinction is that between tacit and explicit knowledge, although other classifications are commonly used in KM literature. This appears to be important for the design and operation of a specific ecommerce system. First, some applications may be more suitable for managing certain kinds of knowledge than others (Zack, 2001). Also, to deal with all the relevant knowledge involved, a complex combination of different ecommerce tools may be necessary. In general, it can be said that although computers just transmit data (i.e. that part of knowledge that has been made explicit entirely), there may be the need for a tacit or contextual component of knowledge to interpret and exploit such data. As the empirical evidence shows, in a particular industry or value chain where common languages, codes and trading procedures have been established, electronic transactions are easier. In other cases, the creation of a shared knowledge environment is a time consuming and expensive process that requires long-term collaboration. As is well known, KM activities are depicted as a number of distinct processes (for instance: knowledge generation, acquisition, processing, delivery, storing, etc. – Mårtensson 2000). Such processes greatly differ from one another, and their shape depends on the particular circumstances. Similarly, the transfer of knowledge in ecommerce is generally directed from specific sources to specific receivers, as this specificity gives value to the process itself. In substance, knowledge is not transferred “at random” or made available like a sort of public good, but in relation to its effective use. Thus, any ecommerce application is valuable when applied to the specific KM process supported. Furthermore, the way in which a single process is performed may influence the outcomes of the subsequent processes. Also, the same KM process may require different methods and tools, in relation to the knowledge type involved (Becerra-Fernandez and Sabherwal, 2001). KM and ecommerce should also be seen in terms of their linkage with knowledge relations (Seufert et al. 1999), that allow traders to acquire external knowledge, create fresh knowledge, exchange or share it with business partners and, eventually, use it to create value. Clearly, building and maintaining effective knowledge relations is no easy task. In principle, ecommerce can combine the positive benefits of knowledge sharing with the effective communication underpinned by information and communication technologies. But the actual role of ecommerce is always linked to the specific nature of trading relations (for instance: what kind of economic co-ordination mechanism are used; how these influence knowledge exchanges; what level of trust is required to sustain such relations; and so on). Thus, the effectiveness of a specific ecommerce application depends on the aims, nature of the task, processes performed, and organisational context of use. 
In the case of e-procurement, since this is a particular application of ecommerce, the points previously illustrated can be particularised in more detail. First, e-procurement is typically a b2b application used to support transactions of intermediate goods along supply chains. The value and volume of transactions, the characteristics of traded goods, and the modality of order processing are thus very different from b2c systems. Accordingly, the transactions (and the associated cognitive transfers) depend specifically on the modalities with which the firms handle commercial data internally, on the goals of efficiency and, more generally, on the business strategies of traders. Each e-procurement environment (and, consequently, the kind of knowledge transfer) is integral to the particular configuration of inter-firm relationships. For instance, at one extreme, highly integrated supply chains imply the substantial cognitive homogeneity of the firms in terms of shared visions, common languages, shared procedures of information handling, and so on. Long-term relations are required to establish a common but very specific knowledge environment, which, once built, makes possible the implementation of highly automated (but rigid) e-procurement systems, such as EDI. At the opposite extreme, open electronic markets and online auctions are designed to support spot transactions between occasional trading partners. In principle, this allows high


flexibility, but restricts the application to specific transactions, where knowledge contents can be made explicit and transferred by electronic means among all the potential traders, independently of their specific characteristics or aims.

3. Knowledge intermediation in e-procurement

The effective functioning of ecommerce systems often requires intermediating functions. Intermediaries act as an interface between supply and demand to make transactions more efficient, and it can be argued that many of these functions (e.g.: identification of needs; information on products and suppliers; comparisons; distribution of information on products; order entry; etc.) can be seen in relation to their cognitive contents (Bolisani, Scarso and Di Biagi, 2003). In substance, the value added by intermediaries consists in “bridging” over a cognitive gap between suppliers and buyers, thus facilitating the exchange of knowledge for settling transactions. The “cognitive” implications of intermediation raise key issues for ecommerce. Firstly, although the huge amount of information available on computer networks extends the cognitive capabilities of the users, the growing complexity of cyberspace makes its exploitation even more difficult. This suggests the importance of “knowledge brokers”, capable of assisting the users in the management of online knowledge sources (Hargadon and Sutton, 2000). Another topic is the structural effect of e-commerce on intermediation. As is well known, theoretical arguments and empirical observations show that ecommerce facilitates the creation of new forms of intermediaries (Sarkar et al., 1995). In the case of b2b relations, the literature also proposes analyses of the restructuring effects on value chains, with the creation of new intermediaries establishing connections inside virtually-glued value chains (Lefebvre and Lefebvre, 2000; Upton and McAfee, 1999). Recently, some studies have explored the new roles of knowledge intermediation inside ecommerce b2b relations (Scarso et al, 2005). Knowledge intermediaries (KMYs) can be, first, associated with the recognition of knowledge as a core resource for business. Since a single firm may encounter several problems in accessing external sources and managing knowledge relations, this may open opportunities for innovative intermediating services that act as an interface between knowledge sources and users, thus favouring knowledge exchanges. Also, the implementation and management of complex e-procurement systems requires specific capabilities to select or design the most adequate solutions. This paper focuses on the role of service organisations supporting the implementation and management of e-procurement systems. In particular, despite the fact that ecommerce applications are often intended as tools to directly connect business partners, and these tools are generally designed to automate procurement activities in integrated supply chains, the assumption here is that the cognitive implications of any e-procurement implementation are so important that the firms often need the assistance of an external KMY service. The functions, roles, and importance of KMY services in e-procurement systems can vary largely according to several aspects (i.e.: number of trading partners, configuration of the supply chain, products traded, homogeneity of firms, degree of automation of transactions, etc.). The case study described in the next section considers a regulated e-procurement environment, characterised by pre-defined products and processes, and a trustworthy set of inter-firm relationships.
We will show that, even in such situations, the problems involved in the management of electronic transactions may be so significant that KMYs assume a vital function in both the design and the effective operation of the system. We will illustrate this point by adopting a cognitive interpretation of such issues.

4. Case study: NETMA as knowledge intermediary in defence e-procurement

4.1 NATO procurement and the role of NETMA

Defence procurement, and particularly the provision of sophisticated weapon systems, represents a particular situation compared to the “normal” supply chains of private sectors.


Generally, there is one buyer (the customer Nation) in a monopsony position, and a few sellers that generally reduce to one lead contractor. Thus, for each weapon system, there is often a “one-to-one” bilateral relationship. In principle, this environment favours long-term trustworthy links, which is an important pre-condition for the establishment of a shared knowledge context. As we argued before, this is the starting point for the implementation of highly automated e-procurement systems. However, as we illustrate here, all this is not so simple. First, the establishment of the bilateral relationship between buyer and seller is not easy. A long-lasting process of knowledge exchange is required. After the identification of the requirements of the new weapon system (made by the Ministry of Defence - MoD), the selected supplier becomes responsible for the supply of the weapon system and also for the provision of spare parts. This is necessary to ensure the efficiency and reliability of provisions, but to achieve this goal a complicated activity of knowledge exchange and standardisation is needed, which can take months or years. The procurement relationship requires pre-negotiated detailed contracts that cover all procurement aspects (supply, invoicing, payments, after-sale, disputes, etc.). Also, it should be noted that the supplier can be a single company but, more frequently, is a group or a joint venture of several companies with different roles and functions, which implies the co-ordination of all activities and the sharing of critical information between several partners. In addition, the increasing sophistication of weapon systems makes defence procurement increasingly complicated. The high cost of R&D and management favours aggregation on both the demand (i.e.: MoDs) and supply side (producers). Very often, a certain number of Nations with an interest in a common weapon system aggregate to reach a “critical mass”. In the case of NATO, this organisation plays an active role in developing common procurement strategies. Although plans and the standardisation of weapon systems are considered a responsibility of each Nation, NATO supports and co-ordinates the co-operation of all members with industrial production. This is also done (see Article 9 of the Treaty) via international Agencies, created for the joint development and production of weapon systems. The main purpose of these Agencies is thus to manage the different needs of the Nations along the entire life cycle of a certain product (from design, to production, to in-service support). An Agency becomes the single purchaser of all industrial products related to the military programme, and conducts business relationships with industry. National MoDs usually share the costs of the Agency, in proportion to their participation in the programme. NETMA (NATO Eurofighter and Tornado Management Agency), which is the case study illustrated here, is the agency created for the development, production and in-service support of two military aircraft, namely the “Panavia Tornado” and the “Eurofighter Typhoon”. NETMA was created in 1996 by merging two different NATO agencies (NAMMA, responsible for the management of the Tornado, and NEFMA, for the Typhoon), and is located in Munich (Germany). The structure, mission, and role of NETMA are defined in the Memorandum of Understanding (MoU), signed by four Nations (UK, Germany, Italy and Spain) in 1995.
The organisation is fully hierarchical, with three Directorates under the control of a General Manager, who reports to a Board of Directors (composed of national members): a Personnel Director, a Commercial Director, and an Operational and Engineering Director. The mission of NETMA is to manage all the procurement aspects of the Typhoon and Tornado programmes, including the development, production and supply of spare parts and dedicated equipment, and in particular:
 To receive, merge and forward to the industry representatives all requirements coming from the MoDs. This also implies the assessment of operational performance, production and supply times, and prices;
 To write, negotiate with industry, and sign all procurement contracts, as well as any modifications, in accordance with the national procurement strategies;
 To manage the programme, in terms of continuous monitoring and problem solving;
 To plan the budget and control the commitments;


 To perform payments and accounting.
The Logistic Support Section (TL3) is the part of the Tornado Division that, working for the Operational and Engineering Director, manages all the procurement of spare parts and the repair and overhaul (R&O) activities of the Tornado. In particular, this section is responsible for the management of the transaction flows related to the procurement of spare parts via an e-procurement system called the “Enhanced Procurement System” (EPS), connecting the Nations and suppliers involved in the Tornado programme. Next, we provide a short description of the characteristics and peculiarities of the EPS, on the basis of which we will be able to show the role of NETMA as a KMY.

4.2 EPS - The Tornado e-Procurement system
As said, “traditional” defence procurement is usually characterised by a one-to-one relationship between a single MoD and the main contractor. This relationship is often maintained via a “pen and paper” system, with no use of e-commerce. By contrast, a programme like the Tornado, even though it maintains some characteristics of the “one-to-one relation” (such as a regulated market and the use of pre-negotiated contracts), is characterised by a more complex configuration, involving more than one subject on both the demand and supply sides. Three national MoDs (Germany, UK and Italy) are the customers, and two main consortia (Panavia, responsible for the production of the airframe and formed by EADS, BAES and Alenia; and Turbo Union, responsible for the engine and formed by MTU, Rolls Royce and Fiat Avio) are the main contractors. NETMA was mandated to automate all procurement procedures, which resulted in the development of the EPS. The aim was to establish a common methodology for exchanging information and to improve effectiveness and efficiency in the management of spare parts procurement, which is an extremely critical aspect of complex weapon systems. If this activity is not carefully managed, it can affect the operational effectiveness of the Air Forces. On the other hand, if efficiency levels are not maximised, this can lead to budget problems for the customers or unused industrial capacity. The system connects all the users in real time through a dedicated network and a central node (CADPS - Central ADP System) directly managed by NETMA. Various information flows are handled, including on-line catalogues, order entry, consignment, and invoicing. The system allows users to perform the typical functions of an EDI system (in practice, orders and invoices), which is compatible with a business context characterised by pre-codified information and long-term rigid relationships. As a matter of fact, the EPS manages transactions based on standard formats codified by AECMA (European Association of Aerospace Companies) and validated by CADPS. All transactions are stored in the NETMA databases and made available for monitoring and performance measurement. Clearly, all this would not be possible without the fundamental functions of an intermediary organisation (i.e. NETMA), which is necessary to coordinate business transactions and thus to regulate the flows of codified knowledge between all the partners.
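For readers who prefer a concrete illustration of the validate-and-forward logic that a central node such as CADPS performs, the following minimal Python sketch shows one possible way of checking an incoming transaction against a pre-agreed message format before routing it to the intended recipient. All message types, field names and party identifiers below are hypothetical examples, not the AECMA standards or NETMA's actual implementation.

```python
# Illustrative sketch only: hypothetical message formats, not the AECMA standards used by the EPS.

# Each pre-codified transaction type lists the fields that both parties have agreed on in advance.
AGREED_FORMATS = {
    "ORDER":   {"order_id", "part_number", "quantity", "required_by", "buyer", "supplier"},
    "INVOICE": {"invoice_id", "order_id", "amount", "currency", "buyer", "supplier"},
}

def validate(message: dict) -> list:
    """Return a list of problems; an empty list means the message conforms to the agreed format."""
    problems = []
    fmt = AGREED_FORMATS.get(message.get("type"))
    if fmt is None:
        problems.append(f"unknown transaction type: {message.get('type')!r}")
        return problems
    missing = fmt - message.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    return problems

def route(message: dict, archive: list) -> str:
    """Archive the transaction (for later monitoring) and return the party it should be forwarded to."""
    problems = validate(message)
    if problems:
        raise ValueError("; ".join(problems))   # the central node flags the error instead of forwarding
    archive.append(message)                     # stored for performance measurement, as in the EPS
    return message["supplier"] if message["type"] == "ORDER" else message["buyer"]

if __name__ == "__main__":
    archive = []
    order = {"type": "ORDER", "order_id": "A-001", "part_number": "PN-42",
             "quantity": 3, "required_by": "2006-12-01", "buyer": "MoD-IT", "supplier": "Panavia"}
    print(route(order, archive))  # -> Panavia
```

The point of the sketch is simply that the intermediary adds value even for fully codified exchanges: it enforces the shared format, keeps the archive, and decides where each message goes.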

4.3 NETMA as knowledge intermediary
Considering the EPS as a means for exchanging knowledge in standardised formats and agreed messages, and having introduced the central position of NETMA in this context, we can now illustrate how NETMA can be seen as a “knowledge intermediary” (KMY), and discuss the main issues arising, i.e. the kind of intermediation that the Agency is supposed to perform, how critical the role of NETMA is, and how much value is added to the supply chain and in which terms. The proposed analysis uses the classification of explicit vs. tacit knowledge, where the former indicates knowledge contents that are simple to obtain, transfer and store, while tacit knowledge relates to ideas, perceptions and experience and is impossible or difficult to codify and, thus, to transfer electronically. The role of NETMA as a KMY can be seen in relation to both kinds of knowledge.
Explicit knowledge intermediation. Due to its nature, the EPS system allows only the exchange of explicit knowledge contents (i.e. order forms, invoices, etc.), which can be transferred simply through the use of pre-defined and fixed codes.

Each message arrives at CADPS (NETMA) where, after automatic handling, it is sent to the receiver’s information systems. The interpretation of this coded knowledge is relatively easy, since meanings and consequent actions are pre-defined (as is usual in EDI-based systems). Nevertheless, the KMY role of NETMA is valuable. NETMA plays the essential role of facilitator, ensuring the correct functioning of the entire system. In other words, although the communication between the information systems of all the parties involved can be seen as a set of automatic connections, NETMA ensures that all messages sent to CADPS arrive at their destination on time and with no errors, thus avoiding system failures, providing protection against malfunctions and intrusions, and resolving disputes and misunderstandings. What is more, in performing this fundamental activity, the NETMA staff must have the competence and ability both to constantly monitor the flows of messages and to intervene technically if a problem occurs. However, this is only one part of the KMY role of NETMA.
Tacit knowledge intermediation. In addition to the operational management of the e-procurement system, NETMA also has the task of continuously monitoring all order cycles, from order entry to payment. From a KM perspective, this implies three main activities directly involving the transfer of tacit knowledge, i.e.: a) the management of updating meetings among partners; b) the development of ad-hoc advancements of the e-procurement system; and c) the creation and delivery of “fresh” knowledge on the use of the system.
 A) Management of meetings among partners. NETMA has been asked by all parties to manage procurement meetings at different levels, with the aim of exchanging knowledge “outside” the core technical staff, and of exchanging useful information, impressions, and opinions on the system's operation that cannot be clearly communicated via EDI. A first level of meetings has the purpose of discussing the so-called “low stock” situations, i.e. cases where wrong forecasts of consumption rates or missing deliveries have created critical shortages in the availability of spare parts. This information is essential to eliminate malpractice and to ensure a correct use of e-procurement by all parties. A higher level of meetings regards the discussion of procedures, procurement policies, and logistics strategy. NETMA is entirely responsible for calling the meeting, organising the event, recording the discussion, and monitoring the way forward. Such meetings thus provide a “tacit interpretation” and comprehension of the phenomena and collateral issues that are associated with the exchange of electronic data. This complex role of “meeting facilitator” is clearly typical of a context characterised by long-term relationships, as is the Tornado programme; it allows the exchange of precious, non-standard knowledge and the creation of a collaborative environment.
 B) Ad-hoc developments. This is strictly connected to the life cycle of the programme itself and the consequent flexibility required. The production of Tornado aircraft stopped in 1996, as did the regular production of parts. Consequently, the major challenge has become keeping the flow of parts active. The problem is that the stocks, created in accordance with the “initial provisioning”, are now nearly exhausted, and new parts have to be ordered directly from the manufacturers.
However, without a regular ordering level, the main suppliers may find it difficult to provide parts that meet the specifications. NETMA was asked to find a solution to this issue by creating a system for keeping production open and avoiding cost increases. A new component of the procurement system was created, i.e. the “long term ordering” procedure (“Reprovisioning Conference” – RPC). This implies a procedure to select requested items, quantify requirements, and start negotiations for forecasted provisions. A huge amount of data needs to be exchanged among partners. From the buyer’s viewpoint, this requires new forecasting and evaluation techniques to handle information (such as the future size of the air fleet, average flying hours, adoption of new systems, etc.) and translate it into explicit data (in terms of quantities, times, and prices) that can be used by suppliers. On the industry’s side, firms have to inform buyers about their business strategies (such as closure of plants, outsourcing strategies, modifications of components), and provide all the data that allow MoDs to plan their procurement. To support this kind of knowledge, characterised by a low level of standardization, the EDI-based system is of no use, and NETMA had to develop new procedures and ad hoc systems. This example clearly shows the role of NETMA in solving particular KM situations that can emerge during the life cycle of a defence programme.

 C) Creation and delivery of fresh knowledge. NETMA, thanks to its role in the EPS procurement methodology, supervises the evolution of all parts of the Tornado business. This monitoring activity helps the Agency to control the progress of the programme and to formulate the actions that have to be taken to keep a good level of service quality, for the benefit of all partners. Again, this is clearly an intermediation function that involves cognitive aspects. In addition, since this intermediation is associated with the management of electronic data, we can again see the role of NETMA as that of a KMY. In fact, NETMA manages the EPS central database, and can thus extract significant information that can be elaborated, organised and made available for further analysis. This is a sort of Management Information System (MIS) activity conducted by the NETMA TL3 staff and sponsored by both the Nations and the industry partners. The MIS activity consists of monitoring the ongoing procurement business in terms of the level of ordering, adherence to procedures and timescales, and performance. The value added by the Agency consists of the production and delivery of fresh knowledge to help analysis, monitor trends, show performance, etc. Again, this implies a complex activity of managing tacit and explicit knowledge combined together, and of translating knowledge contents from one form to another (Nonaka and Takeuchi, 1995). For instance, a new series of statistics (“Key Performance Indicators”) was established by the top-level management of NETMA, the MoDs and the suppliers, to focus on particular issues. To formulate and implement such indicators, both explicit knowledge (i.e. quantitative data and their meanings) and tacit knowledge (i.e. knowledge of requirements and strategies, industry characteristics, etc.) are involved, as is typical in the development of MIS tools. What is relatively new is that a central player (i.e. NETMA) takes the responsibility for combining the different characteristics of the various partners to provide a “shared view”.
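As a purely illustrative sketch of the kind of MIS activity described above, the snippet below shows how simple indicators such as an on-time delivery rate and an average order lead time could be derived from archived order records. The record structure and the two indicators are hypothetical examples, not the Key Performance Indicators actually agreed between NETMA, the MoDs and the suppliers.

```python
from datetime import date

# Hypothetical archived order records; field names are illustrative, not the EPS schema.
orders = [
    {"order_id": "A-001", "ordered": date(2005, 1, 10), "due": date(2005, 3, 1), "delivered": date(2005, 2, 20)},
    {"order_id": "A-002", "ordered": date(2005, 2, 5),  "due": date(2005, 4, 1), "delivered": date(2005, 4, 15)},
]

def kpis(records):
    """Compute two example indicators: share of on-time deliveries and mean lead time in days."""
    delivered = [r for r in records if r.get("delivered")]
    on_time = sum(1 for r in delivered if r["delivered"] <= r["due"])
    lead_times = [(r["delivered"] - r["ordered"]).days for r in delivered]
    return {
        "on_time_rate": on_time / len(delivered) if delivered else None,
        "avg_lead_time_days": sum(lead_times) / len(lead_times) if lead_times else None,
    }

print(kpis(orders))  # {'on_time_rate': 0.5, 'avg_lead_time_days': 55.0}
```

The explicit part (the archived quantitative data) is only half of the story: deciding which indicators matter, and how to interpret them, is where the tacit knowledge of requirements, strategies and industry characteristics comes in.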

5. Conclusion
Reading the role of NETMA as a KMY highlights the essential functions played by this Agency in the context of the regulated e-procurement system described above. These “knowledge mediating functions” become clearer if we consider the different stages of the system's life cycle. In fact, e-procurement is aimed at the electronic exchange of “explicit knowledge contents” (i.e. orders, etc.), but this requires that a high degree of formalisation of contents is reached first. To achieve this result, buyers and sellers need to share a knowledge context that allows them to: a) understand electronic messages; b) activate the proper procedures to generate or handle them; and c) share the goals of the system. The experience of NETMA clearly shows how the function of a KMY can be essential here, to facilitate mutual knowledge among the interacting parties in the preliminary stage of design of the e-procurement system. Even once knowledge contents have been defined and agreed on (i.e. electronic order formats are set), the operation of the e-procurement system still requires that the electronic exchanges meet the performance requirements. Again, deep knowledge of the entire system is essential here, as the KMY plays the role of “referee” in this playground. Equally important, the KMY performs functions in the development and maintenance of the system. For instance, the implementation of a mechanism to measure the performance of e-procurement from the viewpoint of the users requires the capability to capture knowledge requirements from business partners, translate them into key performance indicators, develop appropriate tools to extract and calculate data, and provide evaluations to the users. In short, the activity of knowledge intermediation can be seen as involving three main aspects, i.e.:
 The capability to integrate exchanges of tacit and explicit knowledge, since both kinds of knowledge are essential to the design and functioning of the system;
 The capability to select and use the appropriate mechanisms to support those knowledge exchanges (i.e. electronic vs. face-to-face mechanisms, formal vs. informal mechanisms, etc.);
 The capability to act as a recognised “third party” in such knowledge exchanges, thus guaranteeing a fair balance between the needs and viewpoints of the different parties.

Acknowledgements
This paper contributes to the FIRB 2003 project “Knowledge management in the Extended Enterprise: new organizational models in the digital age”, funded by the Italian Ministry of Education, University and Research.

References
Becerra-Fernandez, I. and Sabherwal, R. (2001) “Organizational knowledge management: a contingency perspective”, Journal of Management Information Systems, Vol. 18, No. 1, pp23–55.
Bolisani, E. and Scarso, E. (2003) “Editorial: Seeking out the links between knowledge management and electronic commerce”, International Journal of Electronic Business, Vol. 1, No. 2, pp107-117.
Bolisani, E., Di Biagi, M. and Scarso, E. (2003) “Knowledge Intermediation: New Business Models in the Digital Economy”, Proceedings of the 16th Bled eCommerce Conference, pp987-999.
Holsapple, C.W. and Singh, M. (2000) “Electronic commerce: from a definitional taxonomy toward a knowledge-management view”, Journal of Organizational Computing and Electronic Commerce, Vol. 10, No. 3, pp149–170.
Lefebvre, L.A. and Lefebvre, E. (2000) “Virtual Enterprises and Virtual Economy: Manifestations and Policy Challenges”, International Journal of Technology Management, Vol. 20, No. 1/2, pp58-71.
Mårtensson, M. (2000) “A critical review of knowledge management as a management tool”, Journal of Knowledge Management, Vol. 4, No. 4, pp204-216.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company, Oxford, Oxford University Press.
Ruaro, R. (2004) Prospettive dell’e-procurement nel settore Difesa: il programma “in-service support” del velivolo Tornado, Tesi di laurea, Università di Padova, DTG.
Sarkar, M.B., Butler, B. and Steinfield, C. (1995) “Intermediaries and Cybermediaries: A Continuing Role for Mediating Players in the Electronic Marketplace”, Journal of Computer Mediated Communication, Vol. 1, No. 3.
Scarso, E., Bolisani, E. and Di Biagi, M. (2005) “Knowledge intermediation”, in Schwartz, D. (ed.), Encyclopedia of Knowledge Management, Hershey (PA), IDEA Group Publishing.
Seufert, A., von Krogh, G. and Bach, A. (1999) “Towards knowledge networking”, Journal of Knowledge Management, Vol. 3, No. 3, pp180–190.
Upton, D.M. and McAfee, A. (1999) “The Real Virtual Factory”, in Tapscott, D. (ed.) Creating Value in the Network Economy, Boston, Harvard Business School Press.
Zack, M.H. (2001) “If managing knowledge is the solution, then what’s the problem?”, in Malhotra, Y. (ed.) Knowledge Management and Business Model Innovation, Hershey, PA, Idea Group Publishing.


The Cognitive Rationalization of Professional Services: Expected and Unexpected Effects of Knowledge Management Systems
Marion Brivot
HEC, Jouy-en-Josas cedex, France
[email protected]
Abstract: Professional service firms (PSFs) are increasingly coming under pressure in the three areas of cost, flexibility and quality from clients seeking to rationalize their service purchasing behavior. To survive in this selective environment, most PSFs are forced to think in terms of efficiency and productivity, two values imported from the industrial world that may conflict with their traditional professional identity. This paper illustrates how environmental pressures can translate into organizational tensions that finally bring about the adoption of a rationalization plan. Our purpose is to compare the expected rationalizing effects of knowledge management systems with their unexpected effects on PSFs’ work and structure. The research concentrates on the case study of a law firm, a former member of a “Big Four” worldwide network of auditors, tax and legal advisers and management consultants, and analyzes the dialectical interactions between its knowledge management system, its professionals and the structural properties of the firm, by using an adapted version of Orlikowski (1992)’s structurational theory of technology. This research fits into the structural perspective on the impact of technology on organizations. It also borrows from the sociology of professions and adds to the literature on organizational knowledge management.
Keywords: Knowledge management systems, professional service firms (PSF), structurational theory of technology, work rationalization

1. Introduction
Mergers and acquisitions and the globalization trend in many industries have resulted in fewer large target clients for PSFs, which have found themselves up against tougher competition (Stumpf, Doh and Clark 2002). This encourages clients to benchmark PSFs, negotiate lower billing rates and place limits on the amounts of fees they are prepared to pay, despite growing demand in a number of professional service areas (Huon 2004). PSFs, whose business models are based on the selling of technical expertise coupled with diagnostic and problem-solving skills, must also absorb and comprehend their clients’ increasingly complex and constantly changing products and services, and cope with the growing heterogeneity and volume of their clients’ technical and commercial knowledge. This general economic acceleration and diversification (Hatchuel and Weil 1992) affects not only the technical complexity of PSFs’ work, but also the ever-shortening lifecycle of their solutions and ideas. As quality and flexibility are no longer differentiating factors, one of the few remaining discriminators has to be cost, which forces PSFs to think in terms of efficiency and productivity, two values imported from the industrial world that potentially clash with the very “soul of professionalism” (Freidson 2001). As a result of these external pressures, most professional service firms are tempted to standardize their service offerings, control their costs, and rationalize their production in the same way their clients are rationalizing their purchasing behaviors. PSFs’ rationalization efforts usually comprise three steps (Gadrey 1999): the cataloging of recurrent business issues and client questions, the formalization of diagnostic and problem-solving methods and techniques, and the adoption of a portfolio of organizational routines.
This study examines the effects – whether intentional or accidental, negative or positive – of the use of a knowledge management system (KM system) at a large French law firm, ABC & Associates (not its real name), with reference to the rationalization and efficiency strategy the system is supposed to serve. Two types of consequences are considered: processual and structural consequences (Orlikowski 1992, 2000). This paper continues with a presentation of the theoretical framework and methodology adopted. After a section highlighting the main results, the conclusion discusses the contribution of this study and identifies some of the research avenues it opens up.

2. Theory
2.1 Knowledge management systems and organizational changes
Following the publication of the pioneering work by Polanyi (1958), developed by evolutionists (Nelson and Winter, 1987) and management researchers (including Boisot 1995; Drucker 1993; Grant 1996; Nonaka 1991, 1994; Spender 1996; Starbuck 1992), knowledge management (KM) came to be considered by knowledge-intensive firms as the new Holy Grail of management in the early 1990s. Fifteen years later, contrary to the forecasts of a number of academic observers (Trepo 1987) who believed KM to be nothing more than a passing fad, far from disappearing from the corporate agenda, KM seems in fact to have grown in popularity. Bain & Co’s Management Tools and Trends 2005 survey shows that “54 per cent of companies use KM systems, compared with 28 per cent in 1996”. This popularity also extends to the academic sphere, as confirmed by Bang (2005). This study proposes to analyze the effects of the use of a KM system on a large law firm’s structural properties and the work of its professionals. Assessing the effects of a technological instrument does not necessarily require a deterministic view, or a technocratic approach. The term “KM system” as used here refers to a “technology-in-use”, i.e., an artifact comprising hardware and software, with physical properties and functionalities on the one hand, and two types of uses on the other hand: prescribed and actual uses (Rabardel 1995). In line with the ergonomic view of technology, human actors can be considered as highly creative in their usages of technology. Usage modes are inherently personal; they depend on the user’s technology proficiency, experience, personal history, as well as other factors. For this reason, actual usage modes can hardly ever be fully anticipated by technology designers. Users, however, do not have total freedom of maneuver or, to put it differently, there is a limit to the malleability of uses, due to the physical characteristics of the artifact. Evaluation of the impacts or effects of IT on organizations - a common theme in research literature and in practice - is difficult due to the lack of a utilizable and commonly accepted IT evaluation model, despite the overabundance of techniques available. To add to this difficulty, companies vary in IT literacy, use and intensity, which makes any deterministic evaluation model hazardous. The major point of interest in this study is the dual nature of technology. Technology is structured by humans, firstly by the actors who design it, and secondly through the variety of usages that human actors then make of it. Technology, in turn, structures human action in that it both constrains and enables it (Orlikowski 1992, 2000). This dialectical model, adapted from Giddens’ theory of structuration (1984), is inspired by an analogy with language: speech (cf. technology-in-use) structures grammar; it either reinforces its existing rules or causes grammatical evolution. In turn, grammar (cf. the artifact and its prescribed uses) structures speech in that it simultaneously enables and constrains it. For DeSanctis and Poole (1994), the “spirit” of a technology is the general intent with regard to the values and goals underlying a given set of features. “The spirit is the official line which the technology presents to people regarding how to act when using the system, how to interpret its features, and how to fill gaps in the procedure which are not explicitly specified.” (p. 126).
Accordingly, the prescribed uses of the KM system, as presented by the firm’s top management and the system’s promoters, encapsulate the structures of domination, legitimation, and signification that they wish to promote (Orlikowski 2000). DeSanctis and Poole (1994) further argue that the “use and re-use of technology structures […] lead, over time, to their institutionalization […]. As technology structures are applied in group interaction, they are produced and reproduced. Over time, new forms of social structure may emerge in interaction […]. Once emergent structures are used and accepted they may become institutions in their own right and the change is fixed in the organization.” These considerations lead us to the formulation of a first research proposition: Research proposition 1: the routinization of professionals’ uses of the KM system leads to the institutionalization of the structure of domination, legitimation and signification virtually encapsulated in the KM system.

Lastly, it is worth considering the unintended consequences of the adoption of those structural properties that have been encoded into the KM system. Like Giddens (1990), who focuses on the reflexivity of knowledge and its unintended effects, which “transcend the intentions of those who apply it to transformative ends” (p. 54), my focus is on analyzing the unintended transformations originated by the enactment of ABC & Associates’ KM system. Research proposition 2: the transformations originated by the enactment of ABC & Associates’ KM system were not all expected by the firm’s leaders.

2.2 Rationalization of work and professionalism
A number of terms with distinct semantic nuances gravitate around the concept of rationalization (industrialization, formalization, standardization, normalization, homogenization, etc.). Gadrey (1994) distinguishes two types of rationalization strategies. The first form of work rationalization is called “cognitive” and has three intertwined modalities: (1) the classification of recurrent client issues, (2) the formalization of diagnostic and problem-solving methods and techniques, and the cataloging of the service offering, and (3) the adoption of a portfolio of “organizational routines” as defined by Nelson and Winter (1987). According to Gadrey, this type of rationalization has always existed – to varying extents – in professional service firms, whether they be large or small. The second rationalization model proposed by Gadrey (1994, 1999) has a neo-Taylorian basis and can be considered comparable to the concept of “industrialization”. This form of rationalization is usually implemented by mechanistic organizations evolving in relatively stable environments, such as Mintzberg (1980)’s Administrative Bureaucracies. The main characteristics of this rationalization model are (1) the specialization of tasks and work processes, (2) the production of standardized products, or services that the author refers to as “quasi-products”, (3) the predominance of the Technostructure (Mintzberg 1980) and (4) the adoption of hierarchized control mechanisms. This model is usually accompanied by extensive use of quantitative measures of performance and productivity. According to Gadrey (1994, 1999), the adoption by PSFs of a purely neo-Taylorian model of rationalization would prompt the end of the professional identity and transform the firm into a mechanistic bureaucracy, whereas adoption of the professional rationalization model can accommodate the spirit of professional enterprise. The concept of “rationalization” is nebulous, but the same ambiguity applies to the concept of “profession”: there is no standard definition available and few propositions have the clarity and comprehensiveness of Freidson’s (1994, 2001). According to this author, professions are inherently local and contingent on national socio-historical contexts, but there are five critical conditions for establishing and supporting “professionalism” in most countries: (1) “an esoteric body of knowledge requiring ‘considerable discretion’”, (2) “an occupationally controlled division of labor” in which occupations struggle for and negotiate “territories” between themselves, independently of labor consumers – as explained by Abbott (1988) in his famous “system of professions”, (3) an occupationally centralized and controlled labor market “organized by a monocratic rational-legal authority that determines the qualifications required of those employed, the work they do, how they do it, and the way they are evaluated and compensated”, (4) “an occupationally controlled training program”, and (5) “an ideology serving some transcendent value”. In the last chapter of his 2001 book, Freidson deals with “the soul” and “the essence” of professionalism, i.e., professionalism-as-ideology, and points out the inherent clash between “commercialism”, reflecting the core values of competition and the profit motive, and “professionalism”, grounded in values such as duty and diligence, skill, advocacy and altruism.
For the author, the core difference between professional and non-professional services lies in professionals’ “independence of judgment and freedom of action”. Professionals have the independence and expertise to make choices for others, and even violate the wishes of their clients. This “freedom to judge and choose the ends of work, is […] what expresses the soul of professionalism.” (p. 217). The question therefore arises of whether the rationalization – hypothesized above – of ABC & Associates’ activities might be perceived by its employees as a transgression or a transformation of their identity as professionals:

Research proposition 3: The use of a KM system has contributed to the transformation of the professional identity of ABC & Associates’ employees.

3. Method
To address the above research propositions, a case study was conducted at ABC & Associates from April through October 2005. Various sources of information were triangulated and Orlikowski (1992)’s structurational model of technology was applied.

3.1 Research field and sources of data
ABC & Associates has more than 100 offices throughout France. Its various provincial offices deal with local accounts, the Parisian office concentrates on national accounts, and the international office (IO) focuses on multinational clients and cross-border operations. Of all ABC & Associates’ main offices, only the IO uses a formal KM system, which is the reason why the study focuses solely on this entity. ABC & Associates employs more than 1200 licensed lawyers, 230 of whom work for the IO. The KM system developed by the IO in 1999 was designed to meet specific needs defined by a team of managers and partners. Semi-structured interviews were conducted with the creators and promoters of the system, so as to grasp the structural properties encoded into the technology and understand its “prescribed uses” and “expected effects”. The interviewees were ABC & Associates’ managing partner, the partner promoting the KM system, the system manager and five additional partners designated by the managing partner. Next, internal documents dealing with the launch and deployment of the firm’s KM system were analyzed. All system training materials and the internal memos that were sent to foster its use from 1999 to 2005 were compiled and examined, to assess any evolution in the system’s prescribed use and expected effects. Thirdly, the database of hits and downloads of documents stored in the system was analyzed over a period of three months – April through June 2005 – which represented 17,861 observations, and five professionals were observed in their day-to-day interactions with the system during the month of July 2005, in order to explore the “actual uses” of the system and assess its unexpected effects.
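As an aside, the kind of aggregation applied to such a usage log can be illustrated with a few lines of Python; the log fields and values below are invented for the example and do not reproduce the firm's actual data or schema.

```python
from collections import Counter

# Hypothetical extract of a hits-and-downloads log; purely illustrative records.
log = [
    {"user": "junior_01", "section": "Internal Contributions", "action": "download"},
    {"user": "senior_07", "section": "Library", "action": "hit"},
    {"user": "junior_01", "section": "Internal Contributions", "action": "download"},
]

# Count downloads per section to see which parts of the system are actually used.
downloads_by_section = Counter(
    entry["section"] for entry in log if entry["action"] == "download"
)
print(downloads_by_section)  # Counter({'Internal Contributions': 2})
```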

3.2 Data analysis model
To analyze the data collected as described above and address the research propositions presented earlier, an adapted version of the model proposed by Orlikowski (1992) was used, as shown in Figure 1. Orlikowski’s original triadic model outlines the dialectical interactions between human actors, structure and technology. It shows that technology is an outcome of human actions such as design, development, appropriation and modification (arrow 2). At the same time, technology is a medium of human action, since it both facilitates and constrains human action through the provision of interpretive schemes, facilities and norms (arrow 3). Besides this, institutional properties influence human actors in their interactions with technology (arrow 1), and interaction with technology influences the institutional properties of the organization (arrow 4) by reinforcing or transforming structures of signification, domination and legitimation. In order to observe the interactions designated by those four arrows, this study imports the concepts of “prescribed uses” and “actual uses” from the ergonomic literature (Rabardel 1995), and considers two dimensions of effects: structural versus processual effects on the one hand, and expected versus unexpected effects on the other hand, as shown in Figure 1. Orlikowski has already considered these two dimensions of effects in her 2000 paper, but she did not clearly relate them to the various elements of her model, which makes it difficult to understand how she was able to observe such effects.

Figure 1: Structurational model of technology, adapted from Orlikowski (1992)

4. Results
First, the artifactual component of the system is described, highlighting the uses prescribed by its designers and promoters (see Arrow 1 in Figure 1). This is followed by a presentation of the effects that the system’s designers and promoters had anticipated when they launched the technology (see Arrows 3a and 4a in Figure 1). The various enactments of the technology actually made by users (see Arrow 2 in Figure 1) are then discussed, with illustrations of some of the effects observed (see Arrows 3b and 4b in Figure 1). Finally the three research propositions are addressed, and a discussion ends the paper.

4.1 The knowledge management system
The IO’s KM system comprises four sub-sections, respectively called “Internal Contributions”, “Tax and Legal News”, “Library” and “Marketing”. The “Internal Contributions” section contains the legal opinions sent out to clients (necessarily validated by at least two partners). Every lawyer is supposed to be self-disciplined and upload each of his/her opinion letters into the system either directly or with the help of a secretary. The “Tax and Legal News” section contains press articles and technical news selected and uploaded either by the professionals themselves or by the IO’s librarians. The “Library” section includes solutions to standard tax and legal questions, checklists, information notes, or “subject files” prepared by junior staff on chosen legal issues under the supervision of a “knowledge committee”. The last section, called “Marketing” or “Client Base”, is designed to allow professionals to store their client contact names and report their activities (time spent, type of work performed, number of professionals involved, status of work, etc.) by line of business, line of service, company name, client contact name and partner in charge. The whole system is equipped with a search engine that allows users to perform a “simple” or “advanced” search based on keywords and other criteria, through all four sections of the database simultaneously if required. Filters can also be activated, to restrict search results to documents that are relevant for certain specialist areas, such as “Value Added Tax” for example. Certain “lines of service” or “lines of business” use these filters as virtual portals to the firm’s specialized knowledge.
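To make the search-and-filter behaviour described above more tangible, the following minimal Python sketch shows one way a keyword search restricted by a specialty filter and by sections could work. The sample documents, field names and matching logic are hypothetical illustrations, not the firm's actual system.

```python
# Illustrative sketch of a filtered keyword search over the four sections of a KM repository.
# Section names follow the paper; document contents are invented for the example.

documents = [
    {"section": "Internal Contributions", "specialty": "Value Added Tax",
     "title": "Opinion on cross-border VAT grouping", "text": "vat grouping cross-border supply"},
    {"section": "Library", "specialty": "Corporate Law",
     "title": "Checklist for share transfers", "text": "share transfer due diligence checklist"},
]

def search(query: str, specialty: str = None, sections: set = None):
    """Return titles of documents whose text contains every query keyword,
    optionally restricted to one specialty filter and/or a subset of sections."""
    keywords = query.lower().split()
    hits = []
    for doc in documents:
        if sections and doc["section"] not in sections:
            continue
        if specialty and doc["specialty"] != specialty:
            continue
        if all(kw in doc["text"].lower() for kw in keywords):
            hits.append(doc["title"])
    return hits

print(search("vat cross-border", specialty="Value Added Tax"))
# -> ['Opinion on cross-border VAT grouping']
```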

4.1.1 Prescribed uses and corresponding structures
Interviews reveal that the uses prescribed by the system’s developers and promoters embody a number of structures, listed in Table 1.

Table 1: Prescribed uses and corresponding structures

4.1.2 Expected effects
The analysis of internal documents shows that the reasons why the system was developed in 1999 were not explicitly communicated to future users. The a-posteriori justifications obtained from the interviewees cover four main arguments, which correspond to the effects that they expected from the adoption of the KM system. The first implicit motivation was to legitimize the reorganization indirectly imposed by the international tax and legal network to which the IO belonged at that time. The IO needed to transform its ad-hoc structure (Mintzberg 1980) into a matrix structure based on “lines of business” and “lines of services”. This organizational change would not have had any substance or “body” without the identification and categorization of specialized knowledge and solutions at the intersections of this matrix, and this was made possible by the KM system.

The interviewees also supplied three operational justifications for the existence of the system: improvement of productivity, improvement of monitoring and risk management procedures due to the increased visibility of employees’ work via the system, and finally, improvement of the technical quality of opinion letters, i.e., lawyers’ end-products, the materialization of the service that they deliver to their clients.

4.2 Technology enactments and observed effects
Observations and interviews reveal that users exert great pressure to have the tool modified to suit their own vision, which in many instances differs from the structural properties virtually encoded into the “prescribed uses” of the system.

4.2.1 Structure of legitimation
The system was initially designed to grant all lawyers unrestricted access to any kind of content. However, users insisted on a less “open” model in the second release of the system, where access to content could be restricted according to the user’s hierarchical level. Their justification for this functional evolution of the system was that junior professionals might not be able to make good use of highly technical tax and legal strategies. The system’s promoters conceded this point, seeing it as the necessary price to pay for ensuring the system was actually used. Today, however, this access restriction feature is not widely used. Ninety-five percent of the opinion letters are freely downloadable by everyone, suggesting that – apparently – the attempt by professionals to resist the institutionalization of the “opinions-are-owned-by-the-firm-and-should-be-accessible-to-all” normative property has failed. However, the interviews reveal that people continue to produce their opinions “from scratch”, not so much because they are hostile to the industrialization of their practices – which they are – but because they seldom find pre-packaged solutions to suit their needs when they browse the system. Two explanations have been put forward by interviewees: the contingent nature of lawyers’ work and the potential withholding of the most valuable solutions by lawyers who derive power and prestige from the asymmetry of knowledge they are thus able to maintain. These findings show that not all professionals have adopted the second normative property sought by the system’s promoters, i.e., the notion that lawyers should leverage the firm’s existing experience rather than starting their work from scratch. Junior lawyers tend to abide by this new rule more than senior lawyers, and professionals operating in “industrializable” areas of expertise are more prone to apply it than those who feel that their specialty area requires an ad-hoc production mode.

4.2.2 Structure of domination
The perception by some interviewees that a certain number of experts carefully avoid sharing their most valuable opinions through the system shows that the traditional professional power structure has somehow survived in the organization, despite the efforts of the system’s promoters to unlock the doors of each professional’s knowledge closet. Knowledge asymmetry is still used to obtain and maintain power. The “democratic” power structure encoded into the system is thus only imperfectly espoused. What is perceived as a threat is not so much the “panoptical” control intended by the system’s promoters as the potential “robbery” of expertise the system is believed to facilitate. This fear is even greater as regards client contacts. Senior professionals consider client contacts as their most important asset. Sharing personal client contact names and details in a central database available to their peers is thus perceived as a major violation of professional powers. In addition, people are very suspicious of what others may do with their contacts. The slightest “faux pas” might result in a client the firm took great pains to win turning to competitors. For these reasons, not openly mentioned during the interviews but perceptible “between the lines”, the “Client Base” section of the KM system has been rejected. By collectively refusing to use this section of the system, the IO’s lawyers display their resistance to the new power and control structure mediated by the system, but without managing to fully preserve their traditional power and control structure. Professionals at the IO are not autonomous or independent; they are directly controlled by their lines of services and lines of businesses, and are indirectly monitored – although less overtly – by potentially any of their peers.

The resultant structure is a hybridization of both forces, the conservative professional force that aims for autonomy and self-discipline, and the bureaucratic force that seeks authority and control. In this particular structure, the IO thus oscillates between Gadrey (1999)’s industrial and professional rationalization models.

4.2.3 Structure of signification
Our analysis reveals that two conflicting interpretive schemes coexist within the IO. The first is a hybridization of professional and bureaucratic values, whereby “productivity” and “cost effectiveness”, two bureaucratic values, have been added to the traditional professional values of autonomy, freedom of judgment, diligence, skill, advocacy and altruism. The professionals who best embody this mixture of values are those who, on top of their day-to-day legal and tax advisory work, also take on managerial responsibilities such as heading knowledge committees, participating in the IO’s research and development entity set up to design new tax and legal solutions, or involvement in the management of lines of businesses and lines of services. These lawyers recognize that the bureaucratization of the firm is a painful but inevitable process, in which junior lawyers are more “comfortable” than senior professionals. The second interpretive scheme that can be found at the IO is more conservative. Lawyers in this scheme will share standard knowledge, or “common knowledge” as Grant (1996) puts it, but resent sharing specific, high-value knowledge outside the small circle of their community of practice. They take a proprietary view of knowledge management, defending what they see as their own private property, and are anxious that others could potentially “steal” their hard-won knowledge.

4.2.4 Expected and unexpected effects
Let us now consider the two observable dimensions of effects (Table 2).
Table 2: Expected and unexpected effects of technology enactments on the International Office’s structure and production process

The results presented above allow us to address each of the three research propositions. (1) Contrary to our forecast, users’ technology enactments have caused the transformation of a number of structures, the reinforcement of other structures, and quite simply the rejection of a few other structures encoded into the system. (2) Although some of the transformations, reinforcements and rejections mentioned above had been expected by the system’s promoters, some had not been anticipated, as summarized in Table 2. Lastly, (3) the use of a KM system has fostered a transformation in the firm’s professional identity, by bringing professional and bureaucratic values to coexist.

This is perceived by those lawyers who are hostile to the rationalization of the firm’s production process as a betrayal of the “soul of professionalism” (Freidson 2001).

5. Discussion
One important result revealed by this study is that professional service firms’ attempts to rationalize their operations may cause professionals either to hybridize their identity, adopting a personal blend of bureaucratic and professional values, or to reinforce their traditional identity when they perceive this attempt as a violation of the very “soul of professionalism”. One potential avenue for future research would be to analyze the consequences of this identity clash in professional service firms. Secondly, it would be useful to go further and explore the concept of “mass-customization” in professional service firms. Mass-customization can take two forms: (1) individualization, which involves the adjunction of specific, tailored elements to a standardized product or service, or (2) “massification” or generalization of a product or service initially designed to serve specific client needs (see Bressand, Distler and Nicolaïdis 1989). Studying the ways these mass-customization production strategies are being applied by professional service firms would give a better understanding of the rationalization of knowledge-intensive services. Thirdly, this study shows that the implementation of a KM system can be regarded as a solution to a control crisis in professional service firms, but that the very use of such a system causes the emergence of a new “control crisis”, in line with Beniger’s theory (1986). At ABC & Associates, junior professionals tend to turn to the system too readily, and re-use tax and legal opinions produced by more senior lawyers without having to make the effort to fully understand their clients’ issues. Besides, the law changes quickly, and the opinion letters available in the system do not carry a “sell-by” date that could alert system users to the risk they run in re-using them. It would be informative, in future studies, to examine this control-revolution / control-crisis paradox further, and analyze how the control crisis induced by the use of a KM system in professional firms could be overcome. Fourthly, the traditional master-apprentice model, which used to be a successful way of training and developing professionals, is endangered by two factors: the growing size of professional service firms is causing individual bonds between junior and senior professionals to fade, and the availability of a KM system leads junior professionals to acquire a document re-use reflex. This reflex engenders a new type of skill which is a source of cost-effectiveness for the firm but, at the same time, may damage junior lawyers’ ability to fully understand their clients’ issues, as well as their ability to develop creative solutions from scratch when need be. It would thus be worthwhile to conduct an in-depth analysis of the effects that KM systems can have on the evolution of the PSF’s professional training and development model.

References
Abbott, A. (1988) The System of Professions: An essay on the division of expert labor. Chicago: University of Chicago Press.
Bain & Co. (2005) Management Tools & Trends survey. Available at: http://www.bain.com/management_tools/home.asp
Bang, A. (2005) Knowledge Management in Practice: Examining knowledge as modes of production. In Buono, A. F., and Poulfelt, F. (Eds), Challenges and issues in knowledge management. Greenwich, Connecticut: Information Age Publishing, pp. 317-335.
Barley, S. (1986) Technology as an occasion for structuring: Evidence from observations of CT scanners and the social order of radiology departments. Administrative Science Quarterly, Vol. 31, No. 1, pp. 78-108.
Boisot, M. H. (1995) Information space: A necessary unity. New York: E. P. Dutton.
Bressand, A., Distler, C. and Nicolaïdis, K. (1989) Vers une économie de réseaux. Politique Industrielle, Winter, pp. 155-168.

DeSanctis, G., and Poole, M. S. (1994) Capturing the complexity in advanced technology use: Adaptative structuration theory. Organization Science, Vol. 5, No. 2, pp. 121-147.
Drucker, P. F. (1993) Post-capitalist society. Oxford: Butterworth Heinemann.
Freidson, E. (1994) Method and substance in the comparative study of professions. Conference on Regulating Expertise, Paris, April 14, 1994.
Freidson, E. (2001) Professionalism: The third logic. Chicago: University of Chicago Press.
Gadrey, J. (1994) La modernisation des services professionnels. Rationalisation industrielle ou rationalisation professionnelle? Revue Française de sociologie, Vol. 35, pp. 163-195.
Gadrey, J. (1999) Flexibilité et professionalisation du travail dans les services: Des strategies et des modèles distincts. Economies et Sociétés, Série Economie et Gestion des Services, Vol. 1, pp. 117-141.
Giddens, A. (1984) The constitution of society. Cambridge: Polity Press.
Giddens, A. (1990) The consequences of modernity. Stanford: Stanford University Press.
Grant, R. M. (1996) Toward a knowledge-based theory of the firm. Strategic Management Journal, Vol. 17, pp. 109-122.
Hatchuel, A., and Weil, B. (1992) L'expert et le système. Paris: Economica.
Huon, Y. (2004) Professions juridiques. Etude de marché Xerfi700.
Mintzberg, H. (1980) Structure in 5’s: A synthesis of the research on organization design. Management Science, Vol. 26, No. 3, pp. 322-341.
Nelson, R. R., and Winter, S. G. (1987) An evolutionary theory of economic change. Cambridge, MA: Harvard University Press.
Nonaka, I. (1991) The knowledge creating company. Harvard Business Review, Vol. 69, No. 6, pp. 96-104.
Nonaka, I. (1994) A dynamic theory of organizational knowledge creation. Organization Science, Vol. 5, No. 1, pp. 14-37.
Orlikowski, W. (1992) The duality of technology: Rethinking the concept of technology in organizations. Organization Science, Vol. 3, No. 3, pp. 398-427.
Orlikowski, W. (2000) Using Technology and Constituting Structures: A practical lens for studying technology in organizations. Organization Science, Vol. 11, No. 4, pp. 404-428.
Polanyi, M. (1958) Personal knowledge: Towards a post-critical philosophy. New York: Harper Torchbooks.
Rabardel, P. (1995) Les hommes et les technologies. Approche cognitive des instruments contemporains. Paris: Armand Colin.
Spender, J.-C. (1996) Making knowledge the basis of a dynamic theory of the firm. Strategic Management Journal, Vol. 17, pp. 45-62.
Starbuck, W. H. (1992) Learning By Knowledge-Intensive Firms. Journal of Management Studies, Vol. 29, No. 6, pp. 713-740.
Stumpf, S. A., Doh, J. P., and Clark, K. D. (2002) Professional service firms in transition: Challenges and opportunities for improving performance. Organizational Dynamics, Vol. 31, No. 3, pp. 259-279.
Trepo, G. (1987) Introduction and diffusion of management tools. European Management Journal, Vol. 5, No. 4, pp. 287-293.

Managerial Perceptions of Linking Intellectual Capital and Strategy
Maria do Rosário Cabrita
Faculty of Science and Technology, Universidade Nova de Lisboa, Portugal
[email protected]
Abstract: Intellectual capital is becoming increasingly important to the scientific community as well as to the business world. However, this enthusiasm has not been matched by an understanding of how this occurs and what conditions can encourage it. The purpose of this paper is to bring together the visions of different Portuguese bank managers and to make their differences explicit. Portuguese bank managers recognize the strategic importance of IC, though its components are evaluated differently because different organizations develop different visions about the value of resources and about industry strengths and weaknesses. We argue that intellectual capital is context-specific, unique to each organization, and more like a characteristic than a property of a company.

Keywords: Intellectual capital, core competences, strategy, value creation, banking sector.

1. Introduction
The knowledge economy increasingly relies on the diffusion and use of knowledge, as well as its creation. The growing importance of knowledge to the success of enterprises and to national economies as a whole is reflected in a dramatic increase in literature that addresses the role and nature of knowledge management and intellectual capital (IC). In spite of this increase and the resulting vibrancy within the topic, it suffers from a relative lack of meta-level concepts which help one put intellectual capital into practice (Wexler, 2002). Companies poorly understand the relevant IC components, and therefore are not able to adequately identify, measure, report and manage their IC (Petty and Guthrie, 2000). Despite these limitations, the empirical evidence suggests that successful players in competitive markets are those that have access to a corpus of unique – or at least difficult-to-replicate – capabilities and competences (Andriessen, 2001; Eustace, 2003; Viedma, 2004), which are generally bundled together and interdependent to such an extent that they are difficult to isolate and value. Helfert (2000) argues that there are key elements that stand out as significant in the creation of value in intellectual capital assets. These “value-drivers” represent the source of value of intellectual capital, and understanding and identifying their underlying characteristics would allow us to effectively manage the value chain of intellectual capital. Hoskisson et al. (1999) suggest that single industries provide a particularly important context for examining resources critical to the industries and markets in question. However, Barney (1986) argues that strategic choices should flow mainly from the analysis of the firm’s unique skills and capabilities rather than from an analysis of its competitive environment. For this author, differences in firms’ expectations concerning the future value of a strategy are the central source of above-normal returns from acquiring resources in strategic factor markets (i.e. markets where the resources necessary to implement a strategy are acquired).
The purpose of this study is to develop and validate a measurement instrument for IC and to understand whether managers perceive a dimension of IC that can be considered strategically important for the Portuguese banking industry. The findings reported in this paper are based on data collected in two phases. In the first phase, data were collected from a survey administered to 53 banks. The second phase included 22 semi-structured interviews with senior bank managers, aiming to identify the most relevant IC components for the banking industry. In this step the respondents were expected to identify which intangibles contribute most strongly to the business performance of their banks. Our study aims to provide a deeper understanding of the strategic perspective of intellectual capital. We argue that IC is context-specific, unique to each organization, and more like a characteristic than a property of a company. The relative importance companies place on their IC components differs even among organizations operating in the same sector. This is because each firm exists within a context (vision and strategy) that shapes its view of what is or is not of value.

2. Theoretical background
2.1 Defining IC
It is difficult to define IC due to its intangible and dynamic nature. Several definitions of IC have been proposed (Roos et al., 1997; Stewart, 1997). However, there is presently no universally accepted one, although it would appear that researchers and practitioners have the same broad set of theoretical and practical assumptions in mind. They also realize that, in principle, great economies and efficiencies could be achieved if knowledge could be managed and distributed efficiently around the organization. To simplify, we argue that there are at least three basic components that emerge from the various definitions of intellectual capital:
 The knowledge itself is its basic element;
 The need for a structure to encourage, maintain, distribute and deliver that knowledge appropriately; and
 The perception that it is the effect of a collective practice.
A well-known definition is the one proposed by Edvinsson and Malone (1997:3): “IC is the knowledge applied to work to create value”. Viedma (2004) uses the terms “intellectual capital” and “core competences” interchangeably. This approach is in agreement with Andriessen (2001), who refers to IC as a unique bundle of intangible assets that are the basis of sustainable competitive advantage. It seems that the difference between the various IC definitions lies mainly in the content that IC may cover. According to Zhou and Fink (2003), a definition of IC should be broad enough to enable organizations to include the full range of their intangibles and specific enough to provide guidance for management to take action. Due to its dynamic nature, the concept of IC is often identified as a phenomenon of interdependencies and interactions (Marr et al., 2004; Cabrita and Vaz, 2006), a driver of business performance and value creation (Bontis, 1998; Cabrita and Bontis, 2007). Although frameworks are useful for classifying intangible assets, they do not reflect the flow of intellectual capital in an organization or provide a basis for its management. Some attempts to operationalize the concept have emerged in the literature, classifying IC into the categories of human capital (HC), structural capital (SC) and relational capital (RC) (Edvinsson and Malone, 1997; Stewart, 1997; Bontis, 1998).
Human capital is the brain and soul of an organization. It is considered the primary element of intellectual capital (Choo and Bontis, 2002; Cabrita and Bontis, 2007) and the most important source of sustainable competitive advantage (Nonaka and Takeuchi, 1995). Employees generate IC through their competence, attitude and intellectual agility (Roos et al., 1997). Competence includes skills and education, while attitude represents the behavioural element of the employee’s work. Intellectual agility enables one to change practices and to think of innovative solutions to problems.
Structural capital is what remains in the company when employees go home at night (Roos et al., 1997). It comprises internal processes, organizational structure, databases, culture and all that enables organizations to make their human capital more productive. The role of organizations is to provide the necessary structure for individuals to collaborate in a way that leverages their talent and existing market opportunities in order to create economic value. The focus is on getting a higher leverage of the human capital through structural capital, producing a “multiplier effect” (Edvinsson, 2002).
Relational capital is the knowledge embedded in the network of relationships that an organization develops by conducting its business. This network of relationships is the product of investment strategies, individual or collective, consciously or unconsciously aimed at establishing useful relationships that can be converted into capital. In this networked value creation, customer relationships are decisive in converting IC into market value while customer satisfaction can maintain the business relationship, decrease price elasticity and improve the company’s image and
reputation (Fornell, 1992). In this context, trust among the members of the network is a major driver for moving the value of relational capital up and down.

2.2 Linking strategy and IC The literature on strategy stresses three common value creation themes. Porter’s (1980) view corresponds to the horizontal value creation processes in a competitive landscape. This activitybased perspective sees firms as entities that create value by transforming a set of inputs into more valuable outputs. Barney (1991) outlines a resource-based view of strategy and value creation in which firms are seen as heterogeneous entities characterised by their unique base of resources. When these resources are unique, valuable, rare, and hard to imitate, appropriate usage of these resources will help to maintain a competitive advantage for the business. Finally, Prahalad and Hamel (1990), founders of the core competence theory, state that the real future of a company lies in the optimal utilisation and maintenance of unique skills: core competences. In this theory, strategic success demands competences that continually evolve slowly and incrementally, requiring training to be learnt. IC perspective emphasizes the importance of matching those complementary visions of value creation. Viedma (2004) provides an important contribution to this aim, integrating in its Strategic Kowledge Benchmarking System (SKBS) the activity-based view, the resource-based view and the core competence perspective. The activity-based view (Porter, 1980) focuses in what the firm does, while the resource-based view (Barney, 1991) focuses in what the firm has. Additionally, the core competence theory (Prahalad and Hamel, 1990) converges to the crucial role of core competences, as particular competences “that embedded in the value chain’s core business activities deliver products and services with competitive advantages and high knowledge and IC content” (Viedma, 2003:217). Any company has a range of competences, though only some of them are fundamental to its performance and strategy. A firm’s specific core competences are not usually very numerous. Thompson and Cole (1997) consider that competitive success in a dynamic environment is dependent on three groups of competences: the content of the actual strategies; strategic change competences and strategic learning competences.The central question for strategy theory has always been how firms compete in their industries or in global markets and how they create value. Operating in a turbulent and unpredictable environment, firms realize that when formulating a strategy it is not enough to identify the competitive forces of industry (Porter, 1979) or to possess resources, which are unique and difficult for competitors to imitate (Barney, 1991). Additionally, firms must understand the strategic importance of the right combination of resources, capabilities and competences and complementary it is also important to know what the firm does with these resources. This means that the static view (what the firm has) needs to be complemented with a dynamic perspective (what the firm does). From a strategic perspective, IC is viewed as a valuable resource used to create and leverage organizational value, playing a central role in the formulation of strategy. Intellectual Capital is linked to questions about identity: “who we are and what we want to be” (Roos et al., 1997:62). Thus, any consideration on IC has little value unless it is linked to the firm’s strategy (Marr et al., 2003). 
While a company's strategy defines what IC is needed to achieve its strategic intent, the reverse is also true: a company's IC defines what strategy it can formulate and implement. Linking IC with strategy allows companies to identify the core IC needed to accomplish the strategy and the path to achieving the strategic intent.

2.3 Banking sector overview

Since joining the EU in 1986, Portugal has achieved a good track record in terms of real economic convergence, benefiting from the modernization of its economy and the financial stability provided by the euro. In this context, the financial sector has played a crucial role in the development and growth of the Portuguese economy.

The changing nature of the banking sector, where banks are moving from on-balance-sheet to off-balance-sheet activities, together with new challenges driven by the Basel II Agreement and the
International Accounting Standard (IAS) rules, has created a need for skills, technologies, and organizational models that are quite different from those of traditional lending. The Basel II Agreement looks to modernize the global framework for calculating regulatory capital in the financial services industry to protect it from shocks to the system caused by credit defaults, operational failures and market fluctuation. Its scope is much broader than any previous capital regime. It attempts to both standardize regulatory approaches and introduce more flexible and risksensitive measures for the calculation of regulatory capital. The International Accounting Standards are a set of globally accepted guidelines that seeks to unify the way in which accounting information is reported. Additional information about IAS can be found at http://www.iasb.org.uk. In the past, banks sought to improve their balance sheet and asset growth to increase profitability. However, since the Basle II Agreement, the emphasis is on asset productivity, capital efficiency and revenue growth. In this context, the key challenges to banks have been to develop and implement strategies to create the critical mass and capabilities needed to operate in the sector. Major players are, therefore, engaged in mergers and acquisitions to extend their geographical influence, create an adequate capital base, improve cost structures and extend their product range. In this context, banks share strategic factor markets where innovation and information technology (IT) represent key drivers to improve efficiency, productivity and competitiveness.

3. Research methodology

This study extends prior research (Cabrita and Vaz, 2006) by seeking to create a deeper understanding of how senior managers value the IC components and perceive their influence on the banking strategy formulation process. There are no precedents for a study of this topic in the Portuguese context. The study follows two phases.

The first phase was conducted prior to operationalizing the constructs, in order to enhance the construct validity of our measures. For this purpose, survey data were collected from a sample of 53 banks, all affiliated members of the Portuguese Bankers Association. Following conventional IC conceptual frameworks, human capital, structural capital and relational capital were used as independent variables. The questionnaire included questions related to all components of IC as well as business performance. Day (1999) states that business performance is a strategic step in establishing a firm's competitive position and intended market share. All measures are perceptual and a seven-point scale was employed.

The second phase includes 22 semi-structured interviews with senior managers, aiming to capture their perceptions about: (i) the strategic importance of IC; (ii) the most important IC components; and (iii) the drivers of the bank's business performance. A range of key informants was sought, including CEOs, regional directors and the directors of functional areas. Based on the concept of "strategic awareness" (Hambrick, 1981), the "key informant" methodology (Phillips, 1981) was employed because the organizational characteristics we intend to measure are only known by a small, select set of members in the upper echelons of a bank (Bontis, 1998; Bukh et al., 1999). Preliminary interviews allowed us to identify people (function and level) who possessed special qualifications (status, experience or specialized knowledge) to answer the questionnaire.

From the analysis of the survey data, we identified the items perceived to influence organizational performance, i.e., the IC elements that fulfil the condition of having high importance in the value creation process. Because senior managers have different expectations about the future value of a strategic resource, and accordingly adopt different strategies, the importance of each component of IC to the organization differs. In order to identify these specific elements, personal semi-structured interviews were conducted seeking to explore the following questions:
• Q1: Is IC strategically important for your bank?
• Q2: In your opinion, what are the most important IC components (HC, SC, RC) that enable your bank to deliver value to your stakeholders?
• Q3: In your opinion, what are the key drivers of the bank's business performance?

IC is socially complex and therefore a deeper understanding of its strategic importance requires a higher level of intrusion or involvement in an organization. As stressed by Rouse and Daellenbach (1999:964), "research in organizations has a distinct advantage over research on organizations"
and because of the causally ambiguous nature of the companies’ idiosyncrasies, only intrusive work can provide an improved understanding of how and why organizations develop their IC. A multi-method approach to data collection has been emphasized in the field of strategic management (Levitas and Chi, 2002) and IC (Petty and Guthrie, 2000) as well.

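As a concrete illustration of the scale-validation step reported in the next section, the sketch below computes Cronbach's alpha for a multi-item, seven-point construct. It is not the authors' actual analysis: the function, the item matrix and the respondent scores are invented purely for illustration.

# Minimal sketch (not the authors' analysis): Cronbach's alpha for a
# multi-item, seven-point construct. All data below are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five hypothetical human-capital items answered by eight respondents (1-7 scale)
hc_items = np.array([
    [6, 7, 6, 5, 6],
    [5, 5, 6, 5, 5],
    [7, 6, 7, 6, 7],
    [4, 5, 4, 4, 5],
    [6, 6, 5, 6, 6],
    [3, 4, 3, 4, 3],
    [7, 7, 6, 7, 7],
    [5, 6, 5, 5, 6],
])
print(f"Cronbach's alpha: {cronbach_alpha(hc_items):.2f}")  # acceptable if above 0.7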
4. Discussion of results

4.1 First phase of the study

The measurement instrument was pre-tested for clarity and relevance. To validate the instrument, a pilot test was carried out on a convenience sample of 178 senior managers. The 151 returned questionnaires helped us to purify the measures and retain the reliable items. A scale validation procedure was accomplished using:
• the analysis of item correlations;
• the analysis of item-total correlations;
• the analysis of Cronbach's alpha; and
• an exploratory factor analysis with Varimax rotation.
Inter-item consistency reliability was established using Cronbach's alpha coefficients, which are greater than 0.93 for all constructs, exceeding the level of 0.7 considered good for exploratory research (Nunnally, 1978). Convergent validity is verified, given that all factors yield significant t-values (p

The control cycle (input -> process -> output) has long been used as a basis for management control systems design. As a crude skeleton of management activity, it provides process design with a cyclic model reiterating the stages of control. This paper will adapt this control cycle to reflect the issues discussed and describe the governance processes necessary to achieve the above objectives:

Set standards:
• Quantify, qualify or express standards, providing specific targets and accurate, predictable measurement for management and teams to rely on.

Deploy and operate:
• Conceptualise projects, using maturity levels to consider current maturity and agree on new targets, and criteria to design deliverables.
• Inform major decisions using maturity levels, and minor decisions using criteria.
• Schedule realistic milestones, to ensure project success (Kerzner 2003).
• Impose standards early, lessening resistance to change and improving the direction of projects and design.
• Apply governance to each project phase.
• Let different groups work towards different maturity levels.

Measure against standards:
• Measure on delivery of project outputs, at project milestones, during regular review of initiatives, and when determining rewards and recognition.
• Report management information constructively, with:
  • a criteria checklist for project deliverables, to team leaders or project managers;
  • a project progress report against knowledge criteria, to team leaders or project managers;
  • a monthly or quarterly progress report against knowledge criteria and maturity levels, to stakeholder management including unit managers and sponsors;
  • an annual report on average or aggregate maturity levels, provided to knowledge management sponsors and department heads;
  • regular review of knowledge management governance efficiency and effectiveness, to the knowledge management practice and relevant management.

Take corrective action:
• When planned or when triggered by deviation.
• Corrective action is planned and undertaken by the knowledge manager and line manager.
• Amend standards and processes if necessary.
• Recommend and implement changes.
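To make the measure-and-correct stages of this cycle more concrete, the following sketch shows one way a governance review might compare a project's reported maturity and criteria compliance against its agreed targets and flag deviations for corrective action. It is a hypothetical illustration, not part of the model as prescribed in this paper: the identifiers (Project, review, the numeric maturity scale) are invented, and the named maturity levels used in the case study are introduced in section 3.2.

# Hypothetical sketch of "measure against standards" and "take corrective action".
# Maturity is expressed here as an integer 0-4; the case study's named levels
# (none ... innovative) appear in section 3.2.
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    target_maturity: int                      # agreed at the concept phase
    measured_maturity: int                    # assessed at milestones and reviews
    criteria_met: dict[str, bool] = field(default_factory=dict)

def review(project: Project) -> list[str]:
    """Return deviations that should trigger planned corrective action."""
    deviations = []
    if project.measured_maturity < project.target_maturity:
        deviations.append(
            f"{project.name}: maturity {project.measured_maturity} "
            f"below target {project.target_maturity}")
    deviations += [f"{project.name}: criterion '{c}' not met"
                   for c, ok in project.criteria_met.items() if not ok]
    return deviations

# Example quarterly check feeding a progress report to stakeholder management
portal = Project("Safety knowledge portal", target_maturity=2, measured_maturity=1,
                 criteria_met={"accurate": True, "up to date": False})
for issue in review(portal):
    print(issue)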

3. Research methodology Research aims to ‘investigate thoroughly’ in an ‘active, diligent and systematic’ manner (Wikipedia 2006), to build broader understanding and pave the way for change (O’Leary 2004), and improve learning through teaching or generating new knowledge and theory (McNiff and Whitehead 2006). A research methodology appropriate for this model should: ƒ Evaluate whether governance can provide benefits to knowledge management. ƒ Evaluate the model’s suitability for knowledge management governance. ƒ Evaluate whether the model performs as intended. ƒ Determine the transferability of the model to other organisations. Researching in the field can be difficult. Whilst commercial pressures and exigent circumstances often prevent practitioners from employing formal research methods, research aims are nevertheless still important to clients and the professional practice. Action research is one of the more frequently used and appropriate methods in professional practice for the following reasons: There is considerable precedent for its use; “action research is about practitioners creating new ideas about how to improve practice, and putting those ideas forward as their personal theories of practice.” (McNiff and Whitehead 2006, p5) Governance in knowledge management is intended to be practical, offering situational improvements and outcomes that would be of significance to the organisation. This favours action research (O’Leary 2005). Problem and solution are often tangible and capable of articulation, and there is a need for and presence of political support. These too favour action research (O’Leary 2005). Knowledge management has a sociological nature (McAdam and Reid 2000), favouring qualitative research, a paradigm to which action research belongs. Solutions are usually developed and implemented as project deliverables, requiring immersion and time constraints, which favour action research (McNiff and Whitehead 2006). Development and implementation tends to be investigational. Cause-and-effect relationships may be difficult to determine given the organisation complexity and many influencing factors (Wong and Aspinwall 2005; Chourides et al. 2003). This again favours action research (O’Leary 2005). Balance must be reached between commercial delivery, organisation learning and incidental academic outcomes. Practitioners often intuitively follow action research’s iterative action-reflection cycle of observereflect-act-evaluate-modify (McNiff and Whitehead 2006, p9). This paper’s case study involved development of the governance model, and the following sequence was followed: ƒ Review literature for relevant techniques ƒ Design model using theory and input from organisation experts ƒ Define standards ƒ Design a pilot implementation with project team participation. ƒ Communicate designs to project participants and stakeholders and set localised targets. ƒ Participate in project implementation.
ƒ Assess project outcomes and analyse adoption and performance. ƒ Adjust the model and make any recommendations.

3.1 The case study

Sasol Synfuels in Secunda, South Africa, approached the consultants The Knowledge Studio in 2005 to rebuild its knowledge management. Sasol Synfuels is a division of Sasol Limited, a global petrochemical firm producing synthetic fuels, plastics and chemicals. Listed on two stock exchanges, the organisation employs 30,000 people in almost 50 corporate entities on several continents in the production of innovative products from coal and other hydrocarbons (Sasol Annual Report 2005).

3.2 Setting standards Governance requires alignment of performance management with stakeholder interests. To do this, definitions for five maturity levels were prepared by analysing strategic requirements in the organisation and reviewing various IT maturity models: ƒ None: no knowledge management activity. ƒ Basic: knowledge is identified, listed and perhaps codified or mapped. ƒ Managed: knowledge is mapped, codified, linked, expertise located and tools implemented. ƒ Sustainable: knowledge that improves or enables efficiency is being reused, best practices are identified and reused, and knowledge management has a measurable contribution in achieving cost savings and achieving productivity. ƒ Innovative: knowledge that improves effectiveness is being generated, reused, synthesised; enabling and assisting in achieving a measurable revenue or functional improvement. Information technology standards, quality standards (Schonberger and Knod 1997) and knowledge management standards (Curley and Kivowitz 2001; Davenport and Prusak 1998, Stewart 1997) were then consulted whilst preparing knowledge criteria that propose knowledge outputs and ‘products’ be: sufficient, reused and reusable, accurate, up to date, prompt and punctual, shared, available, accessible, usable, useful and extendable. Analysis followed by consultation with management confirmed these criteria were suitable. Time constraints during the engagement prevented immediate definition of each criterion. Criteria were later enhanced so as to be expressed in common terms, published and communicated so as to become ubiquitous and integrated, quantifiable or measurable, relevant, actionable, applied equally, predictable and objective. Designs for governance processes were analysed by the knowledge management team, then assessed and accepted by management. Processes were then prescribed to each new initiative. To reduce resistance to change, control overall was light and lenient deployment relied more on selling the concept than on directives.
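As a rough sketch of how the published standards described above could be captured in a machine-checkable form, the code below encodes the five maturity levels and the knowledge criteria and checks a knowledge 'product' against the criteria checklist. This representation is an assumption made purely for illustration; the paper itself prescribes no particular implementation.

# Illustrative encoding of the case-study standards (assumed representation).
from enum import IntEnum

class Maturity(IntEnum):
    NONE = 0          # no knowledge management activity
    BASIC = 1         # knowledge identified, listed, perhaps codified or mapped
    MANAGED = 2       # mapped, codified, linked, expertise located, tools in place
    SUSTAINABLE = 3   # reuse and best practices, measurable cost/productivity gains
    INNOVATIVE = 4    # knowledge generated and synthesised, measurable revenue impact

KNOWLEDGE_CRITERIA = [
    "sufficient", "reused and reusable", "accurate", "up to date",
    "prompt and punctual", "shared", "available", "accessible",
    "usable", "useful", "extendable",
]

def criteria_checklist(product: dict[str, bool]) -> list[str]:
    """Return the criteria a knowledge product does not yet satisfy."""
    return [c for c in KNOWLEDGE_CRITERIA if not product.get(c, False)]

# Example: a portal page reviewed against the checklist before publication
page = {c: True for c in KNOWLEDGE_CRITERIA}
page["up to date"] = False
page["sufficient"] = False
print(criteria_checklist(page))   # ['sufficient', 'up to date']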

3.3 Preliminary outcomes and findings Initial outcomes and findings were documented in consulting notes. The knowledge manager documented subsequent findings. These notes are summarised here according to project phase: Concept phase; during which projects are planned: ƒ Standards facilitated the practice roles of champion and strategist; by providing guidelines to for project direction, identifying improvement areas and suggesting solutions. ƒ Standards tended to improve knowledge management credibility, project motivation and feasibility by offering technical design criteria, more rigour and precision, more focussed tangible and unambiguous goals, and lowering perceptible risk through precision. ƒ Maturity levels helped guide and motivate adoption by demonstrating the clear path that other teams have successfully negotiated to lower apprehension.
ƒ Senior management received more reliable and detailed resource allocation, budgeting and strategic information than before. ƒ Illustrative examples included reuse of safety knowledge from areas with higher knowledge maturity levels, motivating safety knowledge consolidation efforts, and use in new project proposals for staff morale programs and an expertise locator. ƒ Findings from this phase show the model can improve setting of direction, alignment with stakeholder requirements and transparency of outcomes. Project phase; during which projects are designed and developed: ƒ Standards facilitated the practice role of architect, instead of developer or administrator. ƒ Criteria clarified goals, informed selection of templates and made project requirements more explicit. ƒ Maturity levels helped manage expectations, such as for ‘web sites’ where they recommended iterative development rather than a ‘big-bang’ approach. ƒ Illustrative examples include applications that recognised whether explicit knowledge must be shared (codified and stored) or tacit knowledge must be shared (the knower identified), and faster and better quality portal ‘web site’ development. ‘Clients’ were informed how criteria would affect page design, pages were developed to more consistent and higher quality, and content and style guidelines more promptly and even automatically complied with. ƒ Phase findings show that direction and control can be established early, the model does assist in delegation and monitoring, and alignment with strategy is possible. Suggested modifications include planning of maturity goals to reflect resource availability, risks and contingencies, and introducing a surprise element to ensure freshness and reduce monotony. Transition phase; during which the practice transfers ownership and control to the ‘client’: ƒ Use of knowledge criteria to test outputs and performance allowed the knowledge manager to consult rather than manage or participate, enabling practice oversight, quality control, efficient use of resources and skills transfer. ƒ Teams used knowledge criteria to better manage themselves, determine own readiness and evaluate own need for assistance. ƒ Maturity levels have been used to suggest future iterations, functionality, unresolved requirements and help manage budgets and expectations. ƒ Performance measured in the first few months against the maturity levels provided more specific, communicable feedback to management. ƒ Use of criteria such as ‘sufficient’ and ‘accurate’ is illustrated in a request for assistance in completing web pages prior to publication. ƒ Findings include that the model enabled performance management and governance, with monitoring and control of operational activity and team management possible. More stringent management of knowledge management may improve compliance and formalisation of performance management, but at the possible cost of adoption and resistance to change. Practice phase; during which teams manage knowledge independently: ƒ Standards allowed the practice to delegate rather than participate. Monitoring project criteria found a best-practice initiative’s knowledge to be organised, accurate, usable, punctual, up to date and accessible, but requiring audits to confirm capture and storage in one area. ƒ The model fostered progress. Evaluation at an aggregated level during a management review found the knowledge management practice had matured from none to basic in eight months and was expected to achieve managed within a further six months. 
ƒ Information supported management decisions. Progress helped justify a plan to select and train appropriate people to act as knowledge management coordinators, and performance information has justified a decision to survey validity and usefulness of portals every six months.
ƒ Findings are that the model enables output and process improvement, decision making, rapid monitoring of key issues and interactions, learning and review of the knowledge management practice. Other areas in which standards and governance have had a contribution: ƒ Promoting individual contribution has inspired a series of reward and recognition initiatives that use criteria to judge entries. The first, a “Day of Collaboration”, provides a forum for twelve exhibitors to share best practices with other employees in an informal atmosphere, and awards one plant-related and one people-related winner and four runner-ups. A second bi-monthly challenge receives peer nominations for employee knowledge sharing. Winners from both of these initiatives are entered for the Sasol Knowledge Sharing Luminous award, a prestigious company-wide gala evening. ƒ Knowledge sharing reward-and-recognition was chosen as a company wide Best Practice ƒ The knowledge manager has been appointed a member of the benchmarking team. Other factors may have contributed to knowledge management performance and governance: ƒ Renewed organisation impetus and attention may have encouraged compliance. ƒ Organisation predisposition to knowledge that may have eased adoption, lowered resistance, encouraged participation and reduced the need for knowledge management ‘marketing’. ƒ Previous knowledge management failure may have predisposed managers towards these more quantitative or rigorous approaches. ƒ Light levels of control exercised by knowledge management may have reduced resistance and encouraged participation.

4. Conclusions This paper has proposed a model for governance that consists of standards for direction and control and processes to apply these standards to knowledge management throughout its lifecycle. Knowledge management outcomes in the case study organisation cannot be ascribed solely to governance. Environment and research methodology prevent confirmation of cause and effect relationships. However outcomes, positive management feedback and comparison with previous efforts suggest that governance has been of benefit to knowledge management in the case study organisation. Findings suggest this model is strongly performance management oriented. It has successfully contributed to knowledge management direction and control with little modification since first introduction. The model has been applied to governance in reviewing of knowledge management performance and standards themselves, and improving alignment with strategy, transparency, delegation and monitoring. Issues related to organisation, tangibility of knowledge and measurability of knowledge management have been minimised. These confirm the model is capable of governance and suited to knowledge management governance. Assessment of the model’s performance is hampered by availability of similar projects to compare against, no quantitative performance audit, a qualitative research methodology and the short test duration. Overall results however suggest most intended outcomes have been achieved. It may therefore be concluded that governance has been beneficial to knowledge management in Sasol Synfuels, the model has subjected knowledge management there to governance, and has performed satisfactorily. Limitations do restrict transferability of the model to other organisations, and further research is warranted before making overly optimistic claims (Storey and Barnett 2000).

References Anthony, R.N. (1965), Planning and Control Systems, Harvard Business School Press, Boston. Arora, R. (2002) “Implementing KM – a balanced score card approach”, Journal of Knowledge Management, Vol 6 No.3, pp 240-249.
Australian Standards for Knowledge Management (2005), [online] http://www.standards.com.au/ Bailey, C. and Clarke, M. (2001) “Managing knowledge for personal and organisational benefit”, Journal of Knowledge Management, Vol 5. No.1, pp 58-67. Busi, M. and Bititci, U.S. (2006) “Collaborative performance management: present gaps and future research”, International Journal of Productivity and Performance Management, Vol 55 No.1, pp 7-25. Chourides, P., Longbottom, D. and Murphy, W. (2003) “Excellence in knowledge management”, Measuring Business Excellence, Vol 7 No.2, pp 29-45. Curley, K.F. and Kivowitz, B. (2001), “Knowledge Management: The Manager's Pocket Guide”, HRD Press, Boston Darroch, J. (2005) “Knowledge management, innovation and firm performance”, Journal of Knowledge Management, Vol 9 No.3, pp 101-115. Davenport, T.H. and Prusak, L. (1998) Working Knowledge, Harvard Business School Press, Boston. Davenport, T.H., De Long, D.W. and Beers, M.C. (1998) “Successful knowledge management projects”, Sloan Management Review, Vol 39 No.2, pp 43-57. Dayan, R. and Evans, S. (2006) “KM your way to CMMI”, Journal of Knowledge Management, Vol 10 No.1, pp 69-80. De Feo, J.A. (2006) “Breakthrough performance”, Handbook of Business Strategy, pp 209-218. Dobbs, R., Leslie, K. and Mendonca, L.T. (2005) “Building the healthy corporation”, McKinsey Quarterly Web exclusive, August, No.3. Grant, G.H. (2003) “The evolution of corporate governance and its impact on modern corporate America”, Management Decision, Vol 41 No. 9, pp 923-934. Hansen, F. and Smith, M. (2006) “The ethics of business strategy”, Handbook of Business Strategy, pp 201-206. Hefke, M. and Trunko, R. (2002) “A Methodological Basis for Bringing Knowledge Management to Real-World Environments”, Practical Aspects of Knowledge Management, 4th International Conference, PAKM 2002, Vienna, Austria, 2-3 December. Holsapple, C.W. and Joshi, K.D. (2000) “An investigation of factors that influence the management of knowledge in organizations”, Journal of Strategic Information Systems, Vol 9 No.s 2/3, pp 235-61. Housel, T. and Bell, A.H. (2001) Measuring and Managing Knowledge, McGraw-Hill Irwin, New York. Hsieh, T. and Yik, S. (2005) “Leadership as the starting point of strategy”, McKinsey Quarterly, No.1. Iftikhar, Z., Eriksson, I.V. and Dickson, G.W.(2003) “Developing an Instrument for Knowledge Management Project Evaluation”, Electronic Journal of Knowledge Management, Volume 1 Issue 1, pp. 55-62 http://www.ejkm.com IT Governance Institute (1996) “COBIT - Control Objectives for IT”, www.itgi.org, CobiT 4.0 [online] www.isaca.org. Kerzner, H. (2003) Project Management, Wiley, New York. Malhotra, Y. (2005) “Integrating knowledge management technologies in organisational business processes”, Journal of Knowledge Management, Vo. 9 No.1, pp 7-28. Marr, B. and Spender, J.C. (2004) “Measuring knowledge”, Measuring Business Excellence, Vol 8 No 1, pp 18-27. McAdam, R. and Reid, R. (2000) “A comparison of public and private sector perceptions and use of knowledge management”, Journal of European Industrial Training, Vol 24 No.6, pp 317329. McNiff, J. and Whitehead, J. (2006) All you need to know about Action Research, Sage, London. Mingay, S., Furlonger, J., Magee, F. and Andren, E. (1998) “The Five Pillars of IS Organizational Effectiveness”, Gartner, 18 November. Nolan, R.L. (1973) “Managing the computer resource: a stage hypothesis”, Communications of the ACM, Vol.16 No.7, July, pp.399-405. O’Leary, Z. 
(2005) Researching Real-World Problems, Sage, London. Onions, P.E.W. (2004), “Implementing Knowledge Management”, Knowledge Management in the Public Sector Conference, Indaba Hotel, Johannesburg, 1-3 September. OECD, Organization for Economic Co-operation and Development (2004), OECD Principles of Corporate Governance, Paris.
Patriotta, G. (2004) “On studying organizational knowledge”, Knowledge Management Research & Practice, Vol 2, pp 3-12. Prusak, L. (2000) “Review of good practices of knowledge management in firms and organisations”, Knowledge Management: The New Challenge for Firms and Organisations, 21-22 September, Courtyard Marriot Hotel, Ottawa, Canada. Rezaee, Z., Olibe, K.O. Minmier, G. (2003) “Improving corporate governance”, Managerial Auditing Journal, 18/6/7, pp 530-537. Sasol Annual Report (2005) [online] www.sasol.com. Scarbrough, H., Swan, J. and Preston, J. (1999) “Knowledge Management and the Learning Organization”, Report for the Institute of Personnel Development, London, October. Schonberger, R.J. and Knod, E.M. (1997) Operations Management: Continuous Improvement, Irwin, New York. Schuman, C.A. and Brent, A.C. (2005) “Asset life cycle management”, International Journal of Operations & Production Management, Vol 25 No.6, pp 566-579. SEI (2002) “Capability Maturity Model version 1.1”, Software Engineering Institute, Carnegie-Mellon University, [online] www.sei.cmu.edu. Shewhart, W.A. (1939) Statistical Method from the Viewpoint of Quality Control, Dover Publications, NY. Smart, P.A., Maull, R.S., Radnor, Z.J. and Housel, T.J. (2003) “An approach for identifying value in business processes”, Journal of Knowledge Management, Vol 7 No.4, pp.49-61. Sterling, J. (2003) “Translating strategy into effective implementation”, Strategy and Leadership, Vol 31, No.3, pp 27-34 Stewart, T.A. (1997), Intellectual Capital: The New Wealth of Organisations, Doubleday, New York Storey, J. and Barnett, E. (2000) “Knowledge management initiatives”, Journal of Knowledge Management, Vol 4 No.2, pp 145-156. Strassman, P.A. (2001) “Letting go of the profit motive”, Knowledge Management Magazine, August 6 Sussland, W.A. (2004 ) “Business value and corporate governance: a new approach”, Journal of Business Strategy, Vol 25 No.1, pp 49-56 van den Berg, C. and Popescu, I. (2005) “An experience in knowledge mapping”, Journal of Knowledge Management, Vol 9 No.2, pp 123-128. Whyte, G. and Bytheway, A. (1996) “Factors affecting information systems success”, International Journal of Service Industry Management, Vol 7 No.1, pp 74-93. Wiig, K.M. (1997) "Knowledge Management: An Introduction and Perspective", Journal of Knowledge Management, Vol 1 No.1, September Wikipedia (2006), “Research”, Wikipedia, The Free Encyclopedia, [online] www.en.wikipedia.org/w/index.php?title=Research Wong, K.Y. and Aspinwall, E. (2005) “An empirical study of the important factors for knowledgemanagement adoption in the SME sector”, Journal of Knowledge Management, Vol 9 No.3, pp 64-82. Yakhou, M. and Dorweiler, V.P. (2004) “Dual reforms: Accounting and corporate governance”, Managerial Auditing Journal, Vol 19 No.3, pp 361-377. Yoo, Y., and Ginzberg, M. (2003) “One Size Doesn’t Fit All: Knowledge Management Systems and Knowledge Sharing Practices in Global Learning Organizations”, Sprouts: Working Papers on Information Environments, Systems and Organizations, Vol 3, Issue 2, pp. 83-106. Zyngier, S.A., Burstein, F. and McKay, J. (2004) "Knowledge Management Governance: A Multifaceted Approach to Organisational Decision and Innovation Support", Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference


Innovation Management: Complexity and Systems Thinking Paul Otterson, Zoë Dann, Ian Barclay and Peter Bond Liverpool John Moores University, Merseyside, UK [email protected] [email protected] [email protected] [email protected] Abstract: New Product Development (NPD) is one of the most complex organisational environments that exist. The NPD process ranges from the simple repositioning of products in new markets through to the development of products that are entirely new to the world. In the latter case, the degree and level of complexity and innovation can be extreme because we are introducing a new component into a complex adaptive system. In this case, not only is the NPD process inherently difficult, the problem of complexity is extended by the nature of the product being developed and the market into which it is to be launched. It is the role of management that must bring together the product’s development process and launch it into the market successfully. In the last few years, the science of complexity theory has been developed and various attempts have been made to relate this to the NPD environment. This paper describes our work in this area and shows how we have developed a NPD Complexity Application Model (CAM) that represents the management processes encompassing the three dimensions of NPD:

ƒ Activity and process management ƒ Knowledge management, development and dissemination ƒ Learning development and application The paper then goes on to show how the use of systems thinking may be used to address the details of the CAM to allow practitioners to better understand the knowledge requirements of the NPD process. It shows how the use of soft systems thinking may be used to aid the practitioners’ understanding of the root definitions of NPD knowledge and to define customer requirements and educate customers. Keywords: New product development, complexity representation and application, systems thinking, knowledge management, learning organisation.

1. Introduction In many companies, NPD is both a creative process and also a highly formalised and systematic project. There are two fundamental concerns. First, that the product is truly right for its target market (acceptable to the targeted end-users). Second, whether the organisation/system that will deliver the product (and which is co-designed with it), will be truly capable of delivery. The complexity of NPD not only lies in the number of components that have to be drawn together to make the final product, but in the nature of the adaptive human system that needs to draw it together. Arguably, the complexity of NPD from a systems perspective is greater than more established and stable systems of production and would be nearer the ‘edge of chaos’. The introduction of product introduction management systems is considered an attempt to create order and predictability from an otherwise chaotic process, even though this may have the effect of reducing the freedom for creativity. Therefore a balance has to be struck between the formal (structured and tightly coupled) and the informal (less structured and loosely coupled) systems respectively both of which are integral parts of the same NPD system. The basic premise of our adaption of the CAM model (fig 1) to NPD is, that because of the scale and complexity of the NPD unpredicted events will occur, creating the opportunity to identify knowledge gaps. This presupposes that there is recognition of such events based on a dissonance between predicted and actual performance, Boisot explains this as the scanning process. (Boisot, 1995). The link points in the scanning process can be mapped to the organisation’s ‘intelligence’ function and its ‘environment’ in Beer’s Viable Systems Model, (a cybernetic systems approach to
modelling organisations). More simply put, it is the proactive activity of information gathering between the organisation and the wider community in which it simultaneously exists, serves, competes, etc. It is here that the mutual engagement of formalised and informal systems is needed for accumulating, converting and disseminating knowledge. We can define "formal systems" here as being principally populated from within the organisation, such as an internal project team based on the process stages of NPD (which may have specialist external members co-opted to it), and "informal systems" as principally populated by people external to the organisation, such as a Community of Practice (CoP), professional institution, etc. By encouraging both formal (internal) and informal (external) systems, a company develops a more holistic knowledge generation capability. To paraphrase Bond and loosely apply Ashby's Law of Requisite Variety, "The more an individual learns and the more they share, the greater the variety of situations and events they and their community are able to handle. The more problems they are able to solve, the more strategies their community has for dealing with unanticipated events" (Bond, 2004).

2. CAM in NPD

The basis of the CAM model's development, and the complexity theory on which it was based, was discussed in an earlier paper (Dann & Barclay, 2005). The following is a brief recap:
• Central core: the central core of the CAM uses as its basis the "flow" from Organisation to Evolution. This interacts with the organisation's formal and informal systems and processes.
• Formal aspects: the left-hand side "box" represents the organisation's formal knowledge, processes, systems and procedures that deal with foreseen/predicted potential events through contingency planning and systems.
• Informal aspects: the right-hand side "box" represents the organisation's informal knowledge and learning processes etc. that deal with unforeseen events through the capability of the organisation's people and culture. The spiral between formal and informal represents the continuous interplay between the two.
• Confirmation of knowledge: this has two dimensions. If the events "match" requirements and/or predictions, then the existing knowledge is valid and is confirmed. If events create a "mismatch", then the new, appropriate knowledge has to be incorporated, validated and confirmed. In either case, new learning may be absorbed.
• Incorporation of learning: again, this has two dimensions. Whether events "match" or "mismatch", new and appropriate learning may be incorporated into the organisation as required for improvement.
• Organisational development: here there are two extremes. If the formal processes work well, then the organisation's knowledge base is confirmed as being correct (i.e. matches). If the formal processes do not work well enough (mismatches), then the learning from the informal processes may be built into the formal processes.

Minor perturbations are addressed by the formal systems, which are capable of dealing with them on an ad hoc basis through internal teams; major perturbations, where the formal system is unable to cope (Ashby's Law: the disturbance in the environment has greater variety than the system has capacity to cope with), are aided by the inclusion of an informal system (CoP). It is the informal aspects of the model (the right-hand side "box") that exhibit the basics of complexity theory. They represent the organisation's informal knowledge and learning processes etc. that deal with unforeseen events through the capability of the organisation's people and culture and its interaction with informal systems (CoPs). This model is, in effect, a system that adapts through a process of 'self organisation' and selection into coherent new behaviours, structures and patterns; in short, a 'learning organisation' model.

For management, access to knowledge held and created both internally and in CoPs can be a driving force in the successful application to NPD. The managerial challenge is to allow CoPs to flourish, and so management must be comfortable with creating and joining them and with embedding the knowledge they bring into the formal systems. The managerial environment is a dynamic 'Human Activity System' that should learn, adapt and evolve. Understanding the relationship between the whole system, the organisation and its relationships to CoPs is vital, and it is here that the CAM model acts as a conceptual guide to the "ideal" system, one that self-organises, learns, adapts and evolves with its environment.

Figure 1: Complexity application model (diagram labels: organisation to direct and control; interactions; foreseen and unforeseen events; predicted and reactive adaptation; systems evolution; formal and informal (fluid) aspects; confirmation of knowledge; incorporation of learning; organisational development)

3. CoPs for NPD in SMEs

Communities of practice differ from the 'formal systems' that naturally form part of the NPD process (user focus groups, project task teams, etc.): their members share a common concern for what they do and continuously learn how to do it better through regular interaction.

"The term 'community of practice' is of relatively recent coinage, even though the phenomenon it refers to is age-old. The concept has turned out to provide a useful perspective on knowing and learning. A growing number of people and organizations in various sectors are now focusing on communities of practice as a key to improving their performance. Communities of practice provided a new approach, which focused on people and on the social structures that enable them to learn with and from each other. Today, there is hardly any organization of a reasonable size that does not have some form of 'communities of practice' initiative. A number of characteristics explain this rush of interest in communities of practice as a vehicle for developing strategic capabilities in organizations:
• Communities of practice enable practitioners to take collective responsibility for managing the knowledge they need, recognizing that, given the proper structure, they are in the best position to do this.
• Communities among practitioners create a direct link between learning and performance, because the same people participate in communities of practice and in teams and business units.
• Practitioners can address the tacit and dynamic aspects of knowledge creation and sharing, as well as the more explicit aspects.
• Communities are not limited by formal structures: they create connections among people across organizational and geographic boundaries." (Wenger, 2006)

We are all familiar with external CoPs that loosely include members with shared interests, such as knowledge management. CoPs like this give a wide airing to a variety of ideas but are not normally aimed at a specific company problem. There are other communities grouped around a shared task orientation, such as user self-help groups for Microsoft developers. Here the shared knowledge is much more specific to the individual's tackling of a task problem, and potentially common to all members. The organisational CoPs mentioned by Wenger are based in larger organisations. What differs in our approach is our aim to apply this to the NPD process within small local companies (typically the small to micro business sector, < 50 employees), so it is clear that for a CoP to work it must become a wider organisation, encompassing key stakeholders with a common interest (e.g. customer/supplier focus groups, third-party specialists, company employees, etc.) who are known and trusted by the company to be included in its NPD process. Such a CoP needs to encompass the SME and its external support in a symbiotic relationship. Hence it will both influence and be influenced by the organisation's structure and its communication practices, as the key elements for CoP success appear to be shared vision, open-mindedness (especially in management) and inter-organisational knowledge sharing. Our intention was to use the CoP to develop a discourse on Continuous Improvement (CI), stemming from Senge's (1990) notion of creative tension between the informal and formalised systems. To our CoP, CI extends the concept of Continuous Product Innovation, as defined by Bartezzaghi et al (in Chapman and Hyland, 2003), to all aspects of innovation and management related to NPD.

4. Gaining knowledge through systems thinking within the CoP

Knowledge needs structure. It is the creative tension between the constraining and enabling aspects of structure that provides the framework for change. Innovation stems from the application of knowledge to a dissonant state, when a level of dissatisfaction with the status quo has been reached. At this point the informal system pushes change onto the formalised systems. The process of evaluating the acceptability of designed systems as a continuous cyclical model stems from our work in innovation techniques (Bond & Otterson, 1998; Bond, 2006). The methodology chosen to implement the cycle was based on Checkland's Soft Systems Methodology (SSM):

"Systems-based methodologies for tackling real-world problems in which the known-to-be-desirable ends cannot be taken as given. Soft Systems Methodology is based on a phenomenological stance... a philosophical position characterised by a readiness to concede primacy to the mental processes of observers rather than to the external world; this contrasts to a positivist stance... characterised by a readiness to concede primacy to the given world as known through experimental evidence." (Checkland 1991)

Soft Systems Methodology was created to deal with problems where a plurality of viewpoints (i.e. stakeholder-centric world views, also called 'weltanschauungen') needed to be synthesised into a singular working definition. In our view, 'knowledge management' is definable as a Human Activity System (HAS). HASs are social systems and as such are purposive, which Checkland defines as "describable by an observer as serving a purpose", so the purpose of the system defines which elements are relevant for inclusion and which other systems it relates to. The tools of SSM are the development of root definitions of stakeholder weltanschauung, conceptual modelling and testing. The different backgrounds of the CoP lead to a variety of modelling techniques, but all are based on systems thinking. Root definitions are the key tool used for engendering discourse within the CoP, and they were used to further develop the definition of a CoP:

A CoP is a system owned by its actors (members) who value and seek collaboration (their world view, or the values they hold and bring to the system, which influence their behaviour and attitudes). The purpose of the system is to achieve a positive transformation in their collective capacity to solve problems (so they are customers of their own system, i.e. their own marketplace) in the context of its host community (environment), in a timely and effective manner, in the service of the owners of the problems they are motivated to solve.
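To show how such a root definition decomposes, the sketch below maps the CoP definition above onto the standard CATWOE elements of SSM (Customers, Actors, Transformation, Weltanschauung, Owner, Environment). The mapping and the data structure are our illustrative reading, not something given in the paper.

# Illustrative CATWOE breakdown of the CoP root definition (assumed mapping).
catwoe = {
    "Customers": "the members themselves, as owners of the problems to be solved",
    "Actors": "CoP members who value and seek collaboration",
    "Transformation": "collective problem-solving capacity: current -> improved",
    "Weltanschauung": "collaboration and shared learning are worthwhile",
    "Owner": "the CoP's actors (members)",
    "Environment": "the host community (the SME and its external support)",
}

for element, reading in catwoe.items():
    print(f"{element:15s} {reading}")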


5. Conclusions

"…Scientific and technological knowledge needs to be combined with other forms of expertise, such as knowledge of markets and customer needs, to create innovative new products and services. We have to manage innovation more effectively and be more ambitious in exploiting our ideas to the full. The ability to market products and services successfully both at home and abroad is essential to ensure properly targeted innovation. It is only when ideas are commercialised that jobs and wealth are created." (DTI 1998)

Such a push in the late 1990s and early 2000s justified the creation of a Technology Management group within the University, funded to support our work with local SME businesses in NPD. It specifically included a variety of subject specialists with a common interest in enterprise and innovation, and a common attempt to embed systems and systems modelling as our 'lingua franca'. This is not a claim of a unique grouping; many universities and institutions provide NPD services. What is different is our self-perceived view of our group as being a CoP. Wenger's writings on what constitutes a CoP have been influential in changing our weltanschauung, and this in its own way affects our group organisation, client interactions, and methods for generating and disseminating knowledge within the CoP. On the latter point of knowledge generation and dissemination, Wenger's paper on shaping knowledge strategy through CoPs has been most influential (Wenger, 2004).

Setting up a group that sees itself, and acts, as a CoP specifically aimed at small businesses' NPD, around the expertise of the company and University staff, is relatively new and evolving. Its success is not only measurable by the number of problems in which it intervenes; it also needs to be measured by the increase in its knowledge base and hence its capability to make useful interventions for its members. What we have found is that the common language of systems and systems-based tools does allow greater sharing of ideas and knowledge amongst the community, and a greater uptake and tightening of formal methods within client companies.

References Boisot M (1995) “Information Space : A framework for learning in organisations, institutions and culture.” International Thomson 1995 ISBN 0 415 11490 X Bond P, Otterson P (1998) “Creativity Enhancement Software – A Systemic Approach” International Journal of Technology Management 1998 Vol 15 Bond P (2004), “Communities of Practice and Complexity: Conversation and Culture”, Organisations & People journal, Vol 11 No 4 (November 2004), Association for Management Education and Development (AMED). Bond P (2006) “Emotioing, Foundational Knowledge and Enterprise Creation” Organisations & People Volume 13 Number 1 2006 Special edition on Knowledge Management Chapman R, Hyland P (2003) “Complexity and Learning Behaviours in Product Innovation” Technolovation 23. Pergammon Checkland P.B. (1991) “Systems Thinking, Systems Practice” (1991 edn.), Wiley& Son 1991 ISBN 0 471 27911 0 Dann Z, Barclay I (2005), “Complexity Theory and Knowledge Management Application”, Proceedings European Conference Knowledge Management 2005. DTI (1998) White Paper : “Building the Knowledge Driven Economy” Chpt 2 : Creating and Exploiting Knowledge HMSO 1998 Senge P.M. (1990) “The Leaders New Work: Building Learning Organisations” Sloan Management Review 1990 / Fall pp. 7-22 Wenger E (2004) “Knowledge management is a donut: shaping your knowledge strategy with communities of practice”. Ivey Business Journal, January 2004.


Organizational Knowledge Transfer: Turning Research Into Action Through A Learning History Robert Parent and Julie Béliveau Université de Sherbrooke, Québec, Canada [email protected] [email protected] Abstract: Organizational learning and knowledge management experts are searching for more appropriate research tools to tackle the difficult concepts of organizational learning and knowledge. This paper provides an overview of the learning history methodology, first proposed by Kleiner and Roth, in studying knowledge transfer activities. The learning history methodology, typically used within an action research environment, is designed to allow recognition of what has been learned in the past to guide stakeholders in the dialogical generation of a new future. It is a qualitative measurement tool of what has been learned, and remains sensitive to contextual factors, since it is based on the perceptions of the organization’s actors and the theoretical sensitivity of the researcher. This paper surveys the learning history literature to determine the roots, benefits and challenges of this research method. We will then demonstrate the advantages of using this approach to studying organizational knowledge transfer by presenting a case study where it is being used within participatory action research logic. Finally, we will provide lessons learned from our ongoing research and draw on implications for practice and future theorizing. Keywords: Knowledge transfer, learning history, organizational learning

1. Introduction Organizations of all types are struggling to learn to survive in a dynamic environment of increasing complexity. This requires that organizations employ mechanisms to reflect collectively on their experience, make sense of it and assess their investment in learning efforts (Roth and Kleiner 1998). When asked what topic will have the greatest impact in the future of organizational learning (OL) and knowledge management (KM), a panel of experts answered that it will be “research methods and measures of OL/KM” (Easterby-Smith and Lyles 2003). These experts are searching for more appropriate tools to address learning and knowledge concepts, both difficult to tackle. Some suggest a greater utilization of empirical research designs and qualitative research methods that are sensitive to contextual factors, such as narrative methods, to better understand processes related to organizational learning and knowledge management (Scholl et al. 2004; Easterby-Smith and Lyles 2003). Until now, few ideas have been presented on how to improve learning from experience and how to capture and disseminate knowledge in organizations. Many efforts at helping organizations learn have failed because the skills necessary to make effective OL/KM interventions, such as reflection and dialogue, are difficult to master and do not always provide organizations with effective solutions to promote learning and solve their business problems (Cross and Rieley 1999). Employee surveys, best-practice reports, traditional case studies and the use of consultants are all tools that fall short in helping organizations reflect collectively on past experience in a way that helps them to prepare for future actions (Farr 2000). Meanwhile, the use of stories for helping organizations learn and transfer tacit knowledge is gaining widespread favour among both practitioners and academics (Cortese 2005; Sole and Wilson 2002; Royrvik and Bygdas 2002). Narration is increasingly seen as the privileged form for constructing and expressing one’s own personal stories, and organizations are viewed as narrative artefacts (Cortese 2005; Klein 2005). The learning history methodology, typically used within an action research environment and designed to allow recognition of what has been learned in the past to guide stakeholders in the dialogical generation of a new future (Bradbury and Mainemelis 2001), seems to address the needs of the OL/KM experts cited earlier. However, few authors have demonstrated the potential of this qualitative research methodology in studying knowledge management, and more particularly, knowledge transfer activities. The purpose of this paper is to suggest the emergence of a new action methodology, the learning history, proposed by Roth and Kleiner (1995a), to study knowledge transfer initiatives. It is intended for both scholars and practitioners who want to explore

new ways to study knowledge and learning concepts. First, we present an overview of the learning history literature to determine the roots, benefits and challenges of this research tool. We then demonstrate the advantages of using learning histories to study knowledge transfer by presenting a case study where it is being used within participatory action research logic. Finally, we provide lessons learned from our ongoing case studies and draw out implications for practice and future theorizing.

2. Overview of the learning history methodology First designed to help pilot projects transfer their learning to other parts of an organization, the learning history is a qualitative research methodology that considers human perceptions, actions, opinions and evaluations (Cortese 2005). It was created in 1994 at MIT’s Centre for Organizational Learning in response to the needs of organizations to engage in collective reflection. Some see this narrative method as a qualitative measure of knowledge (Greco 1999) or as a knowledge management tool, especially effective for managing personal and context-specific tacit knowledge (Milam 2005). The learning history also qualifies as inductive research, since researchers are not trying to prove or disprove starting hypotheses. The naturalistic/constructivist perspective is used to capture and construct stories by collecting data from a wide group of people (Milam 2005). Inspired by Van Maanen’s (1988) ethnography tool, called the jointly told tale, the learning history document is a 20- to 100-page narrative of an organization’s recent critical episodes, presented in an engaging two-column format (Bradbury and Mainemelis 2001; Kleiner and Roth 1997a). The right-hand column presents an emotionally rich story of relevant events through the interwoven quotations of people who took part in them, including champions and skeptics, people who were affected by them, or people who observed them up close. The left-hand column contains the learning historians’ analysis, which identifies recurrent themes in the narrative, asks questions about its assumptions and raises “undiscussable” issues. The content of the left side of the document is based on recognized research in the areas of systems thinking, organizational effectiveness and organizational behaviours (Cross and Rieley 1999). Once written, the learning history document is disseminated through group discussions with people who were involved in the change effort and others who might learn from it. Thus, a learning history is as much a process as it is a product (Roth and Kleiner 1995a). It brings tacit knowledge to the surface, codifies it and turns it into an actionable knowledge base (Kleiner and Roth 1996). More generally, the learning history “is inspired by belief that legitimate or valid knowledge results from an emancipatory process, one that emerges as people strive toward conscious and reflexive emancipation, speaking, reasoning, and coordinating action together, unconstrained by coercion” (Bradbury and Mainemelis, 2001, 352).
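To make the two-column layout concrete, the sketch below shows one entirely hypothetical way a learning history segment could be represented and rendered; the Python names, fields and sample strings are our own illustration and are not part of the methodology as published by Roth and Kleiner.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LearningHistorySegment:
    """One critical episode of a learning history, split into the two columns
    described above (all names and fields here are illustrative only)."""
    episode: str                  # short label for the episode being retold
    narrative: List[str]          # right-hand column: participants' interwoven quotations
    commentary: List[str] = field(default_factory=list)  # left-hand column: learning historians' analysis


def render_two_column(segment: LearningHistorySegment, width: int = 40) -> str:
    """Place the historians' commentary (left) beside the narrative (right),
    mirroring the jointly-told-tale page layout."""
    rows = []
    for i in range(max(len(segment.commentary), len(segment.narrative))):
        left = segment.commentary[i] if i < len(segment.commentary) else ""
        right = segment.narrative[i] if i < len(segment.narrative) else ""
        rows.append(f"{left:<{width}} | {right}")
    return f"Episode: {segment.episode}\n" + "\n".join(rows)


if __name__ == "__main__":
    segment = LearningHistorySegment(
        episode="Pilot project review (illustrative)",
        narrative=['"We delivered on time, but nobody asked us how."',
                   '"The sceptics were never invited to the debrief."'],
        commentary=["Recurring theme: results valued over reflection.",
                    'Raises an "undiscussable": who gets to tell the story?'],
    )
    print(render_two_column(segment))
```

In the published examples the left-hand commentary is grounded in recognised research on systems thinking and organizational behaviour; a structure like the one above merely makes the pairing of quotation and analysis explicit before the document is turned over to group discussion.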

3. On what is it based? The roots of the learning history are manifold. In their first paper on the methodology, Roth and Kleiner (1995a) state that the learning history draws upon theory and techniques from ethnography (to understand the realities of organization members from their points of view), journalism (to present the story in an accessible and compelling way), action research (to guide new actions through reflection and assessment of learning efforts), oral histories (to describe history with narratives) and theatre (to disseminate narratives that are often emotionally charged). Taking a systems view of organizations, the learning history methodology starts with a premise that all individuals are actively trying to do their best and that feedback to compare what was accomplished to what was expected is necessary to sustain any improvement process (Roth 2000). However, the learning history process differs from most evaluation approaches, where the only feedback people get is from an expert’s assessment (Roth and Senge 1996). The learning history can also be viewed as an intervention methodology, positioned in the field of action research at the organizational level of analysis (Coghlan 2002), and more precisely along the lines of participatory action research, since the research is co-designed and co-developed (Bradbury and Mainemelis 2001). The learning history methodology builds upon organizational culture research by using insider/outsider teams who take on the role of learning historians (Roth 2000). Insider/outsider research is particularly useful when the research goal is to generate rich

appreciation of immediate or unfolding situations grounded in organizational participants’ experiences, or to give details about beliefs and assumptions held by these participants (Louis and Bartunek 1992). In contrast to the traditional research approach where researchers collect and analyze the data without the participation of people in the organization, insider/outsider research encourages ongoing dialogue and heterogeneous interpretations, which can result in more robust theorizing (Louis and Bartunek 1992). While the outsider searches for knowledge that can be generalized to many situations, the insider wants to develop knowledge of practical use. In that sense, insiders and outsiders complement each other and foster a better understanding of the ways that organizational members make sense of their world (Louis and Bartunek 1992). Furthermore, the insider/outsider team avoids the separation of experts’ evaluation from the organization’s own learning efforts (Roth and Kleiner 1995a).

4. What are the benefits of its use? For researchers, the learning history methodology helps them make their work available to the larger community of scholars and practitioners (Bradbury and Mainemelis 2001) by using a text to focus conversation and experiential engagement. As a result, the learning history contributes to the body of generalizable knowledge about what works and what doesn’t in management (Kleiner and Roth 1997a). Learning histories also generate a lot of information on the organization’s way of learning, which allows learning historians to work with this material as an ongoing resource, spinning off other documents, training programs and learning tools from the same material (Roth and Kleiner 1995b) and facilitating future research (Jacques 1997). For participants, the learning history is a collective and inclusive process (Farr 2000) which produces positive social change (Bradbury and Lichtenstein 2000). First, it builds trust when people see their anonymous comments in the learning history document (Kleiner and Roth 1997a; Farr 2000). It also shows them that their views count (Farr 2000). For example, in one organization, the learning history forced senior managers to recognize the teams’ stress level and recommendations concerning the staffing process for future project organization (Cross and Rieley 1999). Moreover, the group discussions favor collective reflection and help people openly express their fears, concerns and assumptions, which builds trust and a sense of community because people feel they are not alone in their efforts to improve the organization (Kleiner and Roth 1997a). Another positive element for participants can be the narrative interviews, where they recall the learning experience and often find that they have “learnt again” through the process (Cortese 2005). The learning history is also a new qualitative way of measuring organizational improvement efforts without killing their learning value (Kleiner and Roth 1997b), since it allows people to tell their story without fear of being evaluated (Roth and Kleiner 1995a). The process is regarded as safe by participants (Farr 2000). By transforming tacit knowledge into explicit knowledge, the learning history text acts as a transitional object through which opinions are made discussible in a concrete way because they refer to observable data (Bradbury and Mainemelis 2001). According to Argyris (1990), these conditions facilitate reasoning, which in turn favors learning. For example, Kleinsmann and Valkenburg (2005) used the learning history methodology for their case study on learning through collaborative new product development and found that it is an appropriate method to identify learning opportunities as well as a structured and transparent way of analyzing case study data.

5. What challenges does it bring? Bringing this type of tool into organizations can be a revolutionary enterprise. The learning history dissolves hierarchical privileges and favors conversations that create meaning and common objectives to guide future organizational actions (Roth 2000). In this context, researchers do not always get the necessary support from the organization. Even if most organizations agree on the importance of organizational learning, few of them are willing to invest the time, courage and honesty it requires (Roth and Kleiner 1998). For example, some executives are reluctant to undertake a learning history because of its cost, both in employee time and consulting/researching fees (Parnell et al. 2005). Also, building insider/outsider collaboration may require some time, since insiders often lack the necessary research skills and require training (Louis and Bartunek 1992).

Furthermore, in a business culture where action is glorified, managers often find it difficult to take the time to reflect (Roth and Kleiner 1998) under the pressures of proving results, serving a political agenda, identifying problems and finding solutions (Milam 2005). In this context, to get the most out of the learning history process, the organizational climate has to welcome contradictions, uncertainty and conflict as learning opportunities (Milam 2005). If the organizational context does not favor a transformational learning approach, the learning history can set off flames that burn up the organization's good will and resources (Roth and Kleiner 1995a). Participants' responses to learning history documents are not always positive. Managers and consultants who promote learning efforts are often disturbed by what the learning history actually uncovers (Kleiner and Roth 1996). Moreover, dissatisfaction is more visible when people learn and become aware of the gaps between their aspirations and their corporate reality (Kleiner and Roth 1997b). However, that is exactly where learning histories have their value: in their capacity to bring out multiple-perspective stories that make visible to an organization what is collectively hidden (Roth 2000), such as the psychological and emotional problems faced during the transformation effort (Milam 2005). In that sense, learning histories are like mirrors to organizations. They raise issues that people want to talk about but have been afraid to discuss openly (Kleiner and Roth 1997a). The learning history text also brings forth contradictions between suppressed and better-known voices, as well as between the way things are supposed to be done and actual practices (Bradbury and Mainemelis 2001). To deal with these challenges, learning historians have to find the right way to bring out the issues of the story without blaming anyone (Kleiner and Roth 1996). Indeed, the process of collectively reflecting and assessing the learning history sometimes goes against people's expectations that senior management should tell them what to do (Roth 2000). Researchers have to continuously negotiate practitioners' involvement in the learning history (Bradbury and Mainemelis 2001). Conflicts about the meaning and causes of organizational events can also arise within the insider/outsider research team (Louis and Bartunek 1992). The team must then be able to discuss these conflicts openly and not rely on compromises, which would result in poor research or even the dissolution of the team. It is also crucial to ensure that no particular perspective is overrepresented in the insider/outsider team. Another challenge of the learning history methodology is related to its two-column format, which is quite unusual and must be formatted carefully to avoid reader confusion (Coghlan 2001). For researchers, who are not used to telling the story through the voices of other people, the format can be difficult to master. Some are not sure what to write in the left column (Kleiner and Roth 1996). Furthermore, there is no obvious place in this format for them to insert their observational field notes. Researchers are also faced with the challenge of creating an engaging text for many people whose learning styles can be different (Bradbury and Mainemelis 2001). For their part, managers would like a more prescriptive document, one that would include more synthesis, analysis and recommendations (Kleiner and Roth 1996). 
All in all, the two-column format needs further testing, because experience with it is limited. In the first section of this paper, we presented an overview of the learning history methodology. We started with the roots of this new approach, followed by its benefits and the challenges related to its use. The next section of this paper will demonstrate the relevance of using learning histories to study knowledge transfer activities. We will do so by presenting a case study of a project developed by the Knowledge Transfer Research Laboratory of the University of Sherbrooke, where the learning history was used within participatory action research logic.

6. Case study: Using a learning history methodology The Knowledge Transfer project team came together as a consortium in 2003 and included researchers and administrators in workplace health and safety from a rural Anglophone province, administrators in workplace health and safety research from a combined urban and rural Francophone province, and a team of business school researchers in workplace design and knowledge transfer from the same Francophone province. Two of the partners in this project had

worked on knowledge transfer initiatives together before. The backgrounds of the researchers were multi-disciplinary, including sociology, psychology, systems thinking, political science, ergonomics, health and safety. The team's aim was to tell the story of the project in a case study that was true to the experience and perspectives of all the participants. They also intended to use the learning history to track the project milestones for reporting purposes. Finally, and perhaps most importantly, they wanted to stimulate and inform conversation on what actually happened, why it happened and how the team could learn from what had happened. In other words, they hoped to do as Houshower (1999, iv) suggests and "go far beyond a post audit review of a project, digging more deeply into the motivation and passion of those involved in the endeavour." The team began by framing the project within participatory action research logic. William Foote Whyte (1991), in his review of the field of action research, differentiates action research from participatory action research (Whyte, Greenwood, and Lazes 1991). In participatory action research, people in the organization or community actively participate with the professional researcher throughout the research process, from the initial design to the final presentation of results and discussion of their action implications. They thus engage actively in the quest for information and ideas to guide their future actions and participate directly in the research processes, which in turn are applied in ways that benefit all participants directly. Participatory action research is usually described as cyclical, with action and critical reflection taking place in turn. The reflection is used to review the previous action and plan the next one. Participatory action research is very useful when used across different organizations, with participants who come from different situations and are involved in different activities.

7. Bridging theory and practice In all, six senior members of the project participated in the first phase of the learning history. These participants represented a multitude of different backgrounds in terms of language, culture, geographic location, education level, responsibilities, needs and objectives. These differences proved to be at times complementary and at other times conflicting. For example, some members of the group were focused on knowledge transfer, others on health and safety, others on financing to ensure survival of their group, etc. From a life cycle perspective, one of the partners is a mature, well-established and recognized workplace health and safety research organization, while the other occupies a more junior position and is struggling to be recognized as a research representative in its health and safety community. To capture the differences in these backgrounds, we needed to develop a good understanding of the differences between theory and practice. Traditionally, theory is an attempt to answer the question of why a specific phenomenon occurs (Sutton and Staw 1995). Why do people get involved in one project and not another, why do people become influenced, why does conflict happen and get resolved? But a theory that tells us why a phenomenon occurs does not really tell us how it occurs or, even more importantly, how we can create, develop or deter that phenomenon. According to Friedlander (2001), to turn theory into practice, we must ask how. To make practice into theory, we must ask why. The practitioner tends to ask how; the researcher tends to ask why. But it is the integration of the how and the why which will result in a holistic, systemically enriched and useful practice-theory. For example, to understand how we can reduce conflict adds to our knowledge of why conflict gets reduced; and to understand why conflict is reduced contributes to our knowledge of how we can reduce it. For Friedlander (2001), the methodologies of traditional research and participatory research give rise to very different theories: essentially, researcher-dominated versus participant-dominated. In researcher-dominated methodologies, the researcher determines the research question or hypothesis, the way data will be collected, the analytic method and how the conclusions will be shaped. In participatory research, the population to be served or helped determines these characteristics. In the former, the researchers and their audience achieve the primary learning; in the latter, the participants and their constituencies achieve the primary learning. The participatory action research logic allowed us as a group to begin to address the issues of how and why certain conflicts happened during the project. The focus of the consortium was solely on the project-related activities of the participants, which are distinct from regular operations because they involve "doing something which has not been done before and which is, therefore unique" (Project Management
Institute 1996, 5). By focusing on the unique nature of the project, we could separate the participants' regular exploitation-type work from the exploration-type work associated with a new project (March 1991). The learning history methodology then provided the vehicle for the team to capture in the participants' own words what took place, why it took place and how they perceived what took place. The learning historian's role was assumed by a research professional with the KT Laboratory. Using a learning history protocol covering the basic topics to be addressed, the learning historian conducted the retrospective interviews. The actual interview process for the learning history began early in the second year of the five-year project. From the outset, some of the project's participants only partially bought into the learning history. For example, everyone saw it as an appropriate vehicle for tracking the "hard" facts and project events for reporting purposes. But the "softer" opportunities of stimulating and informing conversations on what happened, why it happened and how future action could be improved were less obvious to some of the team members. Nonetheless, the team decided to continue with the learning history.

8. Data analysis The method of analysis used for this learning history was to examine, categorize and tabulate the data gathered in order to identify project milestones, including learning milestones. In general, such an analysis consists of developing a descriptive structure of the project on the basis of the theoretical propositions guiding the study (Yin 1997). The objective is attained when we have an adequate narration of the facts and circumstances surrounding the project. As the project got underway and interviews took place, the learning historian began to suspect significant differences in perspectives from the three partner organizations. As the initial learning history document began circulating amongst participants, it became clear that project objectives and the needs of the three partner organizations varied considerably. These differences surfaced in two separate ways. The first was when the director of the KT Laboratory used the learning history to help develop a case report on the progress of the consortium and submitted it to the six partners, asking them to look at it and become co-authors. At the request of the funding agency, the case contained activities that had gone well (success stories), others that had not gone as well (horror stories) and still others that were more neutral in their impact (non-events). The reaction by the co-authors from one of the partner organizations was rapid and decisive and clearly reflected different opinions about what had taken place thus far, including different ontological predispositions to theory development, network development, knowledge transfer and the development of capacity for action. The conflicting interests were significant enough to delay the decision to move ahead with the proposed case until the authors could meet in person to discuss the project's progress. The second indication that the project was experiencing difficulties came at the third annual project meeting where the differences between the groups began to generate tension. One group felt that they were being blamed for project misfires, and accused the other two groups of not pulling their weight on the project. It was obvious to all those involved that more time had to be devoted to addressing the differences raised in the learning history. At that meeting, the realization confronting the entire group was the difficulty of achieving "hard" results (transferring high quality research results and programs from one Canadian province to another) through an emphasis on concepts that many people termed "soft" (good communication, openness, honesty and trust, networking, relationship building). After two days of at times complementary and at times conflicting conversations, the teams concluded the meeting with a more aligned understanding of what everyone was trying to accomplish. They also agreed to go ahead with an improved version of the case. Had the group not had the learning history to bring these tensions to the fore, it is not unrealistic to assume that they would have remained dormant until much later into the project, when in all likelihood it would have been too late to deal with them. The learning history was also very useful in developing the mid-term project report required by the project sponsors.

9. Lessons learned Some of the most significant learnings to come out of this experience include:
- The need to obtain clear and early buy-in by all participants to the learning history methodology;
- The usefulness of the "hard" aspects of a learning history for project reporting purposes;
- The contributions of the "softer" opportunities of stimulating and informing conversations on what happened, why it happened and how future action can be improved;
- The ability of the learning history to raise significant issues that needed to be addressed for the project to deliver the expected results.
For this project, the learning history helped focus attention on differences within the groups, which then had to be addressed using participatory action techniques. Since the learning history was part of participatory action research logic, the group had a good combination of tools and techniques to help them resolve issues and adjust the project focus. The participatory action research approach that helped the project team get realigned included:
- A project sponsored by an independent group or community;
- Directed toward the discovery of information about an issue of community concern;
- Aided by a facilitator;
- Resulting in empowerment of the community of people involved.
This project is now in the second half of the five-year commitment, and the combination of learning history and participatory action research has contributed significantly to improving how the project team functions. Many obstacles still remain to be overcome, but participants now feel they have the appropriate knowledge, trust level and tools to continue to develop and improve the way they work together. Some of those obstacles include the time commitment required by both the learning historian and the project participants. They found that a learning history requires considerable time and effort to develop and keep up to date. They also found that the closer to real time the history is developed and shared, the more opportunities it provides for the group to change and improve performance before the end of a project. While considerable, the time investment in a learning history is far outweighed by the value of the knowledge it generates and the synergy it provides. Because of the perceived value of the learning history and participatory action research methodology in this first knowledge transfer project, the team decided to repeat its use in another project with a different set of partners. The general objectives are similar to those of the first project, but in addition this project involves the transfer and adaptation of a highly successful management philosophy developed in the US health care industry, called "Putting Patients First", to a healthcare setting in Quebec, the Centre de Réadaptation de l'Estrie (CRE). This learning history is an open-ended story about one organization's (the CRE's) journey toward proactive humanization of patient care and management practices. It is also intended to provide additional learnings about project learning and improvement for readers in the organization and for others wishing to undertake a similar journey. This is designed as a pilot project sponsored by the Government of Quebec and the CSN, the labor union representing CRE employees. Although this project is in its early stages, the researchers have already begun to apply some of the learnings from the first learning history project to this project. 
For example, they have been careful to make certain that everyone involved buys into the methodology; that a list of "noticeable results" or "hard" facts is included in the interview protocol in order to link interviewees' interpretations to observable data; and that the interviewing and dissemination phases are kept close together in time to allow the learning history to have the best impact possible. They are also more prepared for difficult issues that may surface during the learning history process. The experience with the first project has strengthened our commitment to conversations as a valuable resource to help resolve conflicts among project members and build a dialogical future.

10. Conclusion This paper has provided an extensive overview of the current learning history methodology. We presented the roots of this new approach, its benefits and the challenges related to its use. This overview will help researchers and practitioners who are new to the learning history methodology quickly get acquainted with its origins, goals, advantages, limits and appropriate contexts while learning what conditions are necessary to ensure its success and validity. This paper has also contributed to empirical research by presenting a case study where the learning history methodology is being used to study knowledge transfer activities within participatory action research logic. To date, our empirical results convey many lessons. The first lesson learned is the need to obtain clear and early buy-in by all participants to the learning history methodology (through information sessions or other means). Second, "hard" aspects of a learning history can be very useful for project reporting purposes while the "softer" aspects serve as opportunities to capture participants' perspectives and stimulate conversations on what happened, why it happened and how future action can be improved. We also learned that the closer to real time the history is developed and shared, the more opportunities it provides for the group to change and improve performance before the end of a project. Even though a learning history requires considerable time and effort to develop and keep up to date, we believe that the time and energy invested are far outweighed by the value of the knowledge generated. Although the learning history provides a fresh and effective way to study learning and knowledge concepts, it is still at an experimental stage. The potential of this new methodology in studying knowledge transfer activities has not been fully explored. The limitations are primarily those associated with the amount of work involved in developing a learning history. More empirical research is necessary to demonstrate its effectiveness in studying collective learning processes and knowledge transfer initiatives.

11. Acknowledgements This research was made possible in part by funding from the Canadian Institutes of Health Research (CIHR) and the Centre d'étude en organisation du travail at the University of Sherbrooke.

References Argyris, Chris (1990). Overcoming organizational defenses. Reading, MA: Addison Wesley. Bradbury, Hilary and Benyamin M. B. Lichtenstein (2000). “Relationality in Organizational Research: Exploring The Space Between.” Organization Science, Volume 11, Number 5, 551-64. Bradbury, Hilary and Charalampos Mainemelis (2001). “Learning History and Organizational Praxis.” Journal of Management Inquiry, Volume 10, Number 4, 340-57. Coghlan, David (2001). “Car Launch: The Human Side of Managing Change/Oil Change: Perspectives on Corporate Transformation.” Irish Journal of Management, Volume 22, Number 1, 219-24. Coghlan, David (2002). “Interlevel Dynamics in Systemic Action Research.” Systemic Practice and Action Research, Volume 15, Number 4, 273-83. Cortese, Claudio G. (2005). “Learning Through Teaching.” Management Learning, Volume 36, Number 1, 87-117. Cross, Rob and James B. Rieley (1999). “Team Learning: Best Practices and Tools for an Elusive Concept.” National Productivity Review, Volume 18, Number 3, 9-18. Easterby-Smith, Mark and Marjorie A. Lyles (2003). “Introduction: Watersheds of Organizational Learning and Knowledge Management.” In The Blackwell Handbook of Organizational Learning and Knowledge Management, eds. Mark Easterby-Smith and Marjorie A. Lyles, 1-15. Malden, MA: Blackwell Publications. Farr, Karen (2000). “Organizational Learning and Knowledge Managers.” Work Study, Volume 49, Number 1, 14-18. Friedlander, Frank (2001). “Participatory Action Research as a Means of Integrating Theory and Practice.” Proceedings Fielding Graduate University Action Research Symposium, Alexandria, VA, July 23-24, 2001.

Greco, JoAnn (1999). “Knowledge Is Power.” The Journal of Business Strategy, Volume 20, Number 2, 18-22. Houshower, Hans (1999). A Voyage Beyond the Horizon and Back: The Heartland Refinery’s Continuous Improvement Story. Cambridge, MA: Society for Organizational Learning, http://www.solonline.org/repository/download/9907_GlobalOilLearningHist.PDF?item_id=3 59662. Jacques, March L. (1997). “Learning Histories: The S in PDSA of Learning.” The TQM Magazine, Volume 9, Number 1, 6-9. Klein, Louis (2005). “Systemic Inquiry - Exploring Organisations.” Kybernetes, Volume 34, Number 3-4, 439-47. Kleiner, Art and George L. Roth (1996). “Field Manual for a Learning Historian”. Version 4.0. Cambridge, MA: MIT Center for Organizational Learning. Kleiner, Art and George L. Roth (1997a). “How to Make Experience Your Company's Best Teacher.” Harvard Business Review, Volume 75, Number 5, 172-77. Kleiner, Art and George L. Roth (1997b). “When Measurement Kills Learning.” The Journal for Quality and Participation, Volume 20, Number 5, 6-15. Kleinsmann, Maaike and Rianne Valkenburg (2005). “Learning from Collaborative New Product Development Projects.” Journal of Workplace Learning, Volume 17, Number 3, 146-56. Louis, M. R. and J. M. Bartunek (1992). “Insider/Outsider Research Teams: Collaboration Across Diverse Perspectives.” Journal of Management Inquiry, Volume 1, Number 2, 101-10. March, James G. (1991). “Exploration and Exploitation in Organizational Learning.” Organization Science, Volume 2, Number 1, 71-87. Milam, John (2005). “Organizational Learning through Knowledge Workers and Infomediaries.” New Directions for Higher Education, Number 131 (Fall), 61-73. Parnell, John A., C.W. Von Bergen and Barlow Soper (2005). “Profiting From Past Triumphs and Failures: Harnessing History for Future Success.” S.A.M. Advanced Management Journal, Volume 70, Number 2, 36-59. Project Management Institute (1996). A Guide to the Project Management Body of Knowledge. Upper Darby, PA: Project Management Institute (PMI), PMI Standards Committee. Roth, George L. (2000). “Constructing Conversations: Lessons for Learning from Experience.” Organization Development Journal, Volume 18, Number 4, 69-78. Roth, George L. and Art Kleiner (1995a). “Learning about Organizational Learning - Creating a Learning History.” Cambridge, MA: MIT Center for Organizational Learning, Sloan School of Management, https://dspace.mit.edu/retrieve/2285/SWP-3966-37617962.pdf. Roth, George L. and Art Kleiner (1995b). “Learning Histories: 'Assessing' the Learning Organization.” The Systems Thinker, Volume 6, Number 4, 1-7. Roth, George L. and Art Kleiner (1998). “Developing Organizational Memory through Learning Histories.” Organizational Dynamics, Volume 27, Number 2, 43-60. Roth, George L. and Peter M. Senge (1996). “From Theory to Practice: Research Territory, Processes and Structure at an Organizational Learning Centre.” Journal of Organizational Change Management, Volume 9, Number 1, 92-106. Royrvik, Emil A. and Arne L. Bygdas (2002). “Knowledge Hyperstories: The Use of ICT Enhanced Storytelling in Organizations.” The Third European Conference on Organizational Knowledge, Learning, and Capabilities, Athens, Greece, April 5-6, 2002. Scholl, Wolfgang, Christine Konig, Bertolt Meyer and Peter Heisig (2004). “The Future of Knowledge Management: an International Delphi Study.” Journal of Knowledge Management, Volume 8, Number 2, 19-35. Sutton, Robert I. and Barry M. Staw (1995). 
“What Theory Is Not.” Administrative Science Quarterly, Volume 40, Number 3, 371-84. Van Maanen, John (1988). Tales of the Field, on Writing Ethnography. Chicago, IL: The University of Chicago Press. Whyte, William Foote, Davydd J. Greenwood and Peter Lazes (1991). “Participatory Action Research: Through Practice to Science in Social Research.” In Participatory Action Research, ed. William Foote Whyte, 19-55. Newbury Park, CA: Sage Publications. Yin, Robert K. (1997). “The Abridged Version of Case Study Research.” In Handbook of Applied Social Research Methods, eds. Leonard Bickman and Debra J. Rog, 229-59. Thousand Oaks, CA: Sage Publications.

Systemic Methodologies and Knowledge Management: A Survey of Knowledge Management and Systems Thinking Journals
Alberto Paucar-Caceres and Rosane Pagano
Manchester Metropolitan University Business School, UK
[email protected]
[email protected]
Abstract: The arrival of Knowledge Management (KM) in the early 90s as a distinctive management field can be linked both to the development of Organisational Learning as it was seen in the US and Japan, and to the preoccupation of certain management authors with approaching issues about knowledge in organisations using the Systems Thinking (ST) approach. After more than ten years, KM practice on both sides of the Atlantic still seems to be influenced by the systemic approach as it is understood in the US. Over the last 30 years, the UK systems movement has been very active and has developed a strong set of well-established Systems-Based Methodologies (SBM): Soft Systems Methodology, Critical Systems and Total Systems Intervention, amongst others. Although these systems methodologies have established a solid Systemic Practice (SP) in the UK, they seem to have had only some influence on the KM field in general, and there is little evidence of their being used explicitly in KM practice. The paper discusses at a conceptual level the link between Systems Thinking and KM in the US, exploring the theoretical approaches that were embedded in the early KM literature. A survey of a sample of journals in both the Systems Practice and Knowledge Management areas is carried out to assess the current exchange of concepts and methods between KM and Systemic Practice as it is mainly understood in the UK. The survey focuses on papers published between 1995 and 2005 that seem to have used ST and systems methodologies, directly or indirectly, in KM practice; results indicate that a reasonable number of papers seem to have used SBM when tackling KM issues. Our objective is to raise Knowledge Management and Systemic Practice researchers' awareness of the possible cross-fertilisation between these two important management fields.
Keywords: Knowledge management, systems thinking, methodology, systems, survey

1. Introduction Knowledge management has become a crucial resource in almost every organisation in its search for improved efficiency and effectiveness. Systems thinking and the embodiment of systems ideas in the so-called systems-based methodologies have also been among the major developments in management over the last 30 years. The motivation of this paper is to find out whether a cross influence between these fields of management is taking place. We start by identifying the links between KM and ST at a conceptual level, outlining the systemic approach motivating, and present in, the early Knowledge Management writings of Nonaka and Takeuchi; the paper argues that the general tone of their main book is an indication of the importance of Systems Thinking in Nonaka's model of knowledge creation in organisations. In order to introduce the developments in Systemic Practice, the main systems-based methodologies are mapped using a framework that identifies four distinctive paradigms: (1) Optimisation Paradigm: Problem-Solving methods; (2) Learning Paradigm: Situation-Improving methodologies; (3) Critical Paradigm: Intervention and empowering methodologies; and (4) Pluralistic Paradigm: Multi-methodologies. A list of the main established methodologies representing these paradigms is compiled. It is argued that Knowledge Management, at least in its beginnings, can be linked to the Learning paradigm developed in Systems Thinking. To assess the input of Systems-Based Methodologies (SBM) developed over the last years and to ascertain their influence on Knowledge Management research and practice, we surveyed journals from both the KM and Systems Practice fields. The ISI (Institute of Scientific Information) and ABI/Inform databases were used to search for articles published in a set of well-established Knowledge Management and Information Systems journals (which in general include research on KM) and Systemic Practice journals (here we include OR and management science journals); the search was driven by a set of keywords intended to identify articles linking both fields, and covered the period between 1995 and 2005 inclusive. The structure of the argument followed in this paper is: (1) to outline the link between KM and Systems Thinking present in the early KM theoretical and conceptual developments;
(2) to map the development of the main Systems-Based Methodologies (SBM) into a framework of four stages or paradigms; (3) to survey articles in KM and ST journals to ascertain the cross influence between the two fields; and (4) to draw conclusions from both the concepts explored and the survey, and to outline ideas for future research.

2. Knowledge management and systems thinking It is widely acknowledged that over the last years Knowledge Management (KM) has made a major impact on the way that organisations are managed. Although not in its present form, the interest in knowledge and its management can be traced back to the 60s, when major companies established Research and Development (R&D) departments, mainly to explore and investigate new ideas and develop prototypes of possible new products. R&D departments were given the task of coordinating between the functional departments of Marketing, Production and Personnel. Research findings were indeed knowledge that the organisation owned and kept, but, partly because they were still restricted to the problems of finding ideas for new products, knowledge was confined to the boundaries of the R&D departments' briefs. It was in the early 90s that one of the most influential management thinkers of the twentieth century, Peter Drucker, predicted that there would be a fundamental change in the way organisations would be run in the future; it was his prediction of the rise of knowledge over production as the main shift of a post-capitalist society that gave birth to KM (Drucker 1993). Two years later, Nonaka and Takeuchi (1995) elaborated the theory of knowledge creation, encapsulated in the concepts of explicit and tacit knowledge and their roles in the process of creating knowledge. Ever since these two major events that marked the birth of KM, the number of books, conferences and journals devoted to KM has continued to proliferate, and it seems that, despite being seen by some as a possible 'fad' doomed to have a short life in the field of management (Jackson 2003, p. xiv), KM is growing in importance and has become an essential task of almost every organisation; the creation and leverage of knowledge, both within the organisation and externally to customers and shareholders, has become a constant preoccupation of managers (Rubenstein-Montano et al, 2001). Nonaka and Takeuchi were concerned with the way knowledge is created and, although not explicitly, they indicated that to better understand the process of knowledge creation the mechanistic model of the organisation as an entity processing information needed to be revised and a more holistic approach was necessary; so it can be said that, although not explicitly, the beginnings of the KM theoretical developments were linked to the concepts of Systems Thinking as it was understood and conceived in the US, in particular through the work developed by Peter Senge (1990). Nonaka and Takeuchi (1995) acknowledged both of Senge's major contributions, that is, his ideas on Organisational Learning and his search for a synthesising approach in the organisational sciences; these were important theoretical contributions that somehow cleared the field for Nonaka's theory of organisational knowledge creation to arrive in the early 90s. Also, Nonaka and Takeuchi took a clear stand in favour of a systemic view of organisations, a view very much embedded in Japanese culture. An important part of the book is devoted to contrasting western and eastern ways of thinking; this eastern approach is very much linked to a systems stand or systemic view, as opposed to a western reductionistic or systematic way of perceiving situations. 
Furthermore, there are grounds to claim that the difference has to do with cultural issues and that the Systemic-Systematic dichotomy can be equated to the philosophies underpinning western and eastern ways of thinking (Paucar-Caceres 1998). The tone and discussion of Nonaka and Takeuchi's book indicate that for them Systems Thinking was important and implicit in the process of organisational knowledge creation, which is advanced intertwined with the argument that the philosophy that has shaped modern Japan has led to a business culture that very much takes a holistic approach to situations; Japan seems better equipped to cope with systemic thinking, whereas the west is still entrapped in a mechanistic-functionalist way of thinking. In terms of the different systems thinking paradigms, it can be argued that this way of seeing the world is much closer to the Learning Paradigm of Systems Thinking. Although the systems movement in the UK has been very active in promoting the set of systemic methodologies through the UKSS conferences and established journals such as the Journal of Operational Research, Systemic Practice and Action Research, and Systems Research and Behavioural
Science, amongst others, it seems that there has not been an exchange of ideas, concepts and methods between the field of KM and ST as it is known in the UK.

3. The systems movement in the UK In this section, a map of the development of some of the main SBM associated with systems thinking in the UK is proposed to further understand the influence between Systems Thinking and KM. A framework based on the paradigms underpinning the methodologies identifies four paradigms in the general development of management sciences and systems thinking in the UK: (a) Optimisation paradigm: problem-solving methods (1940-1960); (b) Learning paradigm: situation-improving methodologies (1960-1980); (c) Critical paradigm: intervention-empowering emancipatory systems methodologies (1980-1990); and (d) Pluralistic and multi-methodological paradigm: use of multi-methodologies and pluralistic approaches (1990- ). Using time and the paradigms developed in management sciences and the systems movement over the last decades, a framework was constructed as shown in Fig. 1. It depicts the emergence and development of the main systemic methodologies in the UK, showing the major direct and indirect influences between them over the last six decades. The map depicts four main paradigms. The emergence of hard approaches is located in the late 50s, and it has been associated with the developments of operational research (OR) in the UK and the USA and with the developments of systems engineering/systems analysis in the USA. During the 60s and 70s, a number of soft systems thinking methodologies emerged in the UK; amongst them the more influential were Checkland's soft systems methodology (Checkland, 1981; Checkland and Scholes, 1990) and cognitive mapping developed by Eden et al (1983). In the late 1980s and 1990s, Critical Systems Thinking (CST) became prominent in the UK when 'total systems intervention', developed by Flood and Jackson, embraced the CST commitments in systems practice (Flood and Jackson 1991). Finally, the more recent debate in the OR and systems communities in the UK is around the use of methodologies in combination, acknowledging various paradigms. The term pluralistic/multiparadigmatic thinking has been coined to name the approaches taking this stance (Jackson 1997, 1999, 2003; Mingers 1997a, 1997b, 1999). These major methodological developments, together with the main theoretical influences of the four paradigms over time, are depicted in Fig. 1; a brief description of the paradigms follows in the next sections.

3.1 The optimisation paradigm (1940-1960): Problem-solving methods Checkland (1981) locates the emergence and development of this paradigm in the late 50s and 60s. It was mainly an extension into management of the positivist epistemology of the natural sciences. The belief that organisations can be seen as objective worlds certainly underpinned the early developments of classical OR/MS methods and techniques. The Optimisation Paradigm and the development of 'solving methods' are generally associated with classic Operational Research techniques and the so-called 'hard' approaches. Jackson (2003) places some of the systems-based methodologies of this paradigm in what he calls systems approaches for 'Improving Goal Seeking and Viability'. The methodologies to be surveyed in this paper are:
- Systems Dynamics;
- Organisational Cybernetics; and
- Complexity Theory.

3.2 The learning paradigm (1970-1980): Situation-improving systems methodologies The learning (Checkland, 1981) or interpretivist (Jackson, 1982; Mingers, 1980, 1984) paradigm is the one that underpins the methodologies in this group. Ackoff (1993) calls this the 'design approach', comprising methods that attempt to dissolve systems of problems or messes. He argues that these methodologies differ substantially from those of the 'research approach' in that they aim to tackle the context or environment where the mess takes place, trying to alleviate or dissolve the system of problems rather than solve it. Jackson (2003) groups the methodologies of this paradigm under systems approaches that 'Explore Purposes'; here he includes 'Strategic Assumption Surfacing and Testing' developed by Mitroff (1981) and 'Interactive Planning' proposed by Ackoff (1981, 1991). The methodologies to consider in this survey are:
- Soft Systems Methodology (Checkland's);
- Interactive Planning (Ackoff's);
- Strategic Assumption Surfacing and Testing, SAST (Mason and Mitroff's);
- Systems Intervention Strategy (Mayon-White);
- Social System Design (Churchman's);
- Cognitive Mapping, SODA (Eden's); and
- Viable Systems Diagnosis (Beer's).

3.3 The critical paradigm (1980-1990): Intervention-empowering systems methodologies During the 1980s, a new set of methodologies based on Critical Systems Thinking (CST) (Jackson 1992; Flood and Jackson 1991) appeared in the UK systems movement. It is a relatively new development in the systems movement; essentially its philosophy is based on the belief that social systems are oppressive and unequal, and therefore systems thinking should concentrate on the issue of inequality among the participants. It can be argued that the main feature of these approaches is that they try to empower the actors in the intervention. The critical systems thinking paradigm provides the philosophical underpinnings for the methodologies in this group. CST aims to provide a framework for those methodologies working in coercive contexts, in which the social and organisational worlds are oppressive and unequal. There are two main approaches to be considered in this survey:
- Critical Systems Heuristics (Ulrich); and
- Total Systems Intervention (as developed by Flood and Jackson).

3.4 The Pluralistic Paradigm (1990- ): Multi-paradigmatic and pluralistic thinking In the early 90s an interesting debate emerged in the OR and systems communities in the UK around issues concerning the use of more than one methodology (combining them or using parts of them); systems academics and systems practitioners have been debating the possibilities of using methodologies from different paradigms while acknowledging and recognising their strengths and weaknesses. Two of the more developed current approaches to multi-methodology are:
- Critical Systems and critical pluralism/complementarism, as initiated by Flood and Jackson and later developed into 'coherent pluralism' by Jackson (1999); and
- Multi-paradigm multi-methodology/Critical pluralism, developed by Mingers (1997a, 1997b).

4. Survey of articles citing systems-based methodologies in KM journals We started by assembling a sample of journals that publish academic and practitioners' research in both fields, Knowledge Management and Systems Practice. The selection is a sample of journals in the fields of Knowledge Management and Systems Practice and adjacent fields such as OR, management science and IT. The list includes journals published in the UK, the USA and Europe. All the journals surveyed are published in English. At first we planned to survey the journals directly using the set of keywords, but this proved difficult since not all the journals' websites had search facilities, so it was decided to use the ABI/Inform bibliographic database search facilities. Table 1 lists six of the most important journals that publish theoretical and research articles in Systems Thinking and Systems Practice, and ten journals in the various areas associated with Knowledge Management. All the journals are well known and recognised as important in both the KM and the Systems fields.

Since the aim was to ascertain the cross influence between the field of KM and that of Systems Practice, we selected two sets of keywords, one identifying the field of Knowledge Management and the other the field of Systems Practice, to search the SP and KM journals respectively. The Knowledge Management keywords used to search articles published in Systems Practice journals were:
- Knowledge
- Knowledge management
- Organisational learning
- Learning
- Intellectual capital
The Systems Practice keywords used to search articles published in KM journals were:
- Systems Thinking
- Systems Dynamics
- Viable Systems
- Organisational Cybernetics
- Complexity Theory
- Soft Systems Thinking
- Critical Systems
- Interactive Planning
- Strategic Assumption Surfacing and Testing (SAST)
- Systems Intervention Strategy
- Social System Design
- Critical pluralism
The ISI Web of Knowledge Service for UK Education and the ABI/Inform bibliographic databases were used; titles and abstracts of Knowledge Management and Systems Practice articles published in the 16 journals between 1995 and 2005 were queried for the occurrence of our set of keywords. The ten-year period was considered adequate because KM as such started in the early 90s, and because ten years seems a reasonable span of time over which to assess the exchange between the systems movement (and its embodiment in a set of SBM developed in the UK) and the field of KM. The survey considered only papers that have been catalogued by the ISI and ABI databases as articles or academic papers; hence book reviews, editorials, letters, etc. were not included, because these documents are not generally cited by authors in the field. Although the search was conducted using both the ISI and ABI/Inform databases, it was mainly the output of ABI that was compiled, because ISI mainly covers Computer Science and IS journals, which are also included in ABI/Inform (the ABI database covers more general business- and management-oriented journals). Table 2 shows a sample of the main results of the search; it groups the KM papers by the systemic methodologies that have been used and gives details of each paper's title, authors and journal. A summary of the number of papers using Systems-Based Methodologies is presented in Table 3.

Notes to Table 3:
(1) These are the Knowledge Management and Systems Practice journals included in Table 1.
(2) Other journals: Total Quality Management and Business Excellence; Decision Science; The Learning Organization; International Journal of Operations & Production Management; Journal of Change Management; KM World; Organisational Science; Futures; Logistics Information Management; Information Technology and People; Management Decision; Journal of Workplace Learning; Employee Relations; and Information Systems Security.
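To illustrate the kind of querying and tabulation that lies behind Table 3, the minimal sketch below counts keyword occurrences in article titles and abstracts and splits the counts by journal group. It is our own illustration, not the authors' actual search procedure; the article record, journal names and keyword spellings are placeholders rather than data from the survey.

```python
from dataclasses import dataclass
from typing import Dict, List

# A few of the systems-practice keywords listed above, lower-cased for matching;
# the journal grouping mirrors the "KM and SP" versus "other journals" split of Table 3.
SP_KEYWORDS = ["systems thinking", "system dynamics", "soft systems",
               "viable system", "critical systems", "complexity theory"]

KM_AND_SP_JOURNALS = {"Journal of Knowledge Management",
                      "Systemic Practice and Action Research",
                      "Journal of the Operational Research Society"}


@dataclass
class Article:
    title: str
    abstract: str
    journal: str
    year: int


def tabulate(articles: List[Article]) -> Dict[str, Dict[str, int]]:
    """Count 1995-2005 articles whose title or abstract mentions each
    systems-practice keyword, split by journal group (cf. Table 3)."""
    counts = {kw: {"KM and SP journals": 0, "Other journals": 0} for kw in SP_KEYWORDS}
    for art in articles:
        if not 1995 <= art.year <= 2005:
            continue
        text = f"{art.title} {art.abstract}".lower()
        group = ("KM and SP journals" if art.journal in KM_AND_SP_JOURNALS
                 else "Other journals")
        for kw in SP_KEYWORDS:
            if kw in text:
                counts[kw][group] += 1
    return counts


if __name__ == "__main__":
    # Invented sample record, used only to show the shape of the output.
    sample = [Article("Soft systems methodology and tacit knowledge",
                      "An SSM-based intervention in a knowledge management programme.",
                      "Journal of Knowledge Management", 2003)]
    print(tabulate(sample)["soft systems"])  # {'KM and SP journals': 1, 'Other journals': 0}
```

In the actual survey the matching was done through the ISI and ABI/Inform search interfaces rather than in code, so the sketch only stands in for the logic of keyword occurrence and journal grouping.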

Table 1: Knowledge management and systems practice journals
SYSTEMS PRACTICE JOURNALS
- Journal of the Operational Research Society
- Interfaces
- Omega-International Journal of Management Science
- European Journal of Operational Research
- Systems Practice / Systemic Practice and Action Research
- Systems Research / Systems Research and Behavioural Science
KNOWLEDGE MANAGEMENT JOURNALS
- European Journal of Information Systems (EJIS)
- International Journal of Information Management
- Journal of Information Technology
- Journal of Management Information Systems (JMIS)
- Journal of Information Systems
- Journal of Strategic Information Systems (JSIS)
- Journal of Knowledge Management
- Knowledge Management Research and Practice
- Journal of Intellectual Capital
- MIS Quarterly (MISQ)

Table 2: Samples of articles published between 1995-2005


Table 3: Summary of the number of KM articles using SBM

Optimisation Paradigm: problem-solving methods
  Systems Dynamics: 15 articles (7 in KM and SP journals (1), 8 in other journals (2))
  Organisational Cybernetics
  Complexity Theory
Learning Paradigm: situation-improving methodologies
  Soft Systems Methodology (Checkland's): 7 articles (4 in KM and SP journals, 3 in other journals)
  Interactive Planning (Ackoff's): 0 articles
  Strategic Assumption Surfacing and Testing, SAST (Mason and Mitroff's): 0 articles
  Systems Intervention Strategy (Mayon-White's): 0 articles
  Social System Design (Churchman's): 0 articles
  Cognitive Mapping, SODA, JOURNEY (Eden's): 1 article
  Viable Systems Diagnosis (Beer's): 7 articles (5 in KM and SP journals, 2 in other journals)
The Critical Paradigm: intervention-empowering systems methodologies
  Critical Systems Heuristics (Ulrich) and Total Systems Intervention (Flood and Jackson)
The Pluralistic Paradigm: multi-paradigmatic and pluralistic thinking
  Critical Systems and critical pluralism/complementarism (Jackson and Flood): 9 articles (8 in KM and SP journals, 1 in another journal)
  Multi-paradigm multimethodology/critical pluralism (Mingers)

5. Discussion of results

Table 2 shows the results of the survey, grouping the articles by the systemic approach used; the papers were published in various journals between 1995 and 2005 inclusive. The set of systemic methodologies used has been grouped according to the framework of four paradigms presented earlier. A summary of the survey results is presented in Table 3, which lists the main Systems-Based Methodologies used in the area of Knowledge Management. The table groups the number of papers published in journals devoted to the two areas, KM and Systems Practice (i.e. the Journal of the Operational Research Society, the Journal of Knowledge Management, etc., as listed in Table 1), and also includes journals in other areas of management. The results show that a reasonable number of KM papers have made explicit use of Systems-Based Methodologies or Systemic Practice (SP). Most noticeable is the case of Systems Dynamics, with 15 papers, seven of which were published in KM and SP journals. A possible explanation for this high proportion may be that in its beginnings KM was influenced by, and somewhat linked to, Organisational Learning as championed by Peter Senge, who popularised the principles of Systems Dynamics. The results also show that Soft Systems Methodology, as proposed by Checkland, is an approach that has been used in various KM papers, most of them (four out of seven) published in Systems/KM/OR journals, confirming the popularity of SSM. The survey also shows that Viable Systems Diagnosis, that is, Stafford Beer's approach, has been used regularly: seven papers were published, five of them in KM and Systems journals. The other area of Systems that has clearly been used fairly extensively in KM is Critical Systems: nine papers were published, eight of them in KM and SP journals. The rest of the Systems-Based Methodologies clearly have not reached or influenced the KM community. Methodologies such as Interactive Planning, Strategic Assumption Surfacing and Testing and Social System Design have not been mentioned or used in KM articles; only Cognitive Mapping (Eden's SODA and JOURNEY) appeared in one of the KM papers. These findings are consistent with the fact that SSM, Systems Dynamics, Viable Systems Diagnosis and, to some extent, Cognitive Mapping are the most fully developed methodologies made available by the systems community; the others can be seen as useful general frameworks and theoretical background when practitioners approach managerial situations, including KM. On the other hand, Critical Systems (an approach rather than a fully fleshed-out methodology) appears to be used frequently, even though most of the papers were published in one systems journal (Systems Research and Behavioural Science). It is interesting to note that not all of the KM and Systems Practice journals have published KM papers with SBM applications; three journals (Interfaces, Omega and the European Journal of Operational Research) have not published papers that claimed to have used systems methodologies explicitly in KM. This may be because two of these journals (Omega and Interfaces) are US based, where the UK systems movement's endeavours seem to have had little influence; the US management science journals are still biased towards OR and classical 'hard' management science articles. In addition, five of the well-established KM/IT journals listed in Table 1 (European Journal of Information Systems, Journal of Information Technology, Journal of Information Systems, Journal of Strategic Information Systems and MIS Quarterly) appear not to have published papers in which SBM were applied to KM. The number of KM papers using SBM published in other journals (not categorised in this paper as KM or Systems journals) is also noticeable: The Learning Organization journal published three papers containing applications of SBM to KM.

6. Conclusions

The OR/Systems community has been actively promoting the development and use of systemic methodologies over the last few decades. The paper has attempted to assess to what extent the range of these Systems-Based Methodologies (SBM), developed mainly in the UK, has been used in the relatively new field of Knowledge Management. To understand these developments and to explore the links between Systems Practice and Knowledge Management, the paper has: (1) outlined the links between systems thinking and KM found in the earlier Knowledge Management literature; (2) mapped the SBM developed over recent years onto four systems paradigms containing a set of specific systemic methods/methodologies; and (3) surveyed papers published in KM and OR/Systems Practice journals between 1995 and 2005 inclusive.


Essentially, the survey shows that the methodologies linked to the Learning and Critical Systems paradigms have been used fairly extensively in KM papers; these include SSM, Viable Systems Diagnosis and the Critical Systems approach. The only method from the Optimisation Systems paradigm that appears to be popular when KM is tackled is Systems Dynamics, although a good proportion of these papers were published in journals outside the KM and OR/Systems areas. In general, judging from the number of papers found in the survey, there has been a reasonable exchange of ideas between the KM and Systems communities, especially in the UK. It is therefore a credit to the efforts of the UK systems community that it has been promoting these systems methodologies and making them available to other audiences; the KM community certainly seems receptive to the UK systems movement's endeavours and has made good use of the systemic methodologies developed.


The paper has only touched on the possible cross-influence between KM and ST at the level of paper titles and keywords, which, it can be argued, are only a vague indication of that cross-influence. Both fields have authors that represent their main theoretical strands: in the Systems area, authors such as Checkland, Jackson, Flood, Mitroff, Beer and Maturana, amongst others; and in Knowledge Management, authors that have certainly been very influential (Nonaka, Takeuchi and Davenport, amongst others). To assess the influence of their ideas, and in order to have a better picture of the cross-fertilisation happening between the fields, a survey of the articles citing their work would be useful for a better understanding of the exchange between these two areas of management.


Finally, the survey only counts the number of papers published in KM and Systems Practice journals; to further assess the exchange and mutual influence, a deeper analysis, if not of all the papers sampled here then at least of an exemplary paper representing each paradigm, would certainly shed further light on the real exchange of ideas between these two important fields of management.

References
ABI/INFORM and PROQUEST RESEARCH LIBRARY, http://proquest.umi.com (accessed 25 May 2006)
Ackoff, R. (1993), The Art and Science of Mess Management, in Mabey, C. and Mayon-White, B. (eds), Managing Change, Paul Chapman Publishing, London.
Ackoff, R. (1979), The future of OR is past, Journal of the Operational Research Society, 30, 2.
Ackoff, R. (1981), Creating the Corporate Future, Wiley, New York.
Drucker, P. (1993), Post-capitalist Society, Harper Business, New York.
Eden, C., Jones, S. and Sims, D. (1983), Messing About in Problems, Pergamon, Oxford.
Checkland, P. B. (1981), Systems Thinking, Systems Practice, Wiley, Chichester.
Checkland, P. B. and Scholes, J. (1990), Soft Systems Methodology in Action, Wiley, Chichester.
Flood, R. and Jackson, M. (1991), Creative Problem Solving: Total Systems Intervention, Wiley, Chichester.
Jackson, M. (1992), Systems Methodology for the Management Sciences, Plenum Press.
Jackson, M. C. (1997), Pluralism in Systems Thinking and Practice, in Mingers, J. and Gill, A. (eds), Multimethodology, Wiley, Chichester.
Jackson, M. C. (1999), Towards coherent pluralism in management science, Journal of the Operational Research Society, 50: 12-22.
Jackson, M. C. (2003), Systems Thinking: Creative Holism for Managers, Wiley, Chichester.
Institute of Scientific Information - Web of Science, http://tame.mimas.ac.uk (accessed 25 May 2006)
Mingers, J. (1980), Towards an appropriate social theory for applied systems thinking: critical theory and soft systems methodology, Journal of Applied Systems Analysis, 7.
Mingers, J. (1984), Subjectivism and soft systems methodology - a critique, Journal of Applied Systems Analysis, 11: 85.
Mingers, J. (1997a), Towards critical pluralism, in Mingers, J. and Gill, A. (eds), Multimethodology, Wiley, Chichester.
Mingers, J. (1997b), Multi-paradigm Multimethodology, in Mingers, J. and Gill, A. (eds), Multimethodology, Wiley, Chichester.
Mingers, J. (1999), A Comparative Characterisation of Management Sciences Methodologies, Systemist, 21(2): 81-92.
Mitroff, I. (1981), Challenging Strategic Planning Assumptions: Theory, Cases and Techniques, Wiley, New York.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, New York.
Paucar-Caceres, A. (2003), Measuring the effect of highly cited papers in OR/Systems journals: a survey of articles citing the work of Checkland and Jackson, Systems Research and Behavioural Science, 20: 65-79.
Paucar-Caceres, A. (1998), Systems Thinking: East and West, Systemist, Journal of the United Kingdom Systems Society, Vol. 20, No. 2, May 1998.
Rubenstein-Montano et al. (2001), SMARTVision: a Knowledge Management Methodology, Journal of Knowledge Management, Vol. 5, No. 4, pp. 300-310.
Senge, P. (1990), The Fifth Discipline: The Art and Practice of the Learning Organization, Random House, London.


Why Do Managers from Different Firms Exchange Information? A Case Study From a Knowledge-Intensive Industry

Mirva Peltoniemi
Tampere University of Technology, Finland
[email protected]

Abstract: This paper explores the motivation for information exchange between firms within a knowledge-intensive industry. The qualitative empirical data is gathered from the Finnish games industry. The industry is seen as a complex system that changes through an evolutionary process. There are three main explanations for such collective efforts. First, the firms want to help each other in order to create critical mass at the national scale. Second, selection operates more strongly at the group level between industries than within the industry. Third, information exchange makes their search functions more effective, allowing collective search.

Keywords: Information exchange, knowledge-intensive industry, group selection, collective search

1. Introduction This paper explores the motivation of information exchange between firms within a knowledgeintensive industry. The main objective is to find out why the managers engage in such activities and what it means in the light of the dynamics of the industry. This question has emerged during a case study of the Finnish games industry and its development mechanisms. Within the case study the representatives (CEOs, CFOs or equivalents) of eight firms were interviewed. Also other sources, such as newspaper and magazine articles were used in order to understand the context. The Finnish games industry is here defined to comprise firms that engage in the development and/or publishing of PC, console, mobile and/or online games. The Finnish games industry includes about 100 firms of which the first ones were founded in mid 1990s and the majority after the year 2000. They range in size from one to hundreds of employees and all operate in the global market. As a generalisation one can say that the number of firms with more than one hundred employees is less than ten and the number of firms with less than ten employees is about one hundred. Of these only a fraction concentrates solely on games. The problem of the motivation of inter-firm communication within such an industry is approached with evolutionary and complexity theories. The industry is seen as a complex system that changes through an evolutionary process. This is because with these theories it is possible to capture the dynamics that follow from decentralised decision-making and interconnectedness within such a population. The paper starts with a short overview of evolutionary and complexity thinking related to the topic of the paper. This is followed by a description of the information exchange and why it has an impact on the development of the industry. Subsequently, the motivation for the information exchange is analysed with three concepts, namely critical mass, group selection and collective search. Finally, some conclusions are given.

2. Evolution and complexity within the industry Evolution means that novel things come about by changing and recombining existing things (Murmann 2003). The basic proposition of evolutionary economics is that firms have certain ways of doing things that do not change over night and that these ways vary from firm to firm. Thus, continuity and distinctiveness are assumed (Dosi et al. 2000, pp. 11-12). Nelson and Winter (1982, p. 4) state that their evolutionary theory of economic change emphasises “the tendency of the most profitable firms to drive the less profitable ones out of business”. This means that the variety built of different kinds of firms undergoes market selection. It favours those firms that are better fitted with current circumstances, which translates into better profitability.


In the field of evolutionary economics an industry is often defined as a population of firms that faces the same selection mechanisms. This means that firms, that have to tackle the same kinds of challenges in order to survive and succeed, belong to the same industry. This also translates into similar success factors and sources of competitive advantage for firms within an industry. Firms may be different, but the market process measures them with the same metrics. Murmann (2003) claims that individuals do not evolve, but populations do. This means that change within an industry comes from the birth of new different kinds of organisations and death of old ones, and not from change within existing organisations. This is quite a harsh statement since although it is consistent with evolutionary biology, where animals and plants cannot change their genetic makeup, firms and other kinds of organisations can. Perhaps they are not capable of swift changes because of organisational inertia, resistance in the face of change and lack of resources, but people fundamentally decide what they do and all firms have some kind of a hierarchy based on which such decisions are made. The games industry in general is an interesting case of variation and selection since its existence is acutely dependent on the emergence of new variation. New games have to differ from the old ones either by technology or by content. In an ideal case the difference is remarkable in both. The same product cannot be sold again to the same consumers. This means that every firm is aiming at variety creation with each and every product. Selection operates on several levels in such an interconnected business. The developers face selection first in coming up with financing for producing a demo of a prospective game. In the case of past successes income financing is the obvious alternative. In other situations the financing has to come from the initial capital of the firm or it may be acquired from an outside investor. So-called sweat capital (work with no pay) is also a viable option for new and enthusiastic entrepreneurs. Once the demo is completed the developer has to pitch for a publishing deal and thus the publishers enforce the second round of selection. In the case of a hugely successful developer the selection process operates another way around and the developer chooses which publisher would best suit its needs in terms of retail channels, marketing capabilities, and - most importantly revenue sharing. Thus, the rise in the relative importance of a firm in the industry enhances its position in negotiations, which changes its function from merely generating variety to enforcing a significant selection pressure. This makes it easier for a firm to influence the development of the industry and its norms. Once the finished product reaches the consumer market both the developer and publisher face selection. The effect of that selection is determined by the contract that the developer and publisher have made. Thus, if the developer in any case receives the cost of production the selection works mainly on the publisher. In the opposite case the developer carries the risk and is also entitled to a greater share of profits in the case of good sales. From the viewpoint of prospective game concepts the selection operates on them three times. Firstly, any developer firm has several potential ideas from which to choose and only limited resources with which to work on them. 
Thus, only a fraction of the ideas will be developed even to the concept or demo level. Secondly, publishers receive large numbers of demos and will choose to finance only a portion of those. The third selection goes on within the consumer market. The majority of published games end up as financial disasters, making losses for the publisher. For this reason the publishers have a portfolio approach where a big hit can compensate for several failures. In the first and the second stage the decision-makers are trying to figure out what the eventual consumer-instigated selection would favour. However, this has proven to be difficult, and the overlap of these three selection mechanisms in terms of what they favour is not perfect. Evolutionary thinking emphasises variation within the population and the selection process which it undergoes. There is also an interest in how such variation is renewed as selection winnows it. However, there is basically no interest in how the members of the population are connected and how such connectedness affects the development of the system. This is why the evolutionary view is here supplemented with the complexity view.


There is no strong consensus over the relationship of evolutionary and complexity theories. Some see evolution as a part of complexity and others the other way around. The following is an example of the former view. "The process of evolution is an important integrative theme for the sciences of complexity, because it is the generative force behind most complex systems." (Ray 1999, p. 161) Here, however, evolutionary and complexity theories are seen as complementary ways to analyse an industry. According to Metcalfe and Foster (2004, p. ix), a complex system is a network structure that contains elements and connections. According to their interpretation the connections consist of knowledge and understanding, and for this reason knowledge is core to economic systems and a source of economic value. Secondly, they argue that selection mechanisms can be seen from a complexity perspective, in which case selection will not lead to an equilibrium or to regression to the mean. Selection mechanisms highlight the fact that the variety on which they operate is of prior importance in economic systems and it arises from forms of knowledge that are much less prevalent in the biological domain. (Metcalfe and Foster 2004, p. ix) Any knowledge-intensive industry can be regarded as a knowledge-based system. Competitive advantage is built on knowledge. Opportunities and threats arise based on knowledge that the particular firm or some other firm may have. Relevant knowledge can be classified into two groups. The first group, internal knowledge, concerns the methods to do the things that the firm in question does itself. The second group concerns knowledge on what the competitors are doing and how they do it (external knowledge). These two types of knowledge interact, leading to absorption whereby new decisions are made based on both internal and external knowledge. The industry in question, the Finnish games industry, is an interconnected system, where information exchange is an important factor creating the connections between the firms. This means that the firms do not find out about each other's actions merely through market processes, by winning or losing a bidding contest or seeing each other's products on the shop shelf. The firms consciously seek more information and also distribute it. Information exchange also functions as a feedback process.

3. Information exchange within the Finnish games industry

The firms within the Finnish games industry rarely see each other as competitors. Although the interviewees often talked about Finnish competitors, they found it hard to name any that would actually affect their own success potential or business decisions. This means that the firms are able to find a niche where they can protect themselves from fierce rivalry. But how do they manage to do that? The first explanation is that the founders of many firms have been eager gamers before entering the industry, which has given them an idea of what is missing from the market. However, that is not a sufficient explanation, since there is often a reason for such absence, be it technological or related to consumer potential. At least a part of the answer seems to lie in the nature of interaction between the game companies. A distinctive feature of the Finnish games industry is that there is a lot of communication and exchange of information among the firms. In Table 1 some information about the firms and their attitudes and objectives related to information exchange is presented. Below are some examples collected from the interviews illustrating the information exchange. "Our seller does informal cooperation as he meets others. He tells them that we have entered some market and it seems quite good and it is worthwhile to go there. That doesn't take anything away from us. It is based heavily on the personal relationships between people." (Alpha) "I guess this is typical for a young industry that personal relationships are very important. For example today I am going to go for a beer with a competitor. We are going to talk about what is happening in the market and whether something new is emerging." (Alpha) "Information exchange is clearly a case of win-win because you can always learn from others and it does not take anything away from you. And a large part of it is simply about having fun." (Delta)


”From our point of view the most important information is what our competitors are doing and we always know that before the press releases come out because word gets around.” (Alpha) “It is also a part of marketing. You should not spend five days a week inside a cubicle. You don’t see or hear anything [new or interesting] there.” (Beta) ”For example there is one case in which an [graphics] artist had sent a job application and included works that other people had done in his portfolio. It didn’t take more than three days that everyone within the industry knew about it. The guy committed a professional suicide. One can only be amazed at how stupid people can be.” (Alpha)

Table 1: The views of the firms concerning information exchange

Alpha (founded 2004; 35 employees; platform: mobile). Attitude towards exchanging information: "We want to help others." Main reasons for information exchange: critical mass to improve the recruiting situation; to gain knowledge about the market.

Beta (founded 2002; 27 employees; platform: mobile). Attitude: "Everything is easier when you have a network of contacts." Main reasons: to do marketing; to get inputs for our creative process.

Gamma (founded 2000; 24 employees; platform: mobile). Attitude: "Together we can find new profitable things." Main reasons: to find subcontractors; to understand the global games market.

Delta (founded 1999; 100 employees; platform: mobile). Attitude: "I can learn from you and you can learn from me." Main reasons: to find subcontractors; to find employees; to see what others are doing.

Epsilon (founded 2000; 170 employees; platforms: online, mobile, handheld, console). Attitude: "We all help each other." Main reasons: to find out about the development of the industry; to ponder what is going to happen next.

Zeta (founded 2002; 9 employees; platforms: PC, online). Attitude: "We can benefit each other and have fun together." Main reasons: to enhance the growth of the industry.

Eta (founded 1995; 25 employees; platforms: console, PC). Attitude: "We want to help others." Main reasons: to find out what is going on; it is in our interest to see other firms in Finland succeeding.

Theta (founded 1995; 13 employees; platforms: console, PC). Attitude: "We want to exchange our views with others." Main reasons: to discuss what the publishers want; to find out concrete leads on sales opportunities.

Information exchange seems to be heavily based on personal relationships. People within the industry know each other and enjoy discussing industry-related matters with each other. The underlying assumption is that it is not harmful for anyone to engage in such interactions. When asked why they participate in active communication most of the representatives of the firms stated that there is no reason or that they do it for altruistic reasons. Helping others is seen as a norm within the industry and its benefits to oneself are not considered. However, this cannot be the full explanation since after all it is a tough branch of business and things just cannot be that cosy. As the interviewer relentlessly kept on asking about the motivation for information exchange some other reasons were also mentioned, as is shown in Table 1.


4. Explanations for collective efforts 4.1 Critical mass There is quite a good consensus over the benefits of critical mass of firms within the Finnish games industry. The firms do not see each other as threats but as vital creators of critical mass at the national scale. Critical mass has several aspects here. Since skilled staff is scarce it is in every game firm’s best interest to have other potential employers for their staff. This will make the industry appealing for prospective employees and lowers their personal risks in investing their time, efforts and money on industry-specific training and career. This is quite evident in the following two comments. “We see it so that the more there are entrepreneurs within the industry and firms, in the long run it will help us. One thing is that we will be able to get employees that have worked within the industry. If they want to work in a larger firm then we are an option.” (Alpha) ”Some of the other firms here have had to scale down and the first thing that the HR managers do is they call us and say that these kinds of skilled employees would be available. The overall goal is to keep the people within the industry.” (Alpha) As there are more firms, the employees will also have more job opportunities. This also means that there is more demand for skilled workers and they can be persuaded to change jobs. This can cause some tension between the firms, but at least some of the interviewees saw also positive potential in such circulation. ”I think it’s good that they get to see new things and develop their skills. Perhaps one day they will come back here to a higher position. I don’t see that as a bad thing, but is it punished? I guess some people would like to do some arm-twisting at some cocktail party.” (Epsilon) Critical mass of game firms can also make the industry appealing for prospective investors. The mass will increase the general credibility of the industry and realised success stories can serve as best case scenarios. Critical mass of the Finnish games firms can also serve as a collective track record towards international publishers and operators. Coming from Finland will certainly not get you a publishing deal, but it might open the door for the first meeting.

4.2 Group selection

The definition of an industry through common selection mechanisms is not particularly suitable to the Finnish games industry. The main problem is that competition at the national level is scarce. In addition, many interviewees stated that they do not have any direct competitors abroad either. However, all the representatives of the firms see the firms as belonging to the same industry. This is especially visible in the lively communication and information exchange between the firms. It seems that the firms have a collective search function and selection mechanisms operate more on a group level. Saviotti and Pyka (2004) state that as competitive selection between populations or sectors is fiercer than that within a population or sector, the conditions are optimal for the creation of variety. Additionally, variation generation is seen as a prerequisite for economic growth and progress. The findings from the Finnish games industry support this line of thinking since the firms do not compete with each other, but find niches where they can protect themselves from fierce rivalry. This is evident in one comment. "We just operate in a niche within the ecosystem that is different from those on many other firms." (Beta) By finding these niches they create variety which enables them to continue to specialise. This means that the surface on the space of potential content and technology that the population covers is continuously spreading. Group selection is in conflict with selfish maximisation (Bergstrom 2002). However, this line of thinking has both a short-run and a long-run aspect. In the short run the firms could not care less about the survival of other firms, but they concentrate solely on putting off acute fires, such as finding the money to pay the monthly wages. In the long run the firms see the benefits of group thinking. They see that there is selection pressure at the group level as the industry has to compete with other forms of spending free time. At the national scale an important driver for group selection is the institutional setting within which all the games firms have to operate and which they try to collectively change. One example of such lobbying successes is the recently started game development education programme at the secondary level.

4.3 Collective search Each firm has a search function according to which it explores new possibilities and alternatives to be applied in the future. According to Cyert and March (1992) such search is problemistic. This means that firms would not continually search for better ways to do things or new things to do, but they will start the search only once the old way presents a problem. Thus, search is triggered by encountered problems and not by some inner motivation for continuous bettering. However, as it was stated earlier, in a knowledge-based creative industry the creation of new variety is fundamental. It is the inevitable problems that would follow from failing to find novelty to be included in the products in the future, that makes search an every day activity. Information exchange means that the firms are not forced to execute a purely trial-and-error type of a search function. The search functions of the games companies are linked to each other because of the ongoing information exchange between the people of different games companies. This kind of communication allows the firms, first of all, not to make the same mistakes that someone has already made, but also to find potential directions towards which to head. This kind of communication also enables efficient exploitation of existing niches and also the avoidance of fierce rivalry. Communication among the firms allows the evaluation of more alternatives since more of those are known. Also, communication may allow the identification of attributes and aspects that might not be considered if they would not have been discussed with others that have different backgrounds and experiences. The following two comments illustrate the nature of information exchange that leads to collective search. “A very large part of very good ideas emerge in such discussions.” (Gamma) “And a large part is contemplating. We puzzle over what is happening next.” (Epsilon) Although it is often thought that such an active information exchange would lead to the concentration of the population, it is not the case here. Naturally, there is also such me-too type of decisions, but the overall picture is more characterised by finding out what the others are doing in order to avoid doing the same thing.

5. Conclusions

When asked why they exchange information the managers stated that they do it for altruistic reasons. However, that cannot be the only reason. Another reason could be that this way they can build their personal reputation within the industry and also get personal satisfaction from being able to share their knowledge. This is supported by the interviewees' eagerness to tell the interviewer about the business. However, the reasons cannot lie only at the personal level, since the information exchange is often done within working hours. Thus, there must be some kind of motivation also at the firm level. The managers of the firms must see the benefits in engaging in such collective efforts. They either do not want to admit it or the motivation forms at an unconscious level. By finding out what the others are doing they can avoid highly competitive areas and find uninhabited niches. However, the motivation for telling others what they are doing is a trickier matter. One explanation is that getting the word out on what they are doing might help in getting a good publishing deal. Another one is that in order for the others to play along you also have to pass the ball. This means that sharing the information that you have is a ticket to getting the information that others have. Collective efforts may also arise from the idea that the industry in question is seen as a group abroad. This then means that competitive advantage has group characteristics. By acting as a group in attracting skilled employees and investors, as well as in searching for new possibilities, the firms may achieve more than by flying solo. This also applies to changing the institutional setting under which they have to operate. From a complexity perspective the lesson here is that connections consisting of knowledge or information are important for the functionality of the system and they enable the development of the system towards better fitness. The development consists of self-organisation, since the participation and contribution of each firm is their own decision and the direction arises bottom-up. This informal network is fundamentally an emergent structure that is in a continuous flux of change.

6. Acknowledgements This paper was produced within the TIP Research Programme (Knowledge and Information Management in Knowledge Intensive Services). The projects in the programme explore theories of complex adaptive systems and their various interpretations and apply them to the study of socioeconomic systems. The programme is funded primarily by The Finnish Funding Agency for Technology and Innovation, and the research is conducted at the Institute of Business Information Management at Tampere University of Technology. Professor Marjatta Maula is the director of the programme. I would like to thank all the representatives of the games firms that I have interviewed regarding my research project and also the staff of Neogames: The Centre of Game Business, Research and Development for helping me in directing the study towards interesting issues.

References
Bergstrom, T. C. (2002) "Evolution of social behaviour: individual and group selection", Journal of Economic Perspectives, Vol 16, No. 2, pp67-88.
Cyert, R.M. & March, J.G. (1992) A Behavioral Theory of the Firm, Second edition, Blackwell Publishing, New Jersey.
Dosi, G., Nelson, R.R. & Winter, S.G. (2000) "Introduction: The nature and dynamics of organizational capabilities" in The Nature and Dynamics of Organizational Capabilities, Dosi, G., Nelson, R.R. & Winter, S.G. (Eds), Oxford University Press, pp1-22.
Metcalfe, J.S. & Foster, J. (2004) "Introduction and overview" in Evolution and Economic Complexity, Metcalfe, J.S. & Foster, J. (Eds), Edward Elgar, pp3-23.
Murmann, J.P. (2003) Knowledge and Competitive Advantage: The Coevolution of Firms, Technology, and National Institutions, Cambridge University Press.
Nelson, R.R. & Winter, S.G. (1982) An Evolutionary Theory of Economic Change, The Belknap Press of Harvard University Press.
Ray, T.S. (1999) "Evolution and complexity" in Complexity: Metaphors, Models, and Reality, Cowan, G.A., Pines, D. & Meltzer, D. (Eds), Westview, pp161-176.
Saviotti, P.P. & Pyka, A. (2004) "Economic development by the creation of new sectors", Journal of Evolutionary Economics, Vol 14, No. 1, pp1-35.


Knowledge Reuse in Creating Audit Plans

David Peto
Corvinus University of Budapest, Department of Information Systems, Hungary
[email protected]

Abstract: In this research the question of knowledge reusability in creating more reliable IT audit plans has been investigated. With the use of appropriate simulation techniques and statistical analysis, it has been proved that the explicit use of self-reflection in IT auditing enables more precise audit plans, and therefore the execution might become more effective. This self-reflection means that auditing methodologies depend largely on the results of previous examinations of certain areas. In fact, the most widespread methods and guidelines are also based on the experience gained through previous examinations. If the results gathered in this way are used, more precise audit plans can be made and the designation of the areas to be examined can become more accurate. If the fact that audit methodologies are primarily based on practical experience is accepted and explicitly formulated, then, with the use of the information acquired in previous audits, better and more precise audit plans can be created. In other words: the assignment of control objectives in certain situations of examination can be done based on the experience from previous audits. Additionally, the audit plans created in this way enable the cost-effective execution of audits, without sacrificing accuracy and reliability. The results of the simulation confirm these statements.

Keywords: IT audit, COBIT, knowledge reuse

1. Introduction The purpose of IT audit reports is to inform the company’s executives of the revealed situation, let them know of the possible deficiencies, and preferably to offer ways of solution. Obviously though, the decisions have to be made by the executives themselves. It is a common and serious problem that managers are not provided with all the necessary information to make their decisions regarding information technology issues. This is true in spite of the fact that audit reports present a description of the areas with higher risks, the risk factors in these fields and often the possible solutions as well. Audit reports do not help in the decision which areas the limited resources should be invested in for effective treatment. Decision-makers often make the allocation of different resources in an ad hoc manner to cure the diverse problems. In their decision, they mainly rely on their previous experiences. Another problem, although its cause is basically the same, is that the results achieved by different audit processes, especially the ones regarding risk levels, are not comparable with each other. As there is no commonly agreed regulation for the assessment of risks, the evaluation is usually made in a highly subjective manner (Ozier 2003). Thus, even if numerical indices are available concerning certain areas, they cannot be compared to other cases, as another auditor might reach different results, even if the method and the investigated problems are the same. The risk levels in the results of different audits made in different periods in the same organization, or different companies in the same industry are not comparable. According to the assumption made in this research a metrics that is based on a widespread methodology and that secures more precise measurement and comparability of different risks, helps in optimizing corporate resource-allocation in the areas involved, and – thanks to this and the benchmarking capabilities – enhancing the efficiency, numerical representation and verifiability of company decisions based on the audit results. An additional achievement is that by calculating the risk levels more precisely the results of previous audit processes can be used to more accurately delimit the areas of interest. Therefore auditing knowledge might be reused in order to make more specific audit plans. This paper describes the steps taken to verify this assumption and also its consequences regarding knowledge reusability.


2. The usability of knowledge in auditing 2.1 The goals of auditing The main goal of an information technology audit, similarly to other methods of supervision, is to examine compliance. Thus, to check whether the processes, control and operation of the inspected areas comply with some kind of predefined regulations. Therefore, there are only two kinds of results of an audit process: complying or not complying. The intention in regulating corporate operation is to restrain the different operational risks in order to achieve the strategic goals. Obviously to make compliance measurable in a suitable way, controls must be built into the company’s processes. In our case these are derived from the Control Objectives of COBIT, the methodology that has been used as a basis for the research. But the question that is one of the key issues of this research arises: to what extent does the appropriate selection of controls (control objectives) enhance risk reduction? Do the appropriate narrowing down of the area of focus or the amount of questions that have to be examined result in the cost-effective reduction of risks derived from corporate IT? If the knowledge gained in previous audits is used to articulate the self-reflection of the assessment system, does it help in selecting the right control objectives?

2.2 Audit plans based on previous experiences We might presume that the use of risk assessment metrics during IT auditing contributes to the optimization of the allocation of corporate resources. Executives responsible for IT governance are in a difficult position when they have to decide on countermeasures against risks (Trites 2004). Without an appropriate measurement method it is hard to precisely determine the desirable use of resources. The metrics creates a chance to make optimal decisions on the use of resources. Our assumption was that if the self-reflecting nature of the execution of IT audits is formulated explicitly, the results can be reused to improve and more accurately specify the audit plans. Information technology audit is essentially based on previous experiences. Most methodologies (including ITIL, Common Criteria, COSO ERM and also COBIT) are actually a collection of best practices (ITIL 1989, CC 1999, COSO 2004). Therefore, the data from different audits is obviously worth to be used to more precisely define the assessment method. According to our assumption, the refinement can be carried out if the results of previous audits are used in an appropriate way.

3. Risk assessment metrics as a tool for knowledge reuse The primary goal of this research was the creation of a risk assessment metrics based on a widely spread methodology, which might be used in information technology auditing, and optionally the creation of a software system that might be of use in the audit process by providing support for the auditors. After the appropriate methodological funding and the choice of methods, the research has been mainly of practical nature, as on basis of the principal background, the assessment method was constructed, as well as the scaling and the tool that provides the necessary support for the users. There were several prerequisites of the research. First, a comprehensive collection of the possible risks had to be created that could be used as the foundation of the assessment. Second, the appropriate measurement and ranking method had to be shaped, namely the metrics that is capable of the evaluation of the risk factors and the totalling on certain areas. As the goal was to create a method that can be used in many areas, the definition of risks also had to be as wide as possible. To reach this goal an audit methodology had to be selected that is both widespread and detailed enough so the certain risks could be generated with its use in a direct or indirect way.


There is only one comprehensive audit methodology that fulfils the above criteria, which is accepted by most experts, covers the most possible areas, but at the same time is suitable for the deconstruction so the risk factors can be reached. This is the COBIT methodology, issued by ITGI and ISACA (COBIT 2000). Although there has been some criticism on the completeness of the threats to information integrity mentioned in COBIT (Boritz 2005), this is obviously the methodology that covers most of the areas in question. On the other hand, COBIT does not originally include such deconstruction that would allow the direct analysis of risks. Although it provides serious help in creating the control questions on risks, the extraction of actual risk factors from this standard needed further work. The other task was the creation of the metrics itself. The method is described below. To make the metrics functional, the calibration of the method had to be done. This was assured by the execution of several measurements and the recording and comparison of data. The operation of the index was tested with the use of Monte Carlo simulation.

4. Creating the index

The aim of the research has been to create a method that allows the reliable determination of the risk level of the examined company; thus, the making of a risk index that defines the risk level based on the data collected during the audits. In order to reach this goal, the widely used guidelines of COBIT were used as a research basis. In its construction, COBIT (3rd edition) contains 34 control objectives grouped into four domains. The control objectives cover practices to follow that are important for the information security and effective operation of the company. As a further specification, these contain more than 300 detailed control objectives, which specify and more precisely define the higher-level objectives. Although the 4th edition of COBIT has been published recently, the basic concept has not changed. According to its objective, COBIT covers every area related to corporate information technology; therefore the risk factors derived from it may be considered the most comprehensive possible. This is the reason why the detailed control objectives of COBIT were taken as a basis for the identification of risks in this research. COBIT makes the evaluation of control objectives possible only by assigning levels of 1 to 5 (0 in the case of non-applicable) to them, based on the capability-maturity models. This results in a variable that is measurable only on an ordinal scale, which is not appropriate for calculating averages or other statistical indices. For the sake of easier usage, these evaluations can be taken into consideration in such a way that the risk linked to the control objective either raises or lowers the risk of the company (or some of its parts). The main concept of the risk assessment method is the following: the auditor assigns the capability-maturity levels regarding the individual factors in the area in focus (there have been attempts to create metrics based on this concept (Jelen 2000)). Relying on these, the decision can be made whether a given factor raises or lowers the risk level. As a starting point, the acceptance threshold, namely the line between raising and lowering, is 2.5, which is only used as a parameter in the model. Based on these data, the risk level of the investigated area can be defined with the use of a suitable algorithm. The most straightforward algorithm is the calculation of a simple mean. In this case the values of +1 and -1, which show the contribution to the risk level, are added. Obviously, this method is not capable of supplying refined data and it is not useful in practice, as one cannot assume the equal importance of all factors. With the use of this simple method, it is inevitable that factors which are obviously of different importance in real life cancel each other out. With a method like that, it is impossible to define the areas on which resources have to be concentrated, as all problems appear to be of the same severity.
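A minimal sketch of this first, unweighted aggregation step is given below. The 2.5 acceptance threshold is the model parameter mentioned above; the function names and the sample ratings are illustrative assumptions, and the special treatment of the rating 0 (not applicable) is omitted for brevity.

```python
# Sketch of the simplest aggregation: capability-maturity ratings (0-5) are
# converted to +1 (lowers risk) or -1 (raises risk) at the acceptance
# threshold, and the converted values are averaged without weights.
# The special rating 0 ("not applicable") is not treated separately here.

THRESHOLD = 2.5   # acceptance threshold, used only as a model parameter

def convert(rating, threshold=THRESHOLD):
    """Map a rating to +1 (above the threshold) or -1 (at or below it)."""
    return 1 if rating > threshold else -1

def simple_mean_index(ratings):
    """Unweighted average of the converted +1/-1 values."""
    converted = [convert(r) for r in ratings]
    return sum(converted) / len(converted)

# Hypothetical ratings for a handful of detailed control objectives.
print(simple_mean_index([4, 1, 3, 0, 5, 2]))   # -> 0.0
```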


As a result, the introduction of importance weights was also necessary. With this method, which appears in most of the known risk assessment tools as well, it is possible for the auditor to consider the different importance of the individual factors. There are several methods to assign the weights (Hwang-Shin-Han 2004); the choice between these is not part of this research. To use this method, the person carrying out the audit is also expected to allocate appropriate weights, so that a weighted average can be calculated. In the research, the assignment of the weights can rely on the scenarios: during the examination, sets of control objectives were defined that determine the areas to be analyzed in certain industries (e.g. banks, manufacturing, etc.). The weights of the examined areas are also expected to be different in these cases.

4.1 Mapping the interactions

At the same time, the allocation of weights does not solve another important problem: the interaction of risk factors. During the research, the conclusion was drawn that the assessment of risks can be much more precise if the factors are not regarded as independent, but their relationships are also taken into account. To reach this goal, the effects of the coexistence of two simultaneous factors had to be mapped. For example, it could be defined how the overall risk index is going to be affected by the coexistence of two factors when the quality of the IT strategy plan is a factor raising the risks and the qualification of the personnel is a factor lowering the risks. Obviously, these estimations cannot be done in a totally faultless way. As there are no historical data regarding these questions, expert estimations had to be relied on. At the same time this is not opposed to the viewpoint of COBIT, as it is a collection of empirical knowledge and therefore its individual statements are not unquestionable. As a result of the detailed discovery work, an interaction matrix was created that contains these simultaneous effects (see Table 1).

Table 1: A section of the interaction matrix

(Each row and column of the matrix is indexed by a COBIT domain and control objective identifier, e.g. PO 1 to PO 11, AI 1 to AI 6, DS 1 and so on; each cell holds the first-order interaction value of the corresponding pair of risk factors, which can be -1, 0 or +1.)

On this basis, it was possible to develop the assessment procedure further. The risk index is determined in such a way that the capability-maturity ratings and risk factors defined by the auditors are used to select the relevant elements of the matrix, and the results are cumulated. The meaning of the individual cells of the matrix is made clear by Table 2.

Table 2: Legend for the interaction matrix

              PO 4
              +       -
AI 1    +     1       1
        -     0      -1

The upper left field shows the value that is assigned to the risk index when both factors in question perform in a positive way; in this example, their coexistence lowers the overall risk (the positive number means the raising of security, and therefore the lowering of risk). In the upper right field, the factor shown in the column on the left is positive and the one shown in the row on the top is negative. The other fields are filled in accordingly.
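To make the use of such a cell concrete, the sketch below stores one interaction cell as a small lookup keyed by the converted (+1/-1) values of the two factors. The factor pair (PO 4, AI 1) and the cell values follow the legend as reconstructed in Table 2; the data structure and function name are assumptions for illustration, not the author's implementation.

```python
# Sketch of looking up a first-order interaction value from one cell of the
# interaction matrix. Keys are the converted (+1/-1) values of the two factors;
# the values follow the legend in Table 2 for the pair (PO 4, AI 1).

interaction_cell = {
    (+1, +1): 1,    # both factors positive
    (-1, +1): 1,    # PO 4 negative, AI 1 positive
    (+1, -1): 0,    # PO 4 positive, AI 1 negative
    (-1, -1): -1,   # both factors negative
}

def interaction_value(sign_po4, sign_ai1, cell=interaction_cell):
    """Return the contribution of the coexistence of the two factors."""
    return cell[(sign_po4, sign_ai1)]

print(interaction_value(+1, -1))   # -> 0
```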

4.2 The algorithm of the index

The risk index is created from the appropriate elements of the interaction matrix by totalling and weighting. The totalling can be carried out using the following formula:

R = \sum_{i=1}^{n} w_i \, \frac{r_i + \sum_{j=1}^{n} r_{i,j}}{n}, \qquad \text{where} \quad \sum_{i=1}^{n} w_i = 1

In this formula R is the overall risk index, w_i is the weight of the individual risk factor, r_i is the converted value of the risk factor (-1 or +1) and r_{i,j} is the value created from the first-order interaction of the risk factors by the use of the above matrix (which may be -1, 0 or +1). Thus, the formula creates a weighted average of the risk indices, including the interactions as well. The value r_i is emphasised, as that is the direct contribution of the given risk factor to the cumulated risk level; in fact it is the self-interaction of factor i, which is nothing other than its own risk value. The benefit of the procedure is that the value of the index can easily be calculated for certain sub-domains as well, that is, for a subset of the overall risk consisting of some of the control objectives. Simply totalling these then creates the overall risk index. In case the auditor finds it hard to assign weights to certain areas with a total of 1, a transformation can be executed easily: it is possible to use practically any kind of weighting if the individual weights are divided by the total of the weight values, namely, if the weights are normalized. It might appear that during the cumulation each risk interaction is considered twice, but this phenomenon is compensated for by the use of appropriate weighting. The attribution of weights is done according to the weights of the parent factors given by the auditor, and during the cumulation the weights of the individual risk factors (control objectives) are used.
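A compact sketch of this cumulation is given below. It assumes that the converted values r_i, the importance weights and the pairwise interaction values r_ij are already available, that the inner sum runs over the other factors (with the factor's own converted value written separately, as in the text above), and that the function name and example data are purely illustrative.

```python
# Sketch of the R index: a weighted sum over the risk factors, where each
# factor contributes its own converted value r_i plus its first-order
# interactions with the other factors, scaled by n (the number of factors).

def risk_index(r, interactions, weights):
    """r: converted +1/-1 values; interactions[i][j]: r_ij values in {-1, 0, +1}
    (the diagonal is unused, since the self-interaction equals r_i itself);
    weights: importance weights, normalised here so that they total 1."""
    n = len(r)
    total = sum(weights)
    w = [x / total for x in weights]
    index = 0.0
    for i in range(n):
        inner = r[i] + sum(interactions[i][j] for j in range(n) if j != i)
        index += w[i] * inner / n
    return index

# Tiny hypothetical example with three risk factors.
r = [+1, -1, +1]
interactions = [[0, 0, 1],      # diagonal entries are placeholders
                [0, 0, -1],
                [1, -1, 0]]
weights = [2, 1, 1]
print(risk_index(r, interactions, weights))   # -> 0.25 for this example
```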

5. The simulation experiment Because of the lack of relevant data, Monte Carlo simulation was used during the research. The simulation and the generation of the results were carried out in several steps.

5.1 Scenarios First, four different scenarios were formulated, representing typical audit situations: the examination of a bank, a manufacturer, a service company and a software development firm was modelled, with the detailed control objectives to be examined identified on the basis of practical experience. The creation of these scenarios is important for the goals of this research because the different risk assessment methods can be distinguished by their usability in diverse auditing situations. With the help of the scenarios, further peculiarities specific to the individual areas can be observed as well.

5.2 Random samples Next, 500 random samples were created in order to represent the capability-maturity values defined in the auditing process. A uniform distribution of the values was assumed when creating the random numbers, which means that each of the evaluation levels (measured on a scale of 0 to 5) had the same chance of appearing in the sample. Naturally, in further research it is possible to change the distribution and carry out further analyses. Random numbers were generated for the scenarios as well. In order to ensure the comparability of the results, the same cases were used; thus each of the 500 cases used in the scenarios is a subset of the random values created for the whole set of control objectives.

5.3 Conversion In the next step the evaluations were transformed into the values +1 and -1, where +1 stands for increased security and -1 for decreased security, and therefore a higher level of risk. There were two reasons for this conversion: first, values measured on an ordinal scale obviously cannot be used directly to create numerical values such as averages; second, the concept of the research was to separate the factors according to whether they raise or lower overall risk. Thus, the set of simulated data is divided into two groups by an acceptance threshold. In the simulation this threshold was 2.5, the middle of the range of values. As it is only a parameter of the model, it can be changed in further research. The threshold was set at that level because it allocates the uniformly distributed variables into two groups of the same size. However, if the distribution is changed, shifting the acceptance threshold might be needed. This is also worth considering because the value 0 is a special measure in capability-maturity models, as it stands for "not applicable".
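A minimal sketch of how the sampling and conversion steps could look in Python, assuming integer maturity levels drawn uniformly from 0 to 5 and the 2.5 acceptance threshold described above; the number of control objectives and all names are illustrative assumptions, not values taken from the study.

import random

LEVELS = range(0, 6)     # capability-maturity scale 0..5
THRESHOLD = 2.5          # acceptance threshold used in the simulation
N_CASES = 500            # number of simulated audit cases
N_OBJECTIVES = 34        # assumed number of control objectives (illustrative)


def draw_case(n_objectives=N_OBJECTIVES):
    """One case: a uniformly distributed maturity level per control objective."""
    return [random.choice(LEVELS) for _ in range(n_objectives)]


def convert(levels, threshold=THRESHOLD):
    """Convert maturity levels to +1 (above the threshold, raising security)
    or -1 (at or below the threshold, raising risk)."""
    return [1 if level > threshold else -1 for level in levels]


cases = [draw_case() for _ in range(N_CASES)]
converted = [convert(case) for case in cases]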

5.4 Calculating the risk indices The following step was the creation of the risk indices from the generated and transformed values. In order to do that, uniformly distributed weights were assigned to the factors and normalized to total 1. Different weights were attributed to each of the cases, therefore 500 different sets of weights were used. Four different indices were created in the research:
1. For the mean, the +1 and -1 values were simply averaged in each of the cases.
2. For the weighted average, the +1 and -1 values assigned to the control objectives were averaged with the use of the constructed weights.
3. For the index created with respect to the interactions, the respective elements of the interaction matrix (that is, the intersections corresponding to the +1 and -1 values of the control objectives) were averaged.
4. Finally, the R index, the goal of the research, was created. This index considers both the interactions and the different weights of the control objectives.
The calculation of the risk indices was made for each of the cases in the sample of 500 for all of the control objectives (thus, the risk factors), and also for the different scenarios. In this way, 500*5*4=10000 index values were created.
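The four variants could be computed per simulated case roughly as follows. This is only a sketch: the exact element selection and normalization used for the third variant is not fully specified in the text, so the averaging shown here is one plausible reading, and it assumes the interaction values have already been selected for the case by means of the legend in Table 2.

def simple_mean(r):
    """Index 1: plain average of the +1/-1 values."""
    return sum(r) / len(r)


def weighted_mean(r, weights):
    """Index 2: weighted average of the +1/-1 values (weights total 1)."""
    return sum(w * x for w, x in zip(weights, r))


def interaction_mean(interactions):
    """Index 3: average of the interaction-matrix elements selected for the case."""
    n = len(interactions)
    return sum(sum(row) for row in interactions) / (n * n)


def r_index(r, weights, interactions):
    """Index 4: the weighted, interaction-aware R index of section 4.2."""
    n = len(r)
    return sum(weights[i] * (r[i] + sum(interactions[i])) / n for i in range(n))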

5.5 Statistical analysis In the final step, the statistical analysis of the cumulated risk indices was carried out for the risk factors in the whole of the sample and also in each of the scenarios. With the help of these analyses, it was possible to compare the different assessment methods concerning their basic attributes and also to verify the examined hypotheses. The statistical indices were created using SPSS software. The histograms illustrating the behaviour of the respective indices strongly support the analysis.

6. Results The main results of the simulation experiment are the following:


The average values of the indices created with respect to the interactions (the expected values of the variables) are shifted towards the negative values. This means that, by the use of the index suggested here, the risks of the organization in question might appear bigger than in the case of simple averaging; the shift towards the negative direction means that the value of the security index is lower. This is natural, and it reflects one of the main principles of auditing: prudence. It is a consequence of the fact that, in cases where it was hard to decide on the effect of an interaction, negative values were preferred in order to be safe. This can also be seen by totalling the elements of the interaction matrix, as the result is a negative number.

The variation of the indices created with respect to the interactions is higher than in the case of the simple averages. This additional variation, calculated on the basis of the matrix extended with first-order interactions and compared to the basic situation, is generated by the simultaneous occurrence of risk factors. In this research, only the first-order relationships could be analysed. The additional variation generated by the second and higher order interactions could also be analysed one by one, but this is beyond the limitations of the present research. This is why Monte Carlo simulation had to be employed: it allows the estimation of the effects of higher order interactions, and therefore of all further indirect impacts. The operational strategy for the moderation of risks and the goals of the IT function can be based on the intention to lower the additional variation discovered in the way described above. The importance of the method introduced in this paper is its capability of identifying such strategic focus points in addition to the explicitly formulated primary risks, which are impossible to discover without this approach.

By analysing the results of the simulation, it can be stated that the positive or negative sign of the index considering the risk interactions is very seldom different from that of the simple average, and only in cases with values close to 0. At the same time, the size of the indicated risk might be considerably different, depending on the case. Thanks to this, the method is capable of drawing attention to special cases and of orienting the auditor so that the simultaneous effects of the individual risk factors can be estimated.

While the use of weights lowers the variation in the case of the indices that do not take the interactions into account, the variation of the indices considering the relationships is bigger than that of the unweighted methods. The method is thereby capable of giving even more importance to cases that differ from the usual and of drawing attention to hidden relationships (see the histograms). Indirectly this verifies that there is a procedure capable of actively managing and articulating hidden relationships. The weighting also expresses the relative importance of certain factors and the amount of threat they pose at certain moments.

Figure 1: The distribution of weighted averages in the sample containing all the control objectives


The range of the results (the distance between the minimum and maximum values) does not change, or gets larger, with the insertion of the interactions (also visible in the histograms). Therefore, the suggested method creates the opportunity to draw even more attention to the cases that differ from the average.

The effect of the weighting of risks is smaller than that of the consideration of the interactions. Thus, the difference between the totals of the weighted and unweighted cases is smaller than the effect of taking the interactions into consideration.

Figure 2: The distribution of the R index in the sample containing all the control objectives
Comparing the suggested index with the simpler methods shows that the index considering the interactions usually differs even more in the examination of the scenarios than in the case of the whole set of control objectives. This confirms that, where less information is available, the importance of this method in determining the appropriate measure is even bigger. In the case of the scenarios, the avoidance of individual risks is less important than the consideration of their simultaneous effects. With the use of the index that is the result of this research, the critical coexistences that influence corporate risk are easier to spot. Because of the construction of the index, in the extreme cases (e.g. when all factors raise or all lower risk) there is no difference between the resulting values; at the same time, in the cases in between, which are much more likely in real situations, the shift can be considerable.

6.1 Confirmation of the assumption The practical meaning of our assumption is that if we exploit the fact that audit methodologies are primarily based on practical experience, then, with the use of the information acquired in previous audits, better and more precise audit plans can be created. In other words: the assignment of control objectives in certain examination situations can be based on the experience of previous audits. Additionally, the audit plans created in this way enable the cost-effective execution of audits without sacrificing accuracy and reliability. The results of the simulation confirm that the index created in the described manner is capable of measuring risks appropriately. As the creation of a risk index that takes the interactions into consideration succeeded, the self-reflecting quality of auditing proved usable in creating the audit plan. Therefore our assumption is confirmed.

7. Knowledge reusability conclusions As it is possible to create a cumulated risk index that considers the simultaneous effects of individual risk factors, a new method for the assessment of corporate risks becomes available. With the use of such metrics, previously unidentifiable risks can be brought to the fore. In some cases, areas that remained hidden when using traditional methods can now be considered higher-risk areas that need further investigation. All this makes it possible for corporate management to get a better and more accurate picture of the information technology risk level of the organization. Because it considers the relationships between risk factors, the suggested index is more capable of a comprehensive assessment of larger areas, that is, of ranges consisting of several sources of risk in the company. This can be a tool in the hands of management that allows the correction of strategy on a more objective basis. It has become clear that the results of previous audits can be used to make more accurate and more purposeful audit plans. If the already examined areas and the relations between them are taken into consideration, it is possible to set up scenarios that employ the interactions of individual risk factors and their effects on overall risk. This also enables a more accurate designation of the critical areas of the examination. In this way, it is possible to create better audit plans that are easier to execute than previous ones. If these data, the results of auditing, and their confirmation by indices are available, it becomes possible for corporate management to distribute the resources related to information technology optimally. The limited assets of the company can be used in such a way that IT risk management receives the greatest possible benefit. It has to be noted that the exploration of the results alone is not enough to realize the advantages mentioned above. In order for the executives to be able to interpret the results, it is necessary to bring them into a format that is understandable to them, to "translate" them into the appropriate language. Therefore the tasks of the auditors do not end with creating the risk index. It is a further duty to put the results into an appropriate context, providing support for corporate executives in the interpretation. It has been confirmed that, by employing the suggested index, it is possible to identify IT-related and strategically important areas that could not be defined with the use of traditional methods. This is primarily made possible by the fact that the consideration of the joint effects of risk factors enables the perception of coexistences that are important from the corporate strategy point of view but were impossible to discover due to the low dimensionality of traditional risk indices.

References
Boritz, J. E. (2005) "IS practitioners' views on core concepts of information integrity", International Journal of Accounting Information Systems, Vol. 6, Issue 4, pp. 260-279.
CC (1999) CSE-SCSSI-BSII-NLNCSA-CESG-NIST-NSA: Common Criteria for Information Technology Security Evaluation.
COBIT (2000) COBIT Framework, 3rd edition, IT Governance Institute, Rolling Meadows.
COSO (2004) Enterprise Risk Management – Integrated Framework, Executive Summary, COSO, Jersey City.
Hwang, S-S., Shin, T. and Han, I. (2004) "CRAS-CBR: Internal control risk assessment systems using case-based reasoning", Expert Systems, Vol. 21, Issue 1, pp. 22-33.
ITIL (1989) IT Infrastructure Library, Central Computer and Telecommunication Agency, London.
Jelen, G. (2000) SSE-CMM Security Metrics, [online], NIST and CSSPAB Workshop, Washington, http://csrc.nist.gov./csspab/june13-15/jelen.pdf
Ozier, W. (2003) "Risk metrics needed for IT security", IT Audit, Vol. 6.
Trites, G. (2004) "Director responsibility for IT governance", International Journal of Accounting Information Systems, Vol. 5, Issue 2, pp. 89-99.


Knowledge Logistics to Support the Management of Organizational Crises: A Proposed Framework
Stavros Ponis1, Epaminondas Koronis2, Ilias Tatsiopoulos1 and George Vagenas1
1 National Technical University of Athens, Greece
2 Warwick Business School, University of Warwick, UK
[email protected] [email protected] [email protected] [email protected]
Abstract: In the complex environment of late modernity, organizations, large and small, are challenged by corporate crises more than ever. Thus, not surprisingly, Crisis Management has gained additional attention during the last two decades. Yet, the knowledge aspects of crisis management theories and the role of effective knowledge retrieval and sharing in the processes of crisis prevention, management and survival have remained relatively unexplored. In this paper, a generic knowledge-based framework is proposed to address the increased knowledge needs of organisations during a crisis and to help management in establishing the necessary risk-avoiding and recovery mechanisms. We combine the crisis management and knowledge management literature and provide a thorough specification of the knowledge activities of crises. Both the specified knowledge processes of crises and the proposed framework have been tested and applied in a pharmaceutical company, yielding empirical results and evaluations.
Keywords: Knowledge management, organizational crises, knowledge logistics, pharmaceutical industry

1. Introduction In today's media-driven and highly networked society, organizations face the challenges of crises more than ever. Organizational crises, symbolic or real, can escalate exponentially, affecting people and assets, damaging reputations, overturning market dominance and undermining the prosperity and viability of the organization as a whole. In an attempt to provide guidelines for managers and organizations, analysts have developed strategic management frameworks, methodologies, tools and technologies that support managers in dealing with crises; these efforts have multiplied and improved significantly in the last two decades. An elaborate study of the available literature shows that, although a large number of useful guidelines for preparing for and managing crises are provided, the practical role of efficient and effective knowledge management in supporting the execution of these guidelines is severely under-explored. The lack of a knowledge-based view of crises and of studies of such considerations is rather surprising, taking into account that crisis management is a heavily information- and knowledge-dependent discipline (Coombs, 1999). The aim of this paper is to discuss the knowledge implications and aspects of crisis management and to propose a knowledge-based crisis management framework. Given that the necessity for efficient and effective knowledge is amplified in conditions of crisis, and that crisis management requires the rapid transfer of crisis-related domain knowledge to the physical location and to the people that are handling the crisis, we believe that the integration of the Crisis and Knowledge Management disciplines, presented with this framework, can provide interesting and useful results. The paper is structured as follows. In section two we provide a literature review of the two theoretical poles of our study, these being Crisis and Knowledge Management, followed by a critical examination of the added value of integrating knowledge logistics guidelines into crisis management activities. In section three, the elaborated knowledge-based crisis management framework is presented. In examining such theoretical issues, we draw from a case study of a crisis in a pharmaceutical company, which is elaborated in section four. Finally, in section five, conclusions and opportunities for further research created by the efforts in this paper are presented.


2. The knowledge management of crises 2.1 Crisis management In the complex environment of late modernity, organizations, large and small, are challenged by corporate crises more than ever. The literature is full of examples of business crises, including, amongst others, the well documented Exxon Valdez oil spill (Ott, 2005), the racial discrimination case at Texaco (Bowen, 2000), the incidents of glass found in Gerber baby foods (Rothchild, 1998) and the bankruptcy of Orange County (Baldassare, 1998). In spite of the rising interest in crises and corporate scandals, a common definition has not been stated yet (Pearson & Clair, 1998). According to Arpan & Pompper (2003), a crisis is an unpredictable major threat, which might produce negative effects and harm organizational legitimacy and reputation if it is improperly handled. According to Brewton (cited from McMullan, 1997), a crisis should have "some or all" of the following features: (i) severe disruption of operations; (ii) negative public perception of the company; (iii) financial strain; (iv) unproductive use of management time; and (v) loss of employee morale and support, usually in this order. Carley (1991) also pointed out that a crisis is a multi-phase organizational phenomenon that is relatively short; should the crisis continue for an extended period, it would not be a crisis but a general problem. Shrivastava and Mitroff (1987) suggested different types of corporate crises, pointing out that each crisis results from organization-environment interactions of socio-technical factors. Quarantelli (1988) argues that there are community crises, which are generated by natural or technological agents ("disasters") and by conflict-type situations such as wars, civil disturbance, riots etc., and non-community kinds of disaster crises, such as most transportation accidents that do not impact the functioning of the community. According to the Crisis Resource™ (2005), class actions, product recalls and accidents remain the key reasons for organizational crises, while symbolic crises and stakeholder-driven ones become increasingly frequent. A key concern of crisis management analysts is the recognition and description of the various stages of crises. Fink (1986) divided the "anatomy" of a crisis into four stages: the prodromal crisis stage, the acute crisis stage, the chronic crisis stage, and the crisis resolution stage. In fact, some analysts developed the idea that it is the multiplicity and evolution of an event that constitutes a crisis. In this paper we draw from Coombs (1999) and understand crises as evolving through three main phases: a) the Pre-Crisis phase, b) the Crisis Event phase and c) the Post-Crisis phase. The study of the literature shows that each school of Crisis Management analysts has tended to focus on particular stages of crises. For example, Mitroff et al. (1989) placed emphasis on the pre-crisis processes and the prevention of crises, arguing that organizations can be considered as being either "crisis-prone" or "crisis-prepared". On the other hand, other scholars, particularly communication school theorists, developed managerial frameworks for the management of a crisis when it happens and afterwards, providing normative views on how the organizational image should be managed during a crisis and how it can be rescued or restored (see Benoit, 1995; Heath, 1994; Coombs, 1999).
Combining these approaches, we would argue that crisis management involves the organizational processes related with the preparation for crisis, identifying a crisis, planning a response to the crisis and confronting and resolving the crisis while minimizing the effects for the organization.

2.2 Knowledge management Despite the historical background of knowledge theories (see Drucker, 1964; Marshall, 1965), the perception of knowledge and of its effective handling and utilisation as an independent, institutionalized management function has become common only recently, after the emergence of new concepts such as the Learning Organisation (Senge, 1990) and Knowledge Management (Drucker, 1988; Nonaka, 1991). Nowadays there is an unambiguous recognition by academics, researchers and practitioners of the importance of knowledge as a critical resource for organisations (Foucault, 1980). In that context, managing knowledge has become an imperative in order to survive in this new, unstable and fierce business environment (Davenport et al., 1998). More recent efforts have shown that knowledge and its effective management are major factors in determining the level of an organisation's performance and its degree of competitiveness (Holsapple and Jones, 2004). According to Metaxiotis et al. (2005), one can distinguish between three generations of Knowledge Management (KM). The first spans the period between 1990 and 1995 and includes initiatives focusing on defining KM, investigating its potential benefits and designing specific KM projects (Nonaka, 1994; Wiig, 1993). The second generation of Knowledge Management (1996-2002) was initiated by many corporations setting up new jobs for KM specialists and "Chief Knowledge Officers". During this generation, KM research touched on knowledge definitional issues, business philosophies, systems, frameworks, operations, practices and advanced technologies (Metaxiotis et al., 2005). Nowadays, knowledge management is going through the third generation of its evolution, which is characterised by efforts to connect knowledge with action (Paraponaris, 2003). In this context, Knowledge Logistics is a new direction of knowledge management addressing the issue of decision support by delivering necessary and timely information to knowledge management systems. In doing so, knowledge is generated, passed on, used and, in turn, contributes to its own re-generation. In order for this to happen, intensive cooperation and an open, real-time knowledge exchange between participants in the global information environment are required, so that the right knowledge from distributed sources can be integrated and transferred to the right person within the right context at the right time for the right purpose. The aggregate of these interrelated activities will be referred to as Knowledge Logistics (KL) (Smirnov et al., 2003).

2.3 Knowledge intensive crisis management In this paper we argue that a potential integration of the Crisis and Knowledge Management disciplines would produce interesting theoretical results and meet the current needs of practitioners. Crisis Management literature has already placed emphasis on "crisis" as a situation in which important decisions have to be made and where management problems have to be confronted under conditions of major technical emergency (Turner and Pidgeon, 1997). Given that a crisis is a situation that obliges the organization to make quick and efficient decisions under conditions of uncertainty and complexity, the role of information and knowledge is extremely important. Coombs (1999) provides an ontological and methodological approach to crises focusing on the information and knowledge management aspects of crisis management. He argues that ultimately "crisis management is a process of moving from the unknown to the known through information gathering and processing" (ibid: 99). In this respect, getting hold of the right information, at the right time, at the right place and, if possible, at the right cost is critical for the success of managing any crisis situation (ISCRAM, 2006). An analysis of the cases of organizational crises in the literature reveals the knowledge dynamics of crisis management. For instance, in the case of the boycott of Nike by black minorities (see Jackson, 1993) it becomes evident that the lack of an efficient response supported by the appropriate knowledge undermined the validity of the corporation's claims and promises. More evidently, in the cases of Brent Spar (Tsoukas, 1999) and the recall of Firestone tyres (O'Rourke, 2001), an identified challenge for the organizations was to be able to regulate and control the construction of knowledge and its dissemination to the public. Other famous crises, like the Bhopal disaster or the Tylenol crisis, also show how a crisis is mainly a socially constructed phenomenon which requires the participation of the organization in providing information about the crisis, the consequences, the risks and its own contribution to the resolution of the crisis (Wallace, 2004). In the context of the research efforts described in this paper, organizational crises are regarded as events which produce additional challenges for knowledge management, mainly because complex, polymorphic and both structured and unstructured knowledge must be efficiently harnessed, processed and disseminated to the appropriate internal and external actors under specific time, media and environmental constraints. For example, it is essential for the actors involved in a crisis to possess knowledge of what happened, of the relevant regulations and of the team's views before being involved in the management of communication and the handling of the crisis. We conceptualize crises as knowledge intensive organizational phenomena and we explore the knowledge aspects of crises and crisis management. Such challenges have been revealed by Wang (2005), who proposed a strategic model for crisis knowledge management, but further studies, drawing from Knowledge Management, are required. An organisation's ability to react effectively to a severe crisis most, if not all, of the time depends on the existence of a mechanism to represent and process the explicit and implicit modes of all kinds of available crisis-related knowledge (procedural, reasoning etc.). Taking such mechanisms into consideration, this paper aims to provide an integrated knowledge-based framework, based on the existing efforts of Holsapple and Jones (2004) and Smirnov et al. (2003), which will support the effective management of a crisis during its life cycle. We argue that an integrated approach of knowledge logistics guidelines coupled with an existing, consistent crisis management framework could significantly improve an organization's readiness and efficiency in confronting both expected and unexpected crises.

3. Knowledge logistics of crises: An integrated framework A framework is defined as an integrated and consistent description of the major elements and concepts of a particular domain, together with their relationships and the principles that define the way in which these elements and concepts interact. The current literature distinguishes two types of framework: prescriptive frameworks, which provide general directions about the types of management procedures, and descriptive frameworks, which characterize or describe management activities. The framework presented in this paper is of the prescriptive type and includes a) a description of the major elements and concepts, and of the relationships between them, for both the Crisis and the Knowledge Management domains in the form of two interrelated ontologies, together with an abstract description of the necessary underlying knowledge repository structure, b) a procedural matrix of the knowledge intensive Crisis Management approach, which utilises ontology elements and integrates the two domains into a set of practical guidelines in the form of knowledge activities, and c) a graphic technique for documenting and disseminating knowledge activities that supports Crisis Management during the pre-crisis and crisis event phases of the crisis' life cycle.

3.1 Knowledge repository and ontology The core of any knowledge management system is the underlying knowledge repository. Repositories can have different structures and implementations depending on the purpose of the implemented system. For the needs of our approach we intend to use Smirnov's KSNet-approach repository (Smirnov, 2003). This repository is used for the storage of information about knowledge sources' characteristics, knowledge procedures, crisis management procedures, domain descriptions etc., and the relations between them. Smirnov (2003) identifies three different repository components: a) the semantic description component, which includes the library of ontologies, b) the service component, containing knowledge source and user profile information, and c) the physical component, used for the storage and verification of knowledge entered by experts, learnt from users, obtained as a result of the knowledge logistics process or acquired from knowledge sources which are not free, not easily accessible, etc. In the context of this paper we address issues related to the ontological structure of the repository, since our research still lies in the analysis phase and does not yet concern implementation issues. A semantic description component is used for knowledge representation and the creation of a common understanding of the domain terms and definitions. In doing so, this component includes all the necessary ontologies used to describe the domain terms and the correspondence between terms of different ontologies. In the context of this study, two different ontologies were created, one for each domain under study, these being Crisis Management and Knowledge Logistics/Management. Both ontologies were further specified in a declarative form by using OWL (Web Ontology Language). OWL is recommended by the W3C (World Wide Web Consortium) as a standard language for the Semantic Web. Figure 1 illustrates a snapshot of the latter ontology as shown in the Protégé ontology editor.


Figure 1: A Snapshot of the knowledge management/ logistics ontology Finally, these two core ontologies will be supported by a library of ontologies that includes a) the implementation of a set of application-based ontologies for each instantiation of the methodology in real-life business situations, b) a knowledge source ontology c) a request of service ontology and d) a tools, methods and practices ontology.
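As an illustration of how such OWL ontologies can be declared programmatically, the toy Python snippet below uses the owlready2 library to define a few crisis-management and knowledge-management classes and a property linking the two; the class names and the IRI are invented for the example and do not reproduce the ontology the authors actually built in Protégé.

from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; the real ontologies live in the authors' Protege project.
onto = get_ontology("http://example.org/crisis_km.owl")

with onto:
    # Crisis Management ontology terms (illustrative subset).
    class CrisisPhase(Thing): pass
    class PreCrisis(CrisisPhase): pass
    class CrisisEvent(CrisisPhase): pass
    class PostCrisis(CrisisPhase): pass

    # Knowledge Management / Logistics ontology terms (illustrative subset).
    class KnowledgeActivity(Thing): pass
    class KnowledgeSource(Thing): pass

    # Correspondence between the two ontologies: an activity supports a phase.
    class supportsPhase(ObjectProperty):
        domain = [KnowledgeActivity]
        range = [CrisisPhase]

onto.save(file="crisis_km.owl", format="rdfxml")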

3.2 Crisis management knowledge-based activities This section presents the procedural matrix of crisis management knowledge activities, which is based on the frameworks of Holsapple and Jones (2004) and on the analysis of Crisis Management typologies and anatomy introduced by Mitroff (1988), Pearson and Mitroff (1993) and Coombs (1999). According to Holsapple and Jones (2004), knowledge management includes the following primary activities: a) knowledge acquisition (from external sources), b) knowledge selection (from internal sources), c) knowledge generation, d) knowledge assimilation and e) knowledge emission. According to Coombs (1999; 2004) and Lerbinger (1997), Crisis Management consists of only three phases: the pre-crisis phase, the crisis event and the post-crisis recovery. In the pre-crisis phase, management should attempt to define the crises that their organization is most likely to experience, determine the likelihood of their occurrence and make as much preparation as possible to deal with each of the crises should they occur. In the crisis event (phase two), the time for decision-making is compressed and decisions have to be made under conditions of high uncertainty. Management needs to contain the damage of the crisis itself as well as the media's reporting to the public. In the third and last phase, post-crisis, management should seek to rebuild its reputation and carry out changes in its organizational structure, corporate governance policies, corporate culture and control mechanisms in an attempt to recover from the crisis. Based on the works of the aforementioned authors, a procedural framework of crisis management knowledge activities was elaborated. The framework includes a set of knowledge activities to support crisis management for each phase of the crisis' lifecycle. Respecting the size limitations of this paper, we present the results regarding the Pre-Crisis phase in Table 1. Similar tables have been elaborated for the Crisis-Event and Post-Crisis phases.


Table 1: Pre-crisis knowledge activities
For the Pre-Crisis phase, the table groups activities under the five knowledge-chain headings: Knowledge Acquisition, Knowledge Selection, Knowledge Generation, Knowledge Assimilation and Knowledge Emission. The activities are the following:
- Searching the external environment for possible crises
- Conducting external surveys to identify potential events that would lead to a crisis
- Studying previous industry crises
- Evaluating symbolic effects through external surveys
- Participation in inter-organizational communities of practice
- Focusing on knowledge of customer complaints
- Monitoring possible technological advances and options in order to avoid crises
- Hiring people with Crisis Management experience
- Forming alliances and joint ventures with other organizations
- Using informal relationships for the acquisition of information
- Acquiring advice from consultants and experts on Crisis Management
- Gathering advice on crises from the professional literature (books, newsletters, journals)
- Obtaining intellectual property (proprietary software, patented methodologies)
- Observing failed/successful efforts of others in dealing with crises
- Subjecting employees to external training
- Improving processes through the purchase of technology
- Transforming internal agents' instinct into exploitable knowledge about crises
- Observing behaviour and processes (e.g. quality management system)
- Monitoring the organization's systems of indicators and recording incidents that insinuate the potential of a forthcoming crisis (e.g. an increase in the number of faulty products)
- Evaluating suggestions for alternative forms of organizing/operating
- Consulting with top management and aligning crisis prevention with strategy
- Bringing experts and people together (internal communities of practice)
- Participating in in-house training
- Retrieving information from the Crisis Management manual (if available)
- Retrieving information from available knowledge repositories
- Identification of potential enemies/allies triggering crises or amplifying consequences
- Evaluating the risk of new products and services
- Brainstorming
- Identification of patterns leading to crises
- Creating knowledge objects in a repository / creating a crisis-related data warehouse
- Gathering data from data mining, text mining and simulation
- Creating and performing pilot studies
- Developing an intellectual assets strategy to support crisis management practices
- Creating a crisis management system
- Authoring a crisis management manual
- Codifying a mission statement and corporate identity profile
- Elaboration of a wiki-based thesaurus on risk management issues and topics
- Selection of staff to form a 24/7 rota of individual staff members that will be available should a crisis strike / selection of a spokesman
- Disseminating information about potential crises via email, intranet, newsletters, corporate portal, internet, conferences and meetings
- Publishing and disseminating the Crisis Management Manual to the appropriate users
- Populating a crisis-related data warehouse
- Pushing knowledge via an electronic alert
- Informing the whole of the organization about the official spokesman and communication handling
- In-house training
- Informally acclimating employees with the general crisis management guidelines of the company (code of practice)
- Establishing a social atmosphere of trust to encourage interpersonal communications
- Mobilising resources towards addressing the particular crisis' immediate needs related to the incident's results
- Lobbying
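In implementation terms, the procedural matrix can be thought of as a simple lookup from a (crisis phase, knowledge activity) pair to a list of recommended practices. The minimal Python sketch below shows this structure with a few entries drawn from Table 1; the structure, the function name and the assignment of the example entries to activity categories are illustrative assumptions rather than part of the framework's specification.

# (phase, knowledge activity) -> recommended practices; illustrative entries only.
PROCEDURAL_MATRIX = {
    ("pre-crisis", "acquisition"): [
        "Studying previous industry crises",
        "Acquiring advice from consultants and experts on Crisis Management",
    ],
    ("pre-crisis", "generation"): [
        "Brainstorming",
        "Identification of patterns leading to crises",
    ],
    ("pre-crisis", "emission"): [
        "Lobbying",
    ],
}


def practices_for(phase, activity):
    """Return the recommended practices for a given phase and knowledge activity."""
    return PROCEDURAL_MATRIX.get((phase, activity), [])


print(practices_for("pre-crisis", "generation"))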


4. Case study: Crisis management practices and realities in a pharmaceutical company In order to draw empirical data from real life, we examine a crisis in an SME (small and medium-sized enterprise) pharmaceutical company. This case analysis has been the first step of a series of empirical research efforts that attempt to further explore the knowledge aspects of crisis management, while it also stands as a preliminary effort to validate our ontology-based framework and the identified knowledge activities. For confidentiality reasons we will refer to the company as PharmaCom. The aim of our empirical investigation was to evaluate the validity of the proposed integrated framework and to identify the importance of knowledge activities, while also setting a first step towards developing an applicable process-based tool for handling crisis knowledge management activities.

4.1 Case description PharmaCom is a manufacturer of pharmaceutical products, involved in the production and marketing of several owned brands while also providing contract-manufacturing services for eleven (11) other companies. Despite the existing regulatory and safety system, the complexity of manufacturing operations creates the risk of manufacturing errors, and the company faced three (3) production crises in the last five (5) years. We focused on a small-scale crisis that happened in March 2005, when a customer denounced to the Ministry of Health, the National Organization of Medicines and the Customer Protection Secretariat that she had found three tablets of a product which contained an alien object (a hair). The incident concerned a product manufactured by PharmaCom for another company and created panic and fear in the organisation's management at all levels. The crisis evolved in four stages:
a. The customer contacted the Company to report the problem, but the Sales Manager failed to satisfy her demands. He did not retain the customer's contact information and underestimated the potential organizational threats.
b. The customer reported the incident to the authorities, publicized the case, sent a fax to the mass media and threatened litigation.
c. The Company immediately responded by:
- Sending a salesman to examine the problem in place.
- Initiating a chain of internal inspections and safety controls, especially for the recent batches of the product.
- Contacting the brand's owner firm to explain the situation.
- Organizing a large meeting to search for the causes.
- Sending a complete report to the NOM (National Organisation for Medicines) explaining the causes of the problem.
- Trying to influence the media's agenda and to avoid any publicity.
d. A few weeks later, the NOM conducted an inspection of the product and the manufacturing processes. The company assigned the case to a lawyer.
Given that our research team aimed at identifying the knowledge dimensions of the crisis, we designed and conducted qualitative research using both structured and unstructured interviews, which best reveal hidden dynamics within organizations (Denzin & Lincoln, 1998). We also organized two (2) discussion groups with the top management of the company.

4.2 Crisis management and knowledge activities in PharmaCom Our research revealed that PharmaCom is a crisis-prone organization, particularly because of its manufacturing operations, which comprise an increased level of risk. While a few elements of crisis management exist in the current ISO documentation (e.g. defective product recall, customer complaints reporting) and managers are aware of the risks for crises (e.g. manufacturing errors, changes of the regulatory system, financial threats), it is arguable that no particular effort has been dedicated to designing a system which would address the organizational needs during a crisis.


Drawing from the experiences of the particular crisis case, managers and staff of PharmaCom agreed that the lack of shared knowledge on policy, potential threats and crisis handling information amplified the effects of the crisis. Moreover, people in PharmaCom, harvesting useful insights from the proposed framework (and in particular the procedural matrix), used their previous experiences, memories and evaluations of the recent crisis and provided us with a list of knowledge activities that are, in their opinion, important during a crisis. The empirical study showed that the staff, at all levels of the hierarchy, shared the strong belief that knowledge should flow efficiently within the organization during a crisis, hence enabling everybody to know the general strategy, the process of reaction, the spokesman and other particular details of the crisis. A key consideration of middle management employees was their need to know what the top management thinks about the crisis and what its size and significance are. More importantly, the case revealed the absence of a standard tool for rapidly disseminating that knowledge to all the stakeholders implicated in the crisis inside the organisation. To address this issue, the research team introduced a simple-to-use graphic technique which integrates into one diagram all the factors participating in the handling of a crisis, these being the crisis management activity, the agent that processes it, the knowledge source that provides the necessary information and the system that supports it (IT or manual). The technique gained management acceptance and a set of approximately fifty different diagrams covering all the knowledge-based crisis management activities was elaborated. In Figure 2, an example of such a diagram, supporting the "Focusing on Knowledge of Customer Complaints" activity, is presented (as part of a pre-crisis, crisis-event prevention process). The figure presents a specific workflow (part of it, since the whole process is depicted in a set of five diagrams) for efficiently handling knowledge of a customer complaint that could lead to a crisis. All the agents and the departments they belong to are shown, along with the external and internal sources of the knowledge used (e.g. the Production Registration File). The creation of new knowledge is also depicted (e.g. the CM Strategy file). Moreover, for each activity an index is appointed showing the system that supports it (e.g. CC for Call Center, MIS for Management Information System and M for manually executed activities). Particular emphasis is placed in this workflow on the formation and strategic role of a Crisis Handling Committee, with members from all over the organization, which would facilitate the knowledge transfer among different departments, the dissemination of knowledge about the crisis and the tactical overview of the organization.
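A minimal Python sketch of the record behind one box of such a diagram, assuming the four elements named above (activity, agent, knowledge source, supporting system); the field names and the example step are illustrative and do not reproduce the company's actual workflow definition.

from dataclasses import dataclass, field
from typing import List


@dataclass
class WorkflowStep:
    """One activity box of a crisis-handling diagram."""
    code: str                 # activity index, e.g. "4.1.3" (illustrative)
    activity: str             # what is done
    agent: str                # who performs it
    knowledge_sources: List[str] = field(default_factory=list)
    system: str = "M"         # "CC" call centre, "MIS", "M" manual, etc.


step = WorkflowStep(
    code="4.1.3",
    activity="Document customer information",
    agent="Sales Employee 1",
    knowledge_sources=["Customer Data (MIS)"],
    system="MIS",
)
print(step)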

5. Conclusions The aim of this paper is to point out the important role of knowledge logistics in crisis management. We have argued that crises may be conceptualized as knowledge intensive phenomena; our empirical study confirmed that managers and employees identify the need for a well-defined crisis management strategy supported by a knowledge logistics mechanism that rapidly and efficiently circulates the necessary knowledge to all the implicated crisis handling agents. In an attempt to contribute to this particular and rather innovative field, we proposed a generic framework that provides a common understanding of the domain at the semantic level and, at the operational level, a set of knowledge activities that can support the management of crises during their lifecycle. In an effort to transform the theoretical suggestions of the framework into applicable actions, and based on empirical data drawn from the case company, we introduced the use of a graphic technique that supports the rapid and efficient dissemination of the knowledge activities during a crisis. This paper stands as the first step towards the elaboration of a complete, integrated knowledge management model for crisis management. In this process, the current research of our team includes the elaboration of interrelated reference diagrams for all the activities of the procedural matrix, resulting in a functional reference model for knowledge-based Crisis Management. Our empirical research has also been focusing on the testing, validation and evaluation of both the ontological framework and the description of knowledge activities by drawing data from a selected sample of pharmaceutical companies.


Figure 2: Complaint management knowledge activities diagram (partial, 1 of 5)

6. Acknowledgements The research efforts described in this paper are within the context of a project co-funded by the European Social Fund (75%) and National Resources (25%) under the Operational Program for Educational and Vocational Training II (EPEAEK II), and particularly the PYTHAGORAS Program.

References
Arpan, L. M. and Pompper, D. (2003) "Stormy weather: testing 'stealing thunder' as a crisis communication strategy to improve communication flow between organizations and journalists", Public Relations Review, Issue 29, pp. 291-308.
Baldassare, M. (1998) "When Government Fails: The Orange County Bankruptcy", University of California Press, 1st edition.
Benoit, W. L. (1995) "Accounts, Excuses and Apologies: A Theory of Image Restoration", Albany: State University of New York Press.
Bowen, B. R. (2000) "Roberts vs. Texaco: How Arrogance, Abuse and Racial Discrimination Proved Costly: An Exclusive Interview with Bari-Ellen Roberts", Transformate Publishing LLC.
Carley, K.M. (1991) "Designing organizational structures to cope with communication breakdowns: a simulation model", Industrial Crisis Quarterly, Vol. 5, pp. 19-57.
Coombs, T. (1999) "Ongoing Crisis Communication: Planning, Managing and Responding", London: Sage Publications.
Coombs, T. (2004) "West Pharmaceutical's Explosion: Striking Crisis Discourse Knowledge", Public Relations Review, Vol. 30, Issue 4, pp. 467-473.
Crisis Resource (2005) Crisis Metric: Types of crises most frequently mentioned in global news and business publications for 2004, www.crisisresource.com, created by Burson-Marsteller.
Davenport, T., DeLong, D. and Beers, M. (1998) "Successful knowledge management projects", Sloan Management Review, Vol. 39, No. 2, pp. 43-57.
Denzin, N. and Lincoln, Y. (1998) "Collecting and Interpreting Qualitative Materials", London: Sage Publications.
Drucker, P. (1964) "Managing for Results", London: William Heinemann.
Drucker, P. (1988) "The Coming of the New Organization", Harvard Business Review 1: 66, pp. 45-53.


Fink, S. (1986) "Crisis Management: Planning for the Inevitable", New York: AMACOM.
Foucault, M. (1980) "Power/Knowledge", Pantheon, New York, NY.
Heath, R. L. (1994) "Management of Corporate Communications", New Jersey: Lawrence Erlbaum.
Holsapple, C.W. and Singh, M. (2000) "Toward a Unified View of Electronic Commerce, Electronic Business, and Collaborative Commerce: A Knowledge Management Approach", Knowledge and Process Management, Vol. 7, No. 3, pp. 151-164.
Holsapple, C.W. and Jones, K. (2004) "Exploring Primary Activities of the Knowledge Chain", Knowledge and Process Management, Vol. 11, No. 3, pp. 155-174.
Jackson, J. E. (1993) "Crisis management lessons: when Push shoved Nike - boycott of Nike by People United to Serve Humanity", Business Horizons, Jan-Feb 1993.
International Community on Information Systems for Crisis Response and Management, [Proceedings of ISCRAM 2005 online], http://www.iscram.org/.
Lerbinger, O. (1997) "The Crisis Manager", Mahwah: Lawrence Erlbaum Associates.
McMullan, C.K. (1997) "Crisis: when does a molehill become a mountain", Disaster Prevention and Management, Vol. 6, No. 1, pp. 4-10.
Marshall, A. (1965) "Principles of Economics", Macmillan, London.
Metaxiotis, K., Ergazakis, K. and Psarras, J. (2005) "Exploring the world of knowledge management: agreements and disagreements in the academic/practitioner community", Journal of Knowledge Management, Vol. 9, No. 2, pp. 6-18.
Mitroff, I. (1988) "Cutting through the confusion", Sloan Management Review, 2 (9), pp. 15-20.
Mitroff, I.I., Pauchant, T., Finney, M. and Pearson, C. (1989) "Do (some) organizations cause their own crisis? The culture profiles of crisis-prone vs crisis-prepared organizations", Industrial Crisis Quarterly, Vol. 3, pp. 269-338.
Newell, A. (1982) "The knowledge level", Artificial Intelligence, Vol. 18, No. 1.
Nonaka, I. (1991) "The Knowledge-Creating Company", Harvard Business Review 6: 71, pp. 96-104.
Nonaka, I. (1994) "A dynamic theory of organizational knowledge creation", Organization Science, Vol. 5, pp. 14-37.
O'Rourke, J. (2001) "Bridgestone/Firestone, Inc. and Ford Motor Company: How a product safety crisis ended a hundred-year relationship", Corporate Reputation Review, Vol. 4, Issue 3, pp. 255-264.
Ott, R. (2005) "Sound Truth & Corporate Myth$: The Legacy of the Exxon Valdez Oil Spill", Dragonfly Sisters Press.
Paraponaris, C. (2003) "Third generation R&D and strategies for knowledge management", Journal of Knowledge Management, Vol. 7, No. 5, pp. 96-106.
Pearson, C. M. and Clair, J. A. (1998) "Reframing Crisis Management", Academy of Management Review, Vol. 23, Issue 1, pp. 59-76.
Pearson, C. and Mitroff, I. (1993) "From crisis prone to crisis prepared: a framework for crisis management", Academy of Management Executive, Vol. 7, No. 1, pp. 48-59.
Rothchild, P. (1998) "Gerber Management's Response to the Glass Scare of 1986: An Ethical Analysis of the Decisions of Gerber's Management Team", [individual online research paper], http://www.pillowrock.com/ronnie/gerber.htm.
Quarantelli, E.L. (1988) "Disaster crisis management: a summary of research findings", Journal of Management Studies, Vol. 25, Issue 4, pp. 373-384.
Senge, P. (1990) "The Fifth Discipline: The Art and Practice of the Learning Organisation", Doubleday, New York, NY.
Shrivastava, P. and Mitroff, I. I. (1987) "Strategic Management of Corporate Crises", Columbia Journal of World Business, Vol. 22, pp. 5-11.
Smirnov, A., Pashkin, M., Chilov, N., Levashova, T. and Haritatos, F. (2003) "Knowledge Source Network Configuration Approach To Knowledge Logistics", International Journal of General Systems, Vol. 32, No. 3, pp. 251-269.
Tsoukas, H. (1999) "David and Goliath in the Risk Society: Making Sense of the Conflict between Shell and Greenpeace in the North Sea", Organization, Vol. 6, Issue 3, pp. 499-528.
Turner, B.A. and Pidgeon, N.F. (1997) "Man-Made Disasters", 2nd edition, Oxford: Butterworth-Heinemann.
Wallace, M. (2004) "The Disaster Recovery Handbook", New York: AMACOM.
Wiig, K. (1993) "Knowledge Management Foundations: Thinking about Thinking – How People and Organizations Create, Represent and Use Knowledge", Schema Press, Arlington, TX.


Knowledge-Based View of the Firm – Foundations, Focal Concepts and Emerging Research Issues
Aino Pöyhönen and Kirsimarja Blomqvist
Lappeenranta University of Technology, Finland
[email protected] [email protected]
Abstract: The logic of doing business and creating value has changed fundamentally during the past decades. Knowledge has taken the place of land, labor and capital as the most significant resource, and the productivity of knowledge work has become the main challenge for all types of organizations. Along with this development, a novel theoretical approach to organizations, the knowledge-based view of the firm (KBV), has arisen, which implies significant changes to how organizations are understood. However, the KBV is still in an emerging state and a lot of vagueness remains even in its basic tenets and key concepts. Our paper traces the foundations of the KBV and suggests future developments for it. Specifically, we address two promising directions of future inquiry: organizational renewal capability and trust as a form of governance.
Keywords: Knowledge-based view of the firm, knowledge management, innovation, organizational renewal, dynamic capabilities, trust

1. Introduction The competitive environment of firms has changed dramatically during the last two decades. The role of the traditional sources of competitive advantage has deteriorated in the process of globalization and technological change. Monopolies have broken down through the deregulation of markets, and economies of scale seem to bring diminishing returns. Rapid technological change creates incentives for innovation and for entrepreneurs seeking opportunities. Large and established companies also need to adapt to the changes in the business environment. In addition to streamlining their structures and processes for increased efficiency, firms need to create new knowledge for increased innovativeness. As James March (1991) notes, firms need to exploit their existing potential, but also to adapt to ongoing changes by exploring new business opportunities. The discontinuities may be turned into opportunities either through proactive innovation and self-renewal (Pöyhönen, 2004), or by co-operating with innovative firms and learning from them. Large companies are building relevant capabilities through acquisitions, partnerships and collaborative R&D projects. For example, Microsoft, Cisco and Nokia have diversified their capabilities through collaboration and acquisitions involving Internet, software and content capabilities (Blomqvist, 2002). The role and effect of knowledge in contemporary society and organizations is so profound that the change has been compared to a Kuhnian paradigm shift. The increasing role of knowledge has dramatic consequences for the nature of work (e.g., Drucker, 1999). It also has implications for the authority relations within the firm, for organizational structures and processes, and even for the boundaries of firms. An increasing number of organizations worldwide are consciously attempting to manage their knowledge and leverage their intellectual capital. Along with these developments, a novel theoretical approach to organizations, the knowledge-based view of the firm (KBV), has arisen, which implies significant changes to the way in which organizations are understood. However, the KBV is still in an emerging state and a lot of vagueness remains even in its basic tenets and key concepts. Therefore our aim in this paper is to clarify the current state of the art of the KBV and to point out some crucial issues that have so far been given inadequate research attention. To achieve this goal we first review the theoretical foundations of the KBV and pinpoint some of its most focal ideas. Then we discuss some promising directions of future inquiry. Specifically, we address two important areas of development, organizational renewal capability and trust as a form of governance, and present suggestions for how they could be approached.

2. Foundations of the knowledge-based view of the firm
Along with the rise of knowledge work, a novel approach to organizations, the knowledge-based view of the firm, has emerged, bringing significant changes to the way in which organizations are understood. The starting point is that knowledge is the key explanatory factor, and the nature of knowledge is an important determinant for understanding firm organization and behaviour. The KBV addresses the issues of the existence, the boundaries, and the internal organization of the multi-person firm (Foss, 1996, 460). According to the KBV, organizations exist to create, transfer, and transform knowledge into competitive advantage (Kogut & Zander, 1992), and performance differences between firms derive from their differing stocks of knowledge and capabilities in using and developing knowledge (Nonaka & Takeuchi, 1995; Grant, 1996b; Spender & Grant, 1996). While most authors discussing knowledge management do not explicitly refer to the KBV, this tradition is arguably one of the most important theoretical developments connected with the field of knowledge management. In the following, we review the most important tenets of the KBV.

As a productive resource, knowledge has some distinctive characteristics that set it apart from other types of resources. First, there are economies of scale in knowledge. This means that the costs of replicating knowledge are lower than the costs of its original discovery or creation. Especially explicit knowledge is economical to reproduce in digital form (Shapiro & Varian, 1998). For tacit knowledge, replication is costly and slow, but even its replication costs are lower than the costs of its creation. There are also economies of scope in knowledge. This means that knowledge is not specific to the production of a single product or service, but can also be extended to benefit the production of other outputs (Kogut & Zander, 1992; Grant, 1996b). Unlike traditional physical goods that are consumed when they are used, knowledge is subject to increasing returns: the more knowledge is used, the more valuable it becomes (Grant, 1996b; Zack, 1999).

In the sense that the KBV views knowledge as the most important firm resource, it is similar to the resource-based view of the firm (RBV) (e.g. Penrose, 1959; Wernerfelt, 1984; Barney, 1991), which conceptualizes the firm as a unique bundle of idiosyncratic resources and capabilities. In comparison with the RBV, the KBV takes a more fine-grained and profound understanding of knowledge as its basis. Especially relevant are three background assumptions about the nature of organizationally relevant knowledge: constructionism, activity-relatedness and inter-subjectivity (Pöyhönen, 2004).

Firstly, the KBV is based on the constructionist view of knowledge, which assumes that knowledge is always tied to a particular viewpoint and practical application. In other words, knowledge is a product and vehicle of human activity, bounded by the limitations of human cognitive and other psychological capacities, and by the social and cultural environment of activity. Information technology systems and other related mediating tools can act as vehicles for transferring knowledge, or as repositories for storing knowledge, but in knowledge-based management the role of these is secondary compared with knowledgeable human actors.
Humans are seen as active constructors of knowledge, who use knowledge for achieving certain goals, rather than as naive recipients of externally created knowledge or "garbage cans" into whose minds information is inserted and where it exerts a stable and predictable influence. Learning is understood as a situated process of knowledge construction based on action (e.g. Berger & Luckmann, 1966; Nonaka & Takeuchi, 1995; Orlikowski, 2002). Constructionism also implies that knowledge cannot be completely controlled but can only be managed by creating enabling conditions (Von Krogh, 1998). It also directs attention from mapping knowledge resources per se to examining how these resources are utilized and coordinated (Spender, 1996b).

Secondly, the particular socio-historical context sets the boundaries for individual understanding and behaviour, while at the same time individuals regenerate and modify the context by enacting it. Individuals neither think nor take action in a vacuum; knowledge is embedded and constructed in shared practices by interacting individuals who combine their efforts while striving towards more or less common goals (e.g., Berger & Luckmann, 1966; Crossan et al., 1999). As Spender (1996a, 64) argues, "knowledge is less about truth and reason and more about the practice of intervening knowledgeably and purposefully in the world." And to intervene in the world one has to be able to communicate with others and understand the particular context of activity. In this sense, knowledge essentially exists between individuals and not only within them.

For a firm to be knowledgeable, it is not enough that its individual employees are skilled and educated. The scattered, uncoordinated insights of individual organizational members are not enough to produce competitive advantage; in order to produce sustainable value, they must be combined into a synergistic whole. This does not mean a mechanistic aggregation or synthesis of what the individual members of the organization know. The pattern and mechanisms of integration of knowledge cannot be reduced to the level of individual actions, but have to be analyzed in their own right, on the level of shared practices. The crucial issue is how the employees work together, how their tasks interrelate and how their individual knowledge is integrated to produce value for the company (Grant, 1996a; 1996b). This entails that, from a knowledge-based view, organizations are above all social entities "specializing in the creation and transfer of knowledge" (Kogut & Zander, 1996, 503).

In knowledge-based production, the role of organizational factors is critical. The competitiveness of the firm depends not so much on its product–market positioning in relation to external competitors as on its internal characteristics. Humans are bounded by cognitive limitations as to how much and what they can know, and therefore they have to specialize (Simon, 1955). Especially in complex issues, which cannot be understood by any single individual, there is a need for integration and coordination of knowledge (Grant, 1996b). Producing a good or service typically requires the application of many types of knowledge resources (Kogut & Zander, 1992; Grant, 1996b; Grant & Baden-Fuller, 2004). This means that the organization also has to be able to manage, integrate, and coordinate the knowledge of its employees (Penrose, 1959; Kogut & Zander, 1992; Grant, 1996b).

Finally, knowledge is always connected with action: what is known is demonstrated in knowledgeable activity. Knowledge is created and leveraged in the context of on-going organizational activities (e.g., Dougherty, 1992; Leonard-Barton, 1995; Brown & Duguid, 2001; Orlikowski, 2002). The most valuable kind of knowledge is that which is demonstrated in "knowing" and skillful behavior, rather than that which is stored in, for example, databases and patents. Competitive advantage, in fact, flows not from resources themselves but from the firm's capabilities to use these resources for productive purposes (e.g., Penrose, 1959; Kogut & Zander, 1992; Spender & Grant, 1996; Grant, 1996b). Furthermore, because knowledge is always based on human action, it cannot be managed in the same way as non-human resource stocks and flows. Knowledge is intangible, invisible and to a large extent unconscious even to those in whose minds and bodies it is embedded. Tacit knowledge can never be fully externalized and made the subject of rational management control. Thus, the most important way to manage knowledge is by creating contexts where knowledge can grow and flourish. Higher-order organizing principles (Kogut & Zander, 1992) and collective knowledge (Spender, 1996a; 1996b) create the context where organizational activities take place, and therefore have a crucial role in steering any knowledge-based organization.
To summarize, according to the KBV, knowledge is not something objective, free-floating, abstract, and universal as portrayed by the traditional western epistemology; but neither is it only subjective, residing solely in the heads of individuals as their personal experience. Instead, knowledge is something that is constructed in the social practices of actors embedded in a particular social context. Rather than residing in the minds of individuals or in databases, the most important type of knowledge is that which is located between people (e.g., Spender, 1996b; Brown & Duguid, 2001). Knowledge emerges from the social interactions between various parties within and across the organizational borders. In addition, it is continuously reinterpreted and modified, and continuously changing and developing (e.g., Blackler, 1995; Drucker, 1999; Tsoukas & Chia, 2002; Orlikowski, 2002). In other words, knowledge is fundamentally dynamic in nature: it is the subject of constant negotiations, modifications, and alterations. It is also related to the issues of power and control (Blackler, 1995).

3. Emerging issues in the knowledge-based view of the firm
According to Spender (1996a; 1996b), the strategically most important type of knowledge for firms is collective knowledge, which consists of patterns and modes of knowledge combination between individuals, groups, units, and organizations. Shared operating methods are inimitable across firms, and therefore they are the main source of sustained competitive advantage. Also, Kogut and Zander (1992) argue that the central competitive dimension of firms is the efficient creation and transfer of knowledge within the organizational context. Thus, the processes by which knowledge is used and created in organizations are at the heart of business performance and value creation. Furthermore, as organizations increasingly face worldwide competition in rapidly transforming, unpredictable environments, the ability to constantly generate novel and improved products, services, processes, and mindsets has become essential for sustained competitive advantage (e.g., Grant, 1996a; Teece et al., 1997; Eisenhardt & Martin, 2000; Pöyhönen, 2004).

In sum, it seems that there are two fundamental issues that are especially important topics of research for taking the KBV further: understanding organizational capability for continuous self-renewal and innovation, and knowledge-productive mechanisms of social coordination. Correspondingly, in the field of knowledge management, several authors (Snowden, 2002; Tuomi, 2002; Hong & Ståhle, 2005) have noted that the next generation of research will be characterized by an interest in the complicated, complex and chaotic nature of knowledge, the management of risk and uncertainty, as well as capabilities for the creation of new knowledge and innovations. The key issue then is how to navigate successfully in the midst of turbulent and unpredictable environments and to benefit from the diverse and dispersed knowing of various actors, while instilling and maintaining a requisite coherence in the firm's activities. In this section, we discuss two important research directions which represent emerging topics in the KBV and knowledge management: organizational capability for continuous innovation and trust as a governance mechanism.

3.1 Organizational capability for renewal
It is widely recognized that the winners in the global marketplace are those firms that can renew themselves by continuously generating new knowledge and capabilities through learning and innovating. Renewal capability does not only mean that the organization is able to respond to today's challenges and to keep up with the changes in its environment, but also that it can act as a forerunner by creating innovation at both the tactical and strategic levels of operation, and thereby change the rules of the market (Hamel & Prahalad, 1994; Brown & Eisenhardt, 1998). The demand for constant renewal is not limited to firms: non-profit organizations as well as regions and nations face similar challenges.

Figure 1: Knowledge assets, renewal capability, and the production of sustained competitive advantage (Pöyhönen, 2005)

However, even though both academics and practitioners recognize that the dynamic capability for continuous learning, development and renewal is a major source of sustained competitive advantage (e.g. Grant, 1996a; Teece et al., 1997; Eisenhardt & Martin, 2000; Pöyhönen, 2004), there is no widely shared view on how organizational renewal capability should be defined, and the research field is characterized by a plethora of concepts and definitions. This has led to theoretical pluralism and relatively segregated research strands with little cross-communication (e.g., Wolfe, 1994; Van de Ven & Poole, 1995; Dunphy, 1996). Although there is a wide multidisciplinary literature addressing some dimensions of organizational renewal, there are few integrative models that draw together the most important aspects of this wide array of studies. A recent theoretical model by Pöyhönen (2005) integrates the contributions of various approaches and presents a comprehensive model of organizational renewal capability. In this model, it is proposed that sustained competitive advantage in turbulent environments is created in an
interaction between knowledge assets and the renewal capability of the organization. Figure 1 demonstrates the interconnections of knowledge assets, renewal capability, innovation, learning and sustained competitive advantage. The model further argues, based on an extensive literature review, that there are six key elements of organizational renewal capability: strategic capability, leadership, exploiting time, connectivity, managing knowledge, and learning orientation (see Pöyhönen, 2005).

The model posits that sustained competitive advantage in turbulent environments is created when an organization is able to renew itself continuously (Leonard-Barton, 1995; Teece et al., 1997; Eisenhardt & Martin, 2000; Weick & Sutcliffe, 2001; Pöyhönen, 2004). Renewal is demonstrated as organizational learning (i.e. development of novel mental models and insights) (e.g. Fiol & Lyles, 1985; Huber, 1991) and innovation (i.e. development of new products, services, processes etc.) (e.g. Kanter, 1988; West & Farr, 1990). Learning and innovation result from the interaction between knowledge assets and renewal capability. Renewal capability determines how an organization is able to develop its existing knowledge assets and to create new ones. Renewal capability is defined as the organizational capability to produce incremental development and radical change in its mental models and activities. In other words, an organization with high capability for renewal is not only able to respond to today's challenges and to keep up with the changes in the environment, but it can also act as a forerunner by creating change from within the organization and thereby change the rules of the market. Knowledge assets and renewal capabilities in turn are modified by the learning and innovation outcomes. Learning and innovation are the main mechanisms that produce new knowledge assets for the firm. Renewal capability, too, can be extended and improved by learning and innovation outcomes. When this cycle is continuous, the organization is able to sustain itself in the face of both external and internal change, and to proactively create change from within the firm.

Knowledge assets refer to the stock of strategically important knowledge in the firm. Knowledge assets embody the particular substance of knowledge, for example the field of individual expertise, the content of a particular database or the script of a particular work activity. The knowledge asset base of the organization influences innovation and learning outcomes, as shown in the model. The development of knowledge is always based to some extent on already existing knowledge (Cohen & Levinthal, 1990). However, even though the existing knowledge base of the organization influences the possible developmental paths, in order to understand the extent to which an organization is able to produce cognitive and behavioral development it is not enough to examine the knowledge resources that the organization governs at the moment. Knowledge assets such as human capital and IPRs tell only a part of the story. The organization's capability for renewal determines how efficiently it is able to use its resources for learning and innovation.

The management of organizational renewal is significantly hindered by the lack of measures that would allow its reliable assessment and effective development. Even though the importance of renewal capability for organizational success is an uncontested fact in the literature, there are few metrics for this important phenomenon.
At the moment, most of the research on renewal capability is based on case studies, and there is a lack of quantitative methods that would allow reliable assessment, internal communication and inter-firm comparison of renewal capability. The theoretical model of renewal capability has been used as a basis for creating a survey instrument, which will be psychometrically validated in a sample of Finnish organizations in the near future. The survey instrument is suitable both for scientific research and for practical performance measurement and organizational development purposes.
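To illustrate how such a survey instrument could feed practical performance measurement, the following minimal Python sketch aggregates hypothetical Likert-scale item scores into the six renewal-capability dimensions named above. It is purely illustrative: the item scores, the simple mean-based scoring and the function name are our own assumptions, not the authors' validated instrument.

# Illustrative only: aggregates hypothetical Likert items (1-5) into the six
# renewal-capability dimensions named by Pöyhönen (2005); not the validated instrument.
from statistics import mean

DIMENSIONS = ["strategic capability", "leadership", "exploiting time",
              "connectivity", "managing knowledge", "learning orientation"]

def score_renewal_capability(responses):
    """Return a 1-5 mean score per dimension; 'responses' maps each dimension to a
    list of Likert-scale item scores (1-5)."""
    scores = {}
    for dim in DIMENSIONS:
        items = responses.get(dim, [])
        if any(not 1 <= x <= 5 for x in items):
            raise ValueError(f"Likert items for '{dim}' must lie between 1 and 5")
        scores[dim] = round(mean(items), 2) if items else None
    return scores

# Hypothetical responses from a single respondent.
example = {dim: [4, 3, 5] for dim in DIMENSIONS}
print(score_renewal_capability(example))  # e.g. {'strategic capability': 4.0, ...}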

3.2 Trust as an emerging form of governance in intra- and inter-organizational relationships

In the knowledge economy, trust is becoming an increasingly important issue, as high-trust forms of intra- and inter-organizational relationships are seen to encourage more effective knowledge creation and transfer (Miles et al., 2000; Adler, 2001; Tyler, 2003). Most scholars (e.g. Bradach and Eccles, 1989; Adler, 2001) approach trust as a complementary governance mechanism in comparison to price (incentives) and authority (control). In addition to being an important form of governance, trust also has an essential role as an enabler in intra- and inter-organizational knowledge creation processes (Ellonen et al., 2006). Miles et al. (2000) have illustrated the role of trust in knowledge creation and innovation as follows (Figure 2).

(Figure 2 elements: Time, trust and territory; Collaboration; Broad entrepreneurial empowerment; Knowledge creation and transfer; Product and service innovation; Commercial application.)
Figure 2: The role of trust in knowledge creation and innovation (Miles et al., 2000)

Based on existing research, trust has a direct effect on communication, collaboration, and commitment (e.g., Hardwig, 1991; Blomqvist, 2002; McEvily et al., 2003). As noted above, most scholars, such as Bradach and Eccles (1989), approach trust as a complementary governance mechanism, especially suited for networks and communities, in comparison to price (markets) and authority (hierarchy). Trust can also be seen as a lubricant in managing uncertainty, complexity, and related risks (Luhmann, 1979), as well as a higher-order organizing principle (Kogut & Zander, 1992; Foss, 1996; 2005).

Traditional mechanisms of trust may, however, be inadequate in the global network society (Adler, 2001; Blomqvist, 2002; Tyler, 2003). In many cases there is not enough time for the repeated interaction on which traditional, slowly evolving trust is based. Therefore the nature of trust may be changing towards a more analytical, reflective and tentative type of trust (Adler, 2001). It is also proposed that fast and individual-based trust acts as a critical threshold condition for managers attempting to cope with environmental and organizational complexity, limited time and attention, and the related risks. Trust is argued to be especially critical in high-risk, dynamic environments, as it allows asymmetric actors to connect despite the lack of (perfect) information, a limited shared context, and related risks (Blomqvist, 2005).

However, due to organizational and management challenges, future organizations cannot rely only on individual-based trust, but require complementary mechanisms and forms of trust to support knowledge creation and transfer. Trust cannot be only particularistic; more universalistic forms of trust are also needed. Procedural justice has been positively associated with employee attitudes, such as commitment to the organization (Pearce et al., 2000). At present, researchers are studying the role and nature of impersonal and institutional trust in both organizational and virtual contexts (Kosonen, Ellonen and Blomqvist, forthcoming). It is therefore proposed that different forms of trust, such as fast and individual-based trust, as well as impersonal and institutional trust, are increasingly important for knowledge-based organizations.

4. Conclusions
In this paper we examined the basic tenets of the KBV and pointed out two emerging research topics of high importance: organizational capability for continuous renewal and innovation, and trust as a social coordination mechanism. In knowledge-based competition, sustainable competitive advantage is not possible without continuous innovation. Organizations need to understand how to master change, both by adapting to it and by proactively creating change for themselves and extending this change to their environment. Innovations, by nature, emerge in social interaction in which diverse actors share complementary knowledge. When hierarchies collapse and traditional incentives work poorly with knowledge workers, the role of mutual trust as a social coordination mechanism becomes increasingly critical.

Concerning the KBV as a whole, it should be noted that it is still in an emerging state and is more a set of ideas about the existence and nature of the firm than a unified theory in a formal sense (Grant, 2002). Further, the main foci of the KBV are extremely demanding to observe reliably, and even more so to operationalize and measure. Spender and Grant (1996) note that many of the phenomena that are the most interesting from the knowledge-based perspective may in fact be unmeasurable. Connected with this is the lack of a large body of empirical research that would demonstrate the connection of the key knowledge-based variables to firm performance.

However, interest in the KBV is increasing all the time and we expect to see much progress in the coming years. An increased understanding of the social aspects of knowledge creation could especially complement this abstract yet focal theoretical framework. We believe that research on organizational renewal and trust will contribute significantly to the emerging theory of knowledge-based organizations.

References
Adler, P. (2001) 'Market, hierarchy, and trust: The knowledge economy and the future of capitalism', Organization Science, vol. 12, no. 2, pp. 215-234.
Barney, J. (1991) 'Firm resources and sustained competitive advantage', Journal of Management, vol. 17, pp. 99-120.
Berger, P. and Luckmann, T. (1966) The social construction of reality, New York: Anchor Books.
Blackler, F. (1995) 'Knowledge, knowledge work and organizations: An overview and interpretation', Organization Studies, vol. 16, no. 6, pp. 1021-1046.
Blomqvist, K. (2005) 'Trust in a dynamic environment: Fast trust as a threshold condition for asymmetric technology partnership formation in the ICT sector', in Trust in Pressure: Investigations of trust and trust building in uncertain circumstances, Edward Elgar Publishing.
Blomqvist, K. (2002) Partnering in the dynamic environment: The role of trust in asymmetric technology partnership formation, Acta Universitatis Lappeenrantaensis 122, Lappeenranta University of Technology.
Brown, J. and Duguid, P. (2001) 'Knowledge and organization: A social-practice perspective', Organization Science, vol. 12, no. 2, pp. 198-213.
Brown, S. and Eisenhardt, K. (1998) Competing on the edge: Strategy as structured chaos, Boston: Harvard Business School Press.
Cohen, W. and Levinthal, D. (1990) 'Absorptive capacity: A new perspective on learning and innovation', Administrative Science Quarterly, vol. 35, no. 1, pp. 128-152.
Crossan, M., Lane, H. and White, R. (1999) 'An organizational learning framework: From intuition to institution', Academy of Management Review, vol. 24, no. 3, pp. 522-537.
Dougherty, D. (1992) 'A practice-centered model of organizational renewal through product innovation', Strategic Management Journal, vol. 13, pp. 77-92.
Drucker, P. F. (1999) 'Knowledge-worker productivity: The biggest challenge', California Management Review, vol. 41, no. 2, pp. 79-95.
Dunphy, D. (1996) 'Organizational change in corporate settings', Human Relations, vol. 49, no. 5, pp. 541-552.
Eisenhardt, K. and Martin, J. (2000) 'Dynamic capabilities: What are they?', Strategic Management Journal, vol. 21, pp. 1105-1121.
Ellonen, R., Blomqvist, K. and Puumalainen, K. (2006) 'The role of trust in organizational innovativeness', ProAct 2006 Conference on Innovation Pressure, Tampere, Finland.
Fiol, M. and Lyles, M. (1985) 'Organizational learning', Academy of Management Review, vol. 10, no. 4, pp. 803-814.
Foss, N. (1996) 'Knowledge-based approaches to the theory of the firm: Some critical comments', Organization Science, vol. 7, no. 5, pp. 470-476.
Grant, R. (2002) 'The knowledge-based view of the firm', in N. Bontis and C.W. Choo (Eds.), Strategic management of intellectual capital and organizational knowledge, New York: Oxford University Press.
Grant, R. (1996a) 'Prospering in dynamically-competitive environments: Organizational capability as knowledge integration', Organization Science, vol. 7, no. 4, pp. 375-387.
Grant, R. (1996b) 'Toward a knowledge-based theory of the firm', Strategic Management Journal, vol. 17, pp. 109-122.
Grant, R. and Baden-Fuller, C. (2004) 'A knowledge accessing theory of strategic alliances', Journal of Management Studies, vol. 41, no. 1, pp. 61-84.
Hamel, G. and Prahalad, C. (1994) Competing for the future, Boston: Harvard Business School Press.
Hardwig, J. (1991) 'The role of trust in knowledge', The Journal of Philosophy, vol. 88, no. 12, pp. 693-708.

Hong, J. and Ståhle, P. (2005) 'The coevolution of knowledge and competence management', International Journal of Management Concepts and Philosophy, vol. 1, no. 2, pp. 129-145.
Huber, G. (1991) 'Organizational learning: The contributing processes and the literatures', Organization Science, vol. 2, no. 1, pp. 88-116.
Kanter, R. (1988) 'When a thousand flowers bloom: Structural, collective and social conditions for innovation in organization', in B. Staw and L. Cummings (Eds.), Research in organizational behavior, vol. 10, pp. 169-211, Greenwich: JAI Press.
Kogut, B. and Zander, U. (1992) 'Knowledge of the firm, combinative capabilities, and the replication of technology', Organization Science, vol. 3, no. 3, pp. 383-397.
Kogut, B. and Zander, U. (1996) 'What firms do? Coordination, identity and learning', Organization Science, vol. 7, no. 5, pp. 502-518.
Kosonen, M., Ellonen, R. and Blomqvist, K. (forthcoming in 2006) 'The role and nature of impersonal trust' in networked and virtual organizations.
Leonard-Barton, D. (1995) Wellsprings of knowledge: Building and sustaining the sources of innovation, Boston: Harvard Business School Press.
Luhmann, N. (1979) Trust and power, Chichester: Wiley.
March, J. (1991) 'Exploration and exploitation in organizational learning', Organization Science, vol. 2, no. 1, pp. 71-87.
McEvily, B., Perrone, V. and Zaheer, A. (2003) 'Trust as an organizing principle', Organization Science, vol. 13, no. 1, pp. 91-103.
Miles, R., Snow, C. and Miles, G. (2000) 'TheFuture.org', Long Range Planning, vol. 33, pp. 300-321.
Nonaka, I. and Takeuchi, H. (1995) The knowledge-creating company, New York: Oxford University Press.
Orlikowski, W. (2002) 'Knowing in practice: Enacting a collective capability in distributed organizing', Organization Science, vol. 13, no. 3, pp. 249-273.
Pearce, J., Branyiczki, I. and Bigley, G. (2000) 'Insufficient bureaucracy: Trust and commitment in particularistic organizations', Organization Science, vol. 11, no. 2, pp. 148-162.
Penrose, E. (1959) The theory of the growth of the firm, Oxford: Oxford University Press.
Pöyhönen, A. (2005) 'Exploring the dynamic dimension of intellectual capital: Renewal capability, knowledge assets and production of sustained competitive advantage', presented at the 2005 PMA IC Symposium: Management and Measurement of Intangible Assets and Intellectual Capital: Multidisciplinary Insights, New York, 15-16 December 2005.
Pöyhönen, A. (2004) Modeling and measuring organizational renewal capability, Doctoral Dissertation, Acta Universitatis Lappeenrantaensis 200, Lappeenranta University of Technology.
Shapiro, C. and Varian, H. (1998) Information rules: A strategic guide to the network economy, Boston: Harvard Business School Press.
Simon, H. (1955) 'A behavioral model of rational choice', Quarterly Journal of Economics, vol. 69, pp. 99-118.
Snowden, D. (2002) 'Complex acts of knowing: Paradox and descriptive self-awareness', Journal of Knowledge Management, vol. 6, no. 2, pp. 100-111.
Spender, J.-C. (1996a) 'Organizational knowledge, learning and memory: Three concepts in search of a theory', Journal of Organizational Change Management, vol. 9, no. 1, pp. 63-78.
Spender, J.-C. (1996b) 'Making knowledge the basis of a dynamic theory of the firm', Strategic Management Journal, vol. 17, pp. 45-62.
Spender, J.-C. and Grant, R. (1996) 'Knowledge and the firm: An overview', Strategic Management Journal, vol. 17, pp. 5-9.
Teece, D., Pisano, G. and Shuen, A. (1997) 'Dynamic capabilities and strategic management', Strategic Management Journal, vol. 18, pp. 509-533.
Tsoukas, H. and Chia, R. (2002) 'On organizational becoming: Rethinking organizational change', Organization Science, vol. 13, no. 5, pp. 567-582.
Tuomi, I. (2002) 'The future of knowledge management', Lifelong Learning in Europe, vol. 2, pp. 69-79.
Tyler, T. (2003) 'Trust within organizations', Personnel Review, vol. 32, no. 5, pp. 556-568.
Van de Ven, A. and Poole, M. (1995) 'Explaining development and change in organizations', Academy of Management Review, vol. 20, no. 3, pp. 510-540.

Von Krogh, G. (1998) 'Care in knowledge creation', California Management Review, vol. 40, no. 3, pp. 133-153.
Weick, K. and Sutcliffe, K. (2001) Managing the unexpected: Assuring high performance in an age of complexity, San Francisco: Wiley.
Wernerfelt, B. (1984) 'A resource-based view of the firm', Strategic Management Journal, vol. 5, pp. 171-180.
West, M. A. and Farr, J. L. (1990) 'Innovation at work', in M.A. West and J.L. Farr (Eds.), Innovation and creativity at work: Psychological and organizational strategies, Chichester: Wiley.
Wolfe, R. (1994) 'Organizational innovation: Review, critique and suggested research directions', Journal of Management Studies, vol. 31, no. 3, pp. 405-431.
Zack, M. (1999) 'Developing a knowledge strategy', California Management Review, vol. 41, no. 3, pp. 125-145.


Conceptual Design of a Knowledge Management Support System for Assessment of Intellectual Capital
Agnieta Pretorius and Petrie Coetzee
Tshwane University of Technology, Pretoria, South Africa
[email protected]
[email protected]
Abstract: Existing literature propagates a variety of methods, models, systems and frameworks for assessment of intellectual capital. Unfortunately, awareness of and interest in the assessment of intellectual capital far exceeds its practical application. Due to the complexities involved in selecting and implementing an appropriate method for assessing intellectual capital, knowledge management support systems are needed for managing the evolving body of knowledge concerning such assessment. This paper discusses the first iteration of the development process of a conceptual design for a knowledge management support system to (i) provide management support to the process of selecting and implementing an appropriate method (or combination of methods) for assessment of intellectual capital, (ii) utilize past knowledge and expertise to accelerate and improve decision-making, (iii) promote synergism through integration of methods, and (iv) manage the evolving body of knowledge concerning the assessment of intellectual capital. The proposed knowledge management support system builds upon the rich body of existing knowledge on knowledge management systems, decision support systems, expert systems, and hybrid support systems, and is composed of a data management subsystem, a model management subsystem, a user interface subsystem, and a knowledge-based management subsystem. The knowledge-based management subsystem is the major focus of this first iteration and provides intelligence for the selection and implementation of appropriate models and for knowledge acquisition from a knowledge repository for keeping track of current and best practice scenarios.
Keywords: Intellectual capital, intangible assets, methods of assessment, knowledge management support systems, decision support systems, intelligent systems

1. Introduction
Intellectual capital (IC) is increasingly acknowledged as a dominant strategic asset and a major source of competitive advantage for organizations (Harrison and Sullivan 2000; Holsapple 2003; Housel and Bell 2001; Kannan and Aulbur 2004; Mouritsen, Bukh and Marr 2004; Sánchez, Chaminade and Olea 2000; Teece 1998). Despite a rich and evolving body of literature on methods, models, systems and frameworks for assessment of IC, and increased awareness of the need for such assessment, relatively few organizations are actively and comprehensively assessing their IC (Andriessen 2004b; Best Practices, LLC 1999; Chen, Zhu and Xie 2004; Bontis 2001; Green 2005; Marr 2005; Pretorius and Coetzee 2006; Smith and McKeen 2003).

According to Simon as cited by Turban, Aronson and Liang (2005), the human mind has cognitive limits in processing and storing information, limiting individuals' problem-solving capabilities when a wide range of knowledge and diverse information is required. Knowledge about methods, models, systems and frameworks for assessing IC is scattered across existing literature, the operations and processes of organizations, various databases and the minds of academics and practitioners (Pretorius and Coetzee 2005). This knowledge includes:
• Theoretical descriptions of existing methods;
• Knowledge on how to practically implement such methods;
• Data, information and know-how on which methods and what particular implementations thereof were used in which contexts and industries, at what levels and for what purposes;
• Whether a particular implementation was deemed successful or not;
• Theoretical descriptions of models, systems and frameworks; and
• Tacit knowledge concerning which methods should most likely be suitable under what conditions and circumstances.

Pretorius and Coetzee (2005) note that, with a growing awareness of IC as a key strategic asset of the organization, and with more and more organizations supplementing their annual reports with sections on IC, organizations and their managers feel increasing pressure to assess, manage and report on their IC assets. They argue that, due to the complexities involved in selecting and implementing an appropriate method or combination of methods for assessment of IC, and the cognitive limits of human problem solving, there is a need for a system or mechanism to manage (organize, store and retrieve) the evolving body of knowledge concerning such assessment. They refer to such a system or mechanism as a knowledge management support system, a term that will be discussed in more detail in the next section. Any such system needs to be embedded in a rather sophisticated socio-technical system to be able to deal with the variety and complexities of assessment of intellectual capital, since such assessment is a knowledge-intensive process. In this paper we focus more on the design of the technical aspects of such a system than on the design of the socio-technical system in its totality.

This paper discusses the first iteration of the development process of a conceptual design of a knowledge management support system:
• To provide management support to the process of selecting and implementing an appropriate method/strategy (or combination thereof) for assessment of IC;
• To utilize past knowledge and expertise to accelerate and improve decision-making;
• To promote synergism through integration of methods; and
• To manage the evolving body of knowledge concerning the assessment of IC.

The proposed knowledge management support system is composed of a data management subsystem, a model management subsystem, a user interface subsystem, and a knowledge-based management subsystem. This first iteration focuses on the knowledge-based management subsystem of such knowledge management support systems.

2. Terminology
The following sub-sections explain some of the terminology used in this paper.

2.1 Knowledge management support systems (KMSS)
The term "management support system", as defined by Turban et al. (2005), "refers to the application of any technology, either as independent tool or in combination with other information technologies, to support management tasks in general and decision-making in particular". These technologies include, according to Turban et al. (2005), amongst others, management information systems, executive information systems, group support systems, decision support systems (with or without knowledge-based management subsystems), expert systems and neural networks. Pretorius and Coetzee (2005) refer to a management support system (MSS) that includes a mechanism for the management of knowledge as a knowledge management support system (KMSS). Such a system could be realized as a decision support system (DSS), an expert system (ES), or some other integrated system. As illustrated in Figure 1, mechanisms for the management of knowledge could, for example, be in the form of the knowledge repository of a knowledge management system (KMS), the knowledge-based management subsystem of a decision support system, and/or a knowledge base maintained through an ES.

Figure 1: Schematic representation of a KMSS as derived from the definition by Pretorius and Coetzee (2005)

Turban et al. (2005) emphasize the difference between a knowledge-based management subsystem and a KMS, and between a knowledge base and a knowledge repository:
• A knowledge-based management subsystem is the optional fourth component of a decision support system that contains an ES or other intelligent system. (A decision support system containing a knowledge-based management subsystem can also be called an intelligent decision support system (IDSS), an ES, an active decision support system, a knowledge-based decision support system (KBDS) or an expert/intelligent system integrated with a decision support system.)
• A KMS is typically implemented as a text-oriented DSS. Such a text-oriented DSS supports decision-making by electronically keeping records of textually represented information that could affect decision-making.
• A knowledge base forms part of an expert or other intelligent system and contains the relevant knowledge for understanding, formulating and solving problems. Its two basic elements are facts (about the problem situation and the theory of the problem area) and rules or heuristics for directing the use of knowledge to solve specific problems in a specific domain.
• A knowledge repository, sometimes called an organizational knowledge base, forms part of a KMS, stores knowledge (which is often text-based), has characteristics very different from a database or knowledge base, and can be connected to the database management system or knowledge-based management subsystem of a DSS.
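To make the distinction between a knowledge base and a knowledge repository concrete, the following minimal Python sketch contrasts the two structures. The class names, fields and the simple keyword search are assumptions made for illustration; Turban et al. (2005) describe these components conceptually and do not prescribe an implementation.

# Minimal sketch of the terminological distinction above: a knowledge base holds facts
# and rules used by an intelligent system, while a knowledge repository holds largely
# text-based organizational knowledge maintained by a KMS. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:                       # used by an ES or other intelligent system
    facts: dict = field(default_factory=dict)
    rules: list = field(default_factory=list)   # callables: facts -> conclusion or None

@dataclass
class RepositoryEntry:                     # one item of text-based knowledge
    title: str
    text: str
    source: str                            # e.g. "case report", "journal article"
    tags: list = field(default_factory=list)

@dataclass
class KnowledgeRepository:                 # maintained by the KMS
    entries: list = field(default_factory=list)

    def search(self, keyword):
        kw = keyword.lower()
        return [e for e in self.entries
                if kw in e.text.lower() or kw in [t.lower() for t in e.tags]]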

2.2 Intellectual capital (IC)
Smith and McKeen (2003) note that intellectual capital (IC) is often confused with other intangible assets such as knowledge assets, human capital, intellectual property, customer capital and structural capital. Furthermore, there is an overlap between the terms being used, such as knowledge, intellectual capital and human capital. According to Brooking (1999), IC refers to the combined intangible assets that enable the company to function, including market assets, intellectual property assets, human centred assets and infrastructure assets. As shown in Table 1, Stewart, as quoted by Smith and McKeen (2003), states that it is "generally agreed by academics" that IC consists of "at least" three categories, namely human capital, structural capital and customer capital; these categories are similar to the components of IC identified by Brooking (1999), but do not explicitly include intellectual property assets. Edvinsson and Malone (1997) further subdivide structural capital into organizational, process and innovation capital.

There is some disagreement on whether IC includes intellectual property or not: Prusak, as quoted by Kannan and Aulbur (2004), defines IC as "intellectual resources that have been formalized, captured and leveraged to create assets of higher value" (including knowledge, information, intellectual property and experience). Contrary to Brooking (1999) and Prusak, as quoted by Kannan and Aulbur (2004), Bontis (1998) explicitly states that IC does not include intellectual property, defining intellectual property as "assets that include copyrights, patents, semiconductor topography rights and various design rights" and also "trade and service marks". It should be noted that, as a result of this disagreement on whether IC includes intellectual property or not, certain methods of assessment account for intellectual property and others do not.

Table 1: Components of IC
Brooking (1999): Human centred assets | Infrastructure assets | Market assets | Intellectual property assets
Stewart, as quoted by Smith and McKeen (2003): Human capital | Structural capital | Customer capital
Edvinsson and Malone (1997), subdividing structural capital: Organizational capital | Process capital | Innovation capital

2.3 Assessment
Although the terms measurement, (e)valuation and assessment are often used interchangeably in the literature, Andriessen (2004a) notes a distinct difference between valuation and measurement: he quotes Rescher (1969) as defining valuation (using the term evaluation) as a "comparative assessment or measurement of something with respect to its embodiment of a certain value" and Swanborn (1981) as defining measurement as "the process of assigning scaled numbers to items in such a way that the relationship that exists in reality between the possible states of a variable are reflected in the relationships between the numbers on the scale". Andriessen (2004a) subsequently explains that valuation requires an object to be measured, an evaluation framework and a criterion reflecting usefulness or desirability. He identifies four options for determining value (clarified in Table 2), namely financial valuation, value measurement, value assessment and measurement. The first three qualify as evaluation methods and the fourth as a measuring method. In this paper the word "assessment" includes measurement, (e)valuation and all other methods for determining value, as intended through its placement in the third column of Table 2.

Table 2: Options for determining value
Financial valuation (EVA, HRA, Tobin's Q) | (E)valuation | Assessment
Value measurement (BSC, HRA, IC Audit) | (E)valuation | Assessment
Value assessment (IC Benchmark system) | (E)valuation | Assessment
Measurement (IC Index, CWP, HRA) | Measuring | Assessment

Figure 2 below shows the relationship between the four options for determining value as expressed by Andriessen (2004a).

Figure 2: Financial valuation, value measurement, value assessment and measurement (Source: Andriessen, 2004a)

3. Existing methods for assessment of IC
Existing literature propagates a variety of methods for assessment of IC, including market-to-book ratio, Tobin's Q, return on assets, the technology broker's IC audit, the balanced scorecard, human resource accounting, the intangible asset monitor, the IC-index, the Skandia navigator, the "new IC measurement model" of Chen, Zhu and Xie, as well as citation-weighted patents (Andriessen 2004b; Best Practices, LLC 1999; Chen et al. 2004; Bontis 2001; Smith and McKeen 2003). Selecting an appropriate method or combination of methods for assessment of IC in a particular context is a complex process (Pretorius and Coetzee 2005). Smith and McKeen (2003) note that "no single metric would work in all circumstances", Sveiby (2002) that "no one method can fulfill all purposes" and Andriessen (2004a) that "the array of problems being addressed by many of the methods are so broad that it seems questionable whether they can be solved using one method."

The appropriateness of assessment methods – and of specific implementations of such methods – depends on factors such as the following (captured in the illustrative sketch below):
• Purpose of and motivation for assessment (Andriessen 2004a; Sveiby 2002);
• Context, goals, objectives and critical success factors of the organization (Harrison and Sullivan 2000; Smith and McKeen 2003);
• Circumstance or situation (Smith and McKeen 2003; Sveiby 2002);
• Audience (Sveiby 2002);
• Industry and line of business (Van Buren 1999);
• Level of assessment (Sánchez et al. 2000; Smith and McKeen 2003); and
• Level of resources the organization is willing to commit towards assessment of IC (Harrison and Sullivan 2000).

The classification of methods for and/or approaches to assessment of IC provides a valuable aid in the quest towards determining which method or approach to follow in a particular scenario. Existing classification schemes include those by Andriessen (2004a; 2004b), Best Practices, LLC (1999), Kannan and Aulbur (2004), Luthy (1998), Smith and McKeen (2003), Sveiby (2002), and Williams as quoted by Green (2005) and Sveiby (2002).
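As a concrete illustration of how a method's sphere of relevance might be recorded against the factors listed above, consider the following Python sketch. The method profile, the factor values and the matching rule are hypothetical and are not taken from the cited classification schemes.

# Illustrative only: records a method's sphere of relevance against a few of the
# appropriateness factors and filters methods that cover a given assessment context.
from dataclasses import dataclass

@dataclass
class MethodProfile:
    name: str
    purposes: set            # e.g. {"internal management", "external reporting"}
    levels: set              # e.g. {"organization", "business unit", "project"}
    industries: set          # empty set means "no restriction"
    resource_demand: str     # "low", "medium" or "high"

@dataclass
class AssessmentContext:
    purpose: str
    level: str
    industry: str
    available_resources: str # "low", "medium" or "high"

def candidate_methods(ctx, profiles):
    """Return the names of methods whose recorded sphere of relevance covers the context."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [p.name for p in profiles
            if ctx.purpose in p.purposes
            and ctx.level in p.levels
            and (not p.industries or ctx.industry in p.industries)
            and order[p.resource_demand] <= order[ctx.available_resources]]

# Hypothetical profile and context, for illustration only.
bsc = MethodProfile("Balanced scorecard", {"internal management"},
                    {"organization", "business unit"}, set(), "medium")
ctx = AssessmentContext("internal management", "organization", "manufacturing", "high")
print(candidate_methods(ctx, [bsc]))      # -> ['Balanced scorecard']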

4. Objectives of KMSS
Turban et al. (2005) emphasize that in decision-making situations, rather than attempting to reinvent the wheel each time, there is a need to draw upon past knowledge and expertise to solve recurring problems and problems similar to those solved before. Pretorius and Coetzee (2005) deduce that, due to the complexities involved in selecting and implementing an appropriate method or combination of methods for assessment of IC, systems or mechanisms are needed for managing (organizing, storing and retrieving) this evolving body of knowledge concerning the assessment of IC and the spheres of relevance of the available methods. The components of such a system are required to assist with:
• Making sense of the context and sphere of relevance of existing solutions;
• The development of relevant solutions for new problems;
• The selection of an appropriate solution; and
• The implementation of such a solution.

According to Simon as cited by Turban et al. (2005), the decision-making process involves three major phases: intelligence, design and choice. Implementation, as a fourth phase, was added later. Mapping these requirements to Simon's decision-making process (Turban et al. 2005), this could be rephrased as stating that systems or mechanisms are required to provide support to organizational analysts and managers for all four phases of the decision-making/modeling process, as illustrated in Table 3.

Table 3: Matching of requirements of KMSS with phases of Simon's decision-making process
Requirement: the selecting of an appropriate method or methods for a specific organization and purpose
• Sense-making – Intelligence
• Development of relevant solutions – Design
• Selection of appropriate solution – Choice
Requirement: the actual implementation of such method or methods
• Implementation of solution – Implementation

Analyzing various existing methods/approaches/models/strategies for assessing IC (Andriessen 2004a; Andriessen 2004b; Best Practices, LLC 1999; Bontis 2001; Castellanos, Rodríguez and Ranguelov 2004; Chen, Zhu and Xie 2004; Edvinsson and Malone 1997; Sánchez, Chaminade and Olea 2000; Sveiby 2002), it was found that:
• These methods consist of steps;
• A certain step may appear in more than one method; and
• Similar categories of methods appear to have similar/overlapping steps, even though the guidelines for executing these steps may differ.

After noticing the overlaps in steps between methods, these methods were re-evaluated in search of possible synergisms that could be achieved by combining/exchanging/adding steps from other methods. The resultant list includes the following (a minimal illustration is sketched below, after Table 4):
• The same data, e.g. goals and critical success factors of the organization, may be fed into more than one model;
• Steps described as part of one method, e.g. determining IC indicators (Andriessen 2004b; Edvinsson and Malone 1997; Sánchez et al. 2000), could be useful in other methods that, for example, also make use of indicators;
• Intermediate output, e.g. IC indicators, may be fed into more than one method, e.g. the Skandia Navigator, the Caterpillar IC base and Battery's IC system (Roos et al., as quoted by Andriessen 2004b); and
• Intermediate output derived from a model such as the Skandia Navigator (Edvinsson and Malone 1997) could be further explored by feeding it into another model, e.g. Chen et al.'s "new IC measurement model" (Chen et al. 2004), which could, for example, be used to analyse the interrelationships between categories of IC.

From the above analysis the objectives listed in Table 4 are derived for the proposed knowledge management support system (KMSS).

Table 4: Objectives of KMSS
Objective 1 | Provide management support to the process of selecting and implementing an appropriate method/strategy (or combination thereof) for assessment of IC, addressing all four phases of Simon's decision-making process: intelligence, design, choice and implementation
Objective 2 | Utilize past knowledge and expertise to accelerate and improve decision-making
Objective 3 | Promote synergism through integration of methods
Objective 4 | Manage the evolving body of knowledge concerning the assessment of IC
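The synergies listed above can be illustrated with a small Python sketch in which a shared step produces intermediate output (IC indicators) that is then fed into more than one downstream model. The step functions, indicator names and figures are invented for illustration and do not correspond to any particular published method.

# Illustrative only: a shared step produces intermediate output that two different
# "models" consume, echoing the combining/exchanging of steps discussed above.
def determine_ic_indicators(org_data):
    """A step shared by several methods: derive simple IC indicators from raw data."""
    return {
        "training_days_per_employee": org_data["training_days"] / org_data["employees"],
        "revenue_per_employee": org_data["revenue"] / org_data["employees"],
    }

def scorecard_style_report(indicators):
    """One consumer of the shared intermediate output."""
    return "\n".join(f"{name}: {value:,.2f}" for name, value in indicators.items())

def relationship_analysis(indicators):
    """A second consumer: list indicator pairs for further interrelationship analysis."""
    names = sorted(indicators)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]

org = {"employees": 200, "training_days": 1400, "revenue": 36_000_000}
indicators = determine_ic_indicators(org)      # produced once ...
print(scorecard_style_report(indicators))      # ... consumed by one model
print(relationship_analysis(indicators))       # ... and fed into another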

5. Conceptual design of KMSS
Following from the objectives derived in the previous section (Table 4), the proposed KMSS for assessment of IC needs mechanisms for decision support (objective 1), for knowledge capturing, storage and retrieval (objectives 2 and 4) and possibly for providing intelligence (objectives 1, 2, 3 and 4). Scanning and analyzing existing literature (Andriessen 2004a; Bolloju, Khalifa and Turban as cited by Turban et al. 2005; Green 2005; Jackson 1990; Nonaka 1995; Turban et al. 2005), the following short list of possible structures for such a system arises:
• A DSS, consisting of a data management subsystem, a model management subsystem, a user interface subsystem, and a knowledge-based management subsystem, possibly connected to an organizational knowledge base (maintained by an intelligent system) or to a knowledge repository (maintained by a KMS);
• A KMS integrated with a DSS and possibly an intelligent system such as an expert system (ES) or neural network; and
• A knowledge-based decision support system (KBDSS), also referred to as an intelligent decision support system (IDSS), defined by Turban et al. (2005) as a DSS that "integrates knowledge from experts".

Analysing these structures reveals that they are overlapping and that all of them consist of some or all of the following components:
• The three main subsystems of a DSS, namely a data management subsystem, a model management subsystem and a user interface subsystem;
• The fourth (optional) component of a DSS, as introduced by Turban et al. (2005), namely a knowledge-based management subsystem consisting of a knowledge base or knowledge bases maintained by one or more intelligent systems;
• One or more intelligent systems (including one or more knowledge bases); and
• A KMS (including a knowledge repository).

A closer look reveals that, in the list above, the third component is conceptually the same as the second component, and it is henceforth dropped from this list. Table 5 below shows the resulting list of components and, for each component, the objectives it facilitates (is responsible for) or assists with (supports the responsible component with).

Table 5: Matching components of proposed KMSS with objectives listed in Table 4
Components | Objectives facilitating | Objectives assisting with
1. The three main subsystems of a DSS, namely a data management subsystem, a model management subsystem and a user interface subsystem | 1 | 2, 3 and 4
2. The fourth (optional) component of a DSS, namely a knowledge-based management subsystem consisting of a knowledge base or knowledge bases maintained by one or more intelligent systems | 2 and 3 | 1 and 4
3. A KMS (including a knowledge repository) | 4 | 1, 2 and 3

It follows from Table 5 that, in order to fulfill all the objectives of the proposed KMSS listed in Table 4, all three (remaining) components need to be included. An extensive body of knowledge exists for DSS (the subject of Component 1) and related technologies, tailored since at least the 1960s to effectively support the phases of the decision-making/modeling process. The top-level schematic view of a DSS (Figure 3) therefore provides an ideal starting point for the required knowledge management support system.

Figure 3: A top-level schematic view of DSS (Source: Turban et al. 2005)

Departing from the "top-level schematic view of DSS" illustrated in Figure 3, matching the objectives identified in Table 4 for the proposed KMSS with existing theory on KMS, intelligent systems (including expert systems and neural networks), hybrid support systems and DSS (Jackson 1990; Oz 2002; Turban et al. 2005), and building upon the four subsystems of DSS as outlined by Turban et al. (2005), we arrive at the schematic representation of the proposed KMSS for assessment of IC shown in Figure 4.

Figure 4: Schematic view of proposed KMSS for assessment of IC

The following subsections elaborate on the structures of the four subsystems illustrated in the four quadrants of Figure 4, namely the data management subsystem, the model management subsystem, the user interface subsystem and the knowledge-based management subsystem. The focus and main contribution of this paper is in the quadrant of the knowledge-based management subsystem.
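Before turning to the individual subsystems, the following structural sketch in Python shows one way the four quadrants of Figure 4 could cooperate, with the user interface subsystem routing requests to the other three. All class and method names are assumed for illustration; the paper specifies the components conceptually rather than as an implementation.

# Structural sketch only: four cooperating subsystems, wired together behind the
# user interface subsystem. Bodies are deliberately trivial placeholders.
class DataManagementSubsystem:
    def __init__(self, records):
        self.records = records                      # stands in for the DSS database
    def query(self, key, value):
        return [r for r in self.records if r.get(key) == value]

class ModelManagementSubsystem:
    def __init__(self, model_base):
        self.model_base = model_base                # name -> callable model
    def run(self, name, data):
        return self.model_base[name](data)

class KnowledgeBasedManagementSubsystem:
    def recommend_model(self, scenario):
        # Placeholder for the intelligent system described in section 5.4
        return "indicator_count"

class UserInterfaceSubsystem:
    def __init__(self, data, models, knowledge):
        self.data, self.models, self.knowledge = data, models, knowledge
    def assess(self, scenario, key, value):
        model = self.knowledge.recommend_model(scenario)
        return self.models.run(model, self.data.query(key, value))

# Hypothetical usage
kmss = UserInterfaceSubsystem(
    DataManagementSubsystem([{"unit": "R&D", "indicator": "patents"}]),
    ModelManagementSubsystem({"indicator_count": len}),
    KnowledgeBasedManagementSubsystem(),
)
print(kmss.assess({"purpose": "internal management"}, "unit", "R&D"))  # -> 1

The knowledge-based management subsystem is deliberately left as a stub here; it is elaborated in section 5.4.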

5.1 Data management subsystem
According to Turban et al. (2005), the data management subsystem – refer to the bottom-left quadrant of Figure 4 – maintains internal data sources, facilitates access to internal and external data sources, and consists of a decision support database, a database management system, a data directory and a query facility. The decision support database contains data extracted from internal and external data sources, as well as from users' personal data.
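As a hedged illustration, the data management subsystem could be realized with a relational database acting as the decision support database, together with a simple data directory and query facility. The sketch below uses an in-memory SQLite database; the table and column names are invented and not prescribed by the cited literature.

# Illustrative only: decision support database, data directory and query facility
# realized with SQLite. Schema and sample values are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ic_data (
        organization TEXT, indicator TEXT, period TEXT, value REAL, source TEXT
    );
    CREATE TABLE data_directory (              -- catalog of available data
        indicator TEXT PRIMARY KEY, description TEXT, origin TEXT
    );
""")
conn.execute("INSERT INTO data_directory VALUES (?, ?, ?)",
             ("training_days", "Training days per year", "internal HR system"))
conn.execute("INSERT INTO ic_data VALUES (?, ?, ?, ?, ?)",
             ("ExampleCo", "training_days", "2005", 1400.0, "internal HR system"))

def query_facility(indicator):
    """Simple query facility over the decision support database."""
    cur = conn.execute(
        "SELECT organization, period, value FROM ic_data WHERE indicator = ?",
        (indicator,))
    return cur.fetchall()

print(query_facility("training_days"))   # -> [('ExampleCo', '2005', 1400.0)]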

5.2 Model management subsystem
According to Turban et al. (2005), the model management subsystem – refer to the top-left quadrant of Figure 4 – maintains internal models, facilitates access to internal and external models, and consists of a model base, a model base management system, model building blocks and routines, modeling languages and tools, a model directory, and a model execution, integration and command processor. The model base management system performs model creation, generation of new routines and reports, model updating and changing, model data manipulation and the interrelation of models (Turban et al. 2005).

As mentioned earlier, the appropriateness of assessment methods – and of specific implementations of such methods – depends, amongst other factors, on the context, goals, objectives and critical success factors of the organization (Harrison and Sullivan 2000; Smith and McKeen 2003), the circumstance or situation (Smith and McKeen 2003; Sveiby 2002) and the industry and line of business (Van Buren 1999). The model base of the proposed knowledge management support system therefore includes, or provides access to, models both for analyzing organizations (e.g. their contexts, goals, objectives and critical success factors) and for assessment of IC. The model directory provides a catalog and definitions for the models in the model base, as well as information concerning the availability and capability of models (Turban et al. 2005). It therefore assists with determining the sphere of relevance of models. The model execution, integration and command processor controls the execution of models, combines the operations of more than one model when required, and accepts, interprets and routes modeling instructions from the user interface subsystem (Turban et al. 2005).
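A minimal sketch of a model directory and a model execution, integration and command processor follows. The entries record what each model needs and produces, and the processor can chain models by routing one model's output into the shared working data, echoing the integration of methods discussed in section 4. All names and the two toy models are assumptions made for illustration.

# Illustrative only: a model directory plus an execution/integration processor that
# runs models and chains them by feeding outputs back into the working data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEntry:
    name: str
    description: str
    inputs: list             # names of inputs the model needs
    output: str              # name of the output it produces
    run: Callable            # callable taking the working data dict

class ModelExecutionProcessor:
    """Sketch of the model execution, integration and command processor."""
    def __init__(self, directory):
        self.directory = {m.name: m for m in directory}   # acts as the model directory

    def execute(self, name, working):
        model = self.directory[name]
        missing = [i for i in model.inputs if i not in working]
        if missing:
            raise ValueError(f"{name} is missing inputs: {missing}")
        return {model.output: model.run(working)}

    def chain(self, names, working):
        # Integration: each model's output is routed back into the shared working data.
        for name in names:
            working.update(self.execute(name, working))
        return working

# Hypothetical models: one derives an indicator, the next summarizes the indicators.
derive = ModelEntry("derive_indicators", "IC indicators from raw data",
                    ["raw_data"], "indicators",
                    lambda w: {"revenue_per_employee":
                               w["raw_data"]["revenue"] / w["raw_data"]["employees"]})
report = ModelEntry("summarize", "One-line summary of indicators",
                    ["indicators"], "summary",
                    lambda w: f"{len(w['indicators'])} indicator(s) computed")

processor = ModelExecutionProcessor([derive, report])
result = processor.chain(["derive_indicators", "summarize"],
                         {"raw_data": {"revenue": 36_000_000, "employees": 200}})
print(result["summary"])                  # -> 1 indicator(s) computed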

5.3 User interface subsystem

According to Turban et al. (2005), the user interface subsystem – refer to the top-right quadrant of Figure 4 – facilitates input, output and language processing, and is composed of a user interface management system and input and output languages. The user interface management system enables the user to interact with the model and data management systems in a variety of formats, with multiple, different dialog styles and with a variety of input and output devices.

5.4 Knowledge-based management subsystem

The knowledge-based management subsystem – refer to the bottom-right quadrant of Figure 4 – consists of a knowledge base management system (responsible for the execution and integration of the intelligent system), an intelligent system (supplying intelligence (i) for the selection of appropriate models and (ii) for knowledge acquisition), and a knowledge base (containing facts about problem situations, and rules for applying knowledge to solve problems). The intelligent system draws upon a knowledge repository for retrieval of current and best practice scenarios. The knowledge repository is maintained by a KMS and extracts information from actual implementations, as well as from external sources (derived from Oz 2002 and Turban et al. 2005).

As mentioned earlier, "rather than attempting to reinvent the wheel each time", past knowledge and expertise should be used to accelerate and improve decision-making. Knowledge accumulated over time, including best practices, can be organized and stored in a knowledge repository (also referred to as an organizational knowledge base) and used to solve similar problems (Turban et al. 2005). The knowledge repository stores knowledge on current and best practice scenarios, which is often text-based, and is populated and maintained by the KMS. The (text-based) knowledge stored in the knowledge repository of the proposed KMSS (Figure 4) is sourced from actual implementations of methods and strategies for assessment of IC, and from external sources such as experts, literature, publications, current and best practices (in any format and from any origin), the Internet, intranets and extranets. As explained by Turban et al. (2005), the six steps in the cycle of a "functioning" KMS are: (i) create, (ii) capture, (iii) refine, (iv) store, (v) manage, and (vi) disseminate.

The primary function of the intelligent system of the knowledge-based management subsystem of the proposed KMSS (Figure 4) is to provide intelligence for the selection and implementation of appropriate methods or models. It aims to answer questions such as:
• "Which model should be used for what situations?"
• "What method should be used to solve a particular problem in a particular model class?"
Turban et al. (2005) explain that the model selection required to address this type of question cannot be done by a model base management system, because it requires expertise, and suggest that such expertise could be provided manually or by a knowledge component. The knowledge base contains the relevant knowledge for understanding, formulating and solving problems. Its two basic elements are facts representing knowledge about the problem scenario and rules or heuristics for using the knowledge to solve specific problems in a particular context. The process of extracting knowledge into the knowledge base is called knowledge acquisition and includes "the accumulation, transfer, and transformation of problem-solving expertise from experts or documented knowledge sources to a computer program for constructing or expanding the knowledge base" (Turban et al. 2005).

The secondary function of the intelligent system of the knowledge-based management subsystem of the proposed KMSS (Figure 4) is to facilitate automatic knowledge acquisition from the knowledge repository to the knowledge base. Such automatic knowledge acquisition could take place either at regular intervals or whenever the intelligent system – while attempting to solve a problem (e.g. select an appropriate model for a specific scenario) – sources relevant knowledge from the knowledge repository. Oz (2002) and Turban et al. (2005) note that a knowledge engineer may be required to interface with human experts to build the knowledge base. In the proposed KMSS (Figure 4) further knowledge acquisition is performed manually by the knowledge engineer, by drawing upon the knowledge repository, as well as upon external sources such as experts and literature.
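As a toy illustration of this primary function, the sketch below shows how an intelligent system might combine simple if-then rules with scenario facts to suggest assessment models; the rule conditions and the model names used (for example "Skandia Navigator" and "Balanced Scorecard") are placeholder assumptions of ours, not recommendations produced by the proposed KMSS.

from typing import Callable, Dict, List

# Illustrative only: a toy rule base for suggesting IC assessment models.
Scenario = Dict[str, str]
Rule = Callable[[Scenario], List[str]]

def rule_external_reporting(scenario: Scenario) -> List[str]:
    # If the goal is external IC reporting, suggest reporting-oriented models (hypothetical rule).
    if scenario.get("goal") == "external reporting":
        return ["Skandia Navigator"]
    return []

def rule_strategy_alignment(scenario: Scenario) -> List[str]:
    # If the goal is aligning IC with strategy, suggest scorecard-style models (hypothetical rule).
    if scenario.get("goal") == "strategy alignment":
        return ["Balanced Scorecard"]
    return []

def recommend(scenario: Scenario, rules: List[Rule]) -> List[str]:
    # The "intelligent system": apply every rule and merge the suggestions without duplicates.
    suggestions: List[str] = []
    for rule in rules:
        for model in rule(scenario):
            if model not in suggestions:
                suggestions.append(model)
    return suggestions

if __name__ == "__main__":
    print(recommend({"goal": "strategy alignment", "industry": "banking"},
                    [rule_external_reporting, rule_strategy_alignment]))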

6. Summary and conclusion

Due to the complexities involved in selecting and implementing an appropriate method for the assessment of intellectual capital, there is a need for knowledge management support systems to assist with the decision-making process concerning the assessment of intellectual capital. Such knowledge management support systems are envisioned to serve organizations by (Pretorius and Coetzee 2005):
• Leveraging their intellectual capital as a sustainable source of competitive advantage;
• Enabling them to align their intellectual capital with their organizational context, goals, objectives and critical success factors and with the opportunities and threats posed by their environments;
• Providing support for all four phases of the decision-making process concerning the selection and implementation of models for assessment of intellectual capital;
• Promoting synergism by combining the steps of different models for assessment of intellectual capital;
• Facilitating the organization, storage and retrieval of relevant data, models and knowledge; and
• Enabling and assisting with intellectual capital reporting.


This paper proposes a conceptual design of a knowledge management support system for assessment of intellectual capital, which has resulted from the first iteration of the development process. The proposed knowledge management support system builds upon the rich body of existing knowledge on knowledge management systems, decision support systems, expert systems, and hybrid support systems (particularly as described by Turban et al. (2005)), and is composed of:
• a data management subsystem maintaining internal data sources and facilitating access to internal and external data sources;
• a model management subsystem maintaining internal models and facilitating access to internal and external models (i) for analyzing organizations and industries, and (ii) for assessing intellectual capital;
• a user interface subsystem facilitating input, output and language processing; and
• a knowledge-based management subsystem (the main focus of this first iteration of the development process) providing intelligence (i) for the selection and implementation of appropriate models and (ii) for knowledge acquisition from a knowledge repository that keeps track of current and best practice scenarios.
During further iterations of the development process, the proposed conceptual design will be refined through formal and informal sound-boarding with knowledgeable academics and practicing experts and through the implementation of partial prototypes. Such partial prototypes will serve to clarify particular problematic design aspects. Areas for further research include investigating the implications for the conceptual design of cooperation, competition and coopetition (cooperating while simultaneously competing (Loebke and Angebrn 2006)) in respect of the acquisition, diffusion and use of knowledge. Incentives and mechanisms to enable and encourage the sharing of knowledge (possibly between competitors) may be required. This could be realized in the form of a knowledge brokering system. The structure of such a knowledge brokering system would be linked to – if not included in – the conceptual design of a knowledge management support system for assessment of intellectual capital.

References

Andriessen, A. (2004a) "IC valuation and measurement: classifying the state of the art", Journal of Intellectual Capital, Vol 5, No 2, pp230-242.
Andriessen, A. (2004b) Making Sense of Intellectual Capital: Designing a Method for the Valuation of Intangibles, Butterworth-Heinemann, Oxford, p88, pp283-375.
Best Practices, LLC (1999) "Knowledge Management: Best Practices in Intellectual Assets Measurement" [online], http://www3.best-inclass.com/bestp/domrep.nsf/Conten/D159CE67DD5ED04485256E290081853A!OpenDocument Accessed on 03/05/05
Bontis, N., Dragonetti, N.C., Jacobsen, K. and Roos, G. (1999) "The Knowledge Toolbox: A Review of the Tools Available To Measure and Manage Intangible Resources", European Management Journal, Vol 17, No 4, pp391-401.
Bontis, N. (2001) "Assessing knowledge assets: a review of the models used to measure intellectual capital", International Journal of Management Reviews, Vol 3, No 1, pp41-60.
Brooking, A. (1999) Corporate Memory: Strategies for Knowledge Management, International Thomson Business Press, London, p16.
Castellanos, A.R., Rodríguez, J.L. and Ranguelov, S.Y. (2004) "University R&D&T capital: What types of knowledge drive it?", Journal of Intellectual Capital, Vol 5, No 3, pp478-499.
Chen, J., Zhu, Z. and Xie, H.Y. (2004) "Measuring intellectual capital: a new model and empirical study", Journal of Intellectual Capital, Vol 5, No 1, pp195-212.
Edvinsson, L. and Malone, M.S. (1997) Intellectual Capital: Realizing Your Company's True Value by Finding Its Hidden Brainpower, Harper Business Press, New York, p68, pp147-160.
Green, A. (2005) "A Framework of Intangible Valuation Areas" in Creating the Discipline of Knowledge Management: The Latest in University Research, Stankosky, M.S. (Ed.), Elsevier, Oxford, pp189-208.
Hanley, S. and Malafsky, G. (2003) "A Guide for Measuring the Value of KM Investments" in Handbook on Knowledge Management 2: Knowledge Directions, Holsapple, W. (Ed.), Springer, Berlin, p371.
Harrison, S. and Sullivan, P.H. (2000) "Profiting from intellectual capital: learning from leading companies", Industrial and Commercial Training, Vol 32, No 4, pp139-148.
Holsapple, C.W. (2003) "Knowledge and Its Attributes" in Handbook on Knowledge Management 1: Knowledge Matters, Holsapple, W. (Ed.), Springer, Berlin, p166.
Housel, T. and Bell, A.H. (2001) Measuring and Managing Knowledge, McGraw-Hill/Irwin, Boston, Massachusetts, pp34-36.
Jackson, P. (1990) Introduction to Expert Systems, Addison-Wesley, Wokingham, England, pp191-216.
Kannan, G. and Aulbur, W.G. (2004) "Intellectual capital: Measurement effectiveness", Journal of Intellectual Capital, Vol 5, No 3, pp389-413.
Loebke, C. and Angebrn, A. (2006) "Coopetition" in Encyclopedia of Knowledge Management, Schwartz, D.G. (Ed.), Idea Group Reference, Hershey, PA, pp58-66.
Luthy, D.H. (1998) "Intellectual capital and its measurement" [online], http://www3.bus.osakacu.ac.jp/apira98/archives/htmls/25.htm Accessed on 27/04/06
Marr, B. (2005) "Management consulting practice on intellectual capital: Editorial and introduction to special issue", Journal of Intellectual Capital, Vol 6, No 4, pp469-473.
Mouritsen, J., Bukh, P.N. and Marr, B. (2004) "Reporting on intellectual capital: why, what and how?", Measuring Business Excellence, Vol 8, No 1, pp46-54.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York, p62.
Oz, E. (2002) Management Information Systems, Thomson Learning, Boston, Massachusetts, pp343-344, pp506-355.
Pretorius, A.B. and Coetzee, F.P. (2005) "The Requirements for Management of Knowledge Concerning the Assessment of Intellectual Capital", Proceedings of the 6th European Conference on Knowledge Management, University of Limerick, Ireland, 8-9 September 2005, Remenyi, D. (Ed.), Academic Conferences, UK, pp486-495.
Rescher, N. (1969) Introduction to Value Theory, Prentice Hall, Englewood Cliffs, NJ, p61.
Sánchez, P., Chaminade, C. and Olea, M. (2000) "Management of intangibles – An attempt to build a theory", Journal of Intellectual Capital, Vol 1, No 4, pp312-327.
Smith, H.A. and McKeen, J.D. (2003) "Valuing the Knowledge Management Function" in Handbook on Knowledge Management 2: Knowledge Directions, Holsapple, W. (Ed.), Springer, Berlin, p356, p361.
Sveiby, K.E. (2002) "Methods for Measuring Intangible Assets" [online], http://www.sveiby.com/Portals/0/articles/IntangibleMethods.htm Accessed on 26/04/06
Swanborn, P.G. (1981) Methoden van sociaal-wetenschappelijk onderzoek, Boom Meppel, Amsterdam, p23.
Teece, J.D. (2003) "Knowledge and Competence as Strategic Assets" in Handbook on Knowledge Management 2: Knowledge Directions, Holsapple, W. (Ed.), Springer, Berlin, p130.
Turban, E., Aronson, J.E. and Liang, T. (2005) Decision Support Systems and Intelligent Systems, Prentice Hall, New Jersey, pp10-11, pp22-23, pp49-51, pp109-127, pp502-508, p514, pp552-556, pp583-584, pp802-822.
Van Buren, M.E. (1999) "A Yardstick for Knowledge Management", Training & Development, May, pp71-78.


Developing a Taxonomy for Knowledge Management Documents Organization in Digital Libraries
S.Mohsen Rahnamafard and Hajar Fatemi ShariatPanahi
Tehran University, Faculty of Management, Tehran, Iran
[email protected]
[email protected]
Abstract: Many people working on KM face a similar problem of classifying their documents on KM. Since KM is a multidisciplinary field of science and experts from different fields have worked on it during the last decades, there are widespread points of view. An overarching theory of knowledge management has yet to emerge, perhaps because the practices associated with managing knowledge are rooted in a variety of disciplines and domains. In this paper we suggest a new way of organizing KM documents. We develop a taxonomy aimed at helping KM experts organize KM documents in an accurate and careful way. Our proposed taxonomy of KM document organization has three main features: (1) all files on any field of KM will have a place in the taxonomy; (2) each file will have only one place; and (3) every file can be accessed within three drill-down movements.
Keywords: Information technology, knowledge management, taxonomy, KM frameworks

1. Introduction

Being in the 21st century, we are in a knowledge-based economy. What that means is quite simple: our economic well-being and competitive advantage depend on knowledge resources. In fact, intellectual capital and knowledge assets are the prime factors and resources of production in the knowledge-based economy. In the words of Jack Welch, former CEO of General Electric, "Intellectual capital is what it's all about. Releasing the ideas of your people is what we're trying to do, what we've got to do if we're going to win." (Stankosky 2005)

Although there is agreement that a domain with a developing trend, called knowledge management, exists, there is no widely approved definition of it, nor any clear border for the domains it includes. This is apparent in many papers and resources such as Beckman (1999), (Lueg 2002), (Davenport and Cronin 2000), (Bollinger & Smith 2001), (De Grooijer 2000), (Holsapple & Joshi 2000) and (Bullinger et al. 1998). Knowledge management is a new and exciting topic of academic research that has been adopted, over the past few years, by many diverse discipline areas, with a variety of points of view. In practice, KM is a cross-functional area in organisations and there is disagreement as to whether it should be considered a technical issue, a human resources issue, a procedural issue or a part of strategic management (Bollinger & Smith 2001).

A study by Stankosky reviewing knowledge management curriculum programs in 2000 identified an initial collection of knowledge management study impact areas; Table 1 shows the variety and diversity of the disciplines and domains of knowledge involved.

Table 1: List of knowledge management study impact areas
Systems Theory; Risk Management; Assessment; Intelligent Agents; Management of R&D; Decision Support Systems; Modeling and Simulation; Data Mining / Data Warehousing; Enterprise Resource Planning; Business Process Engineering; Systems Analysis; Systems Engineering; Leadership; Ethics; Communications Theory; Organizational Psychology; Visualization; Groupware; Virtual Networks; Strategic Planning; Management-by-Objectives; Total Quality Management; Management Theory; Management of IS; Database Design / Database Management Systems; Data Communications & Networks; etc.

On the other hand, the study done by Michael J.D. Sutton at McGill University (2002) showed that, lacking an accepted definition of KM, academics (as well as practitioners) have not been able to agree upon curriculum, certification benchmarks, or course contents. A tabulation of the 79 KM programs discovered so far, offered by 47 institutions, is presented below. Table 2, listing the schools that present KM courses, shows the variety and diversity of the disciplines and domains of knowledge involved.

Table 2: List of schools presenting KM courses
Department of Computing Science; School of Information Management and Systems; Department of Management; Division of Communication and Education; College of Science; School of Media and Information; School of Information Systems; College of Business and Economics; Department of Artificial Intelligence; Faculty of Communications; Faculty of Health and Science; School of Public Policy; School of Engineering and Applied Science, Engineering Management and Systems; School of Knowledge Science; Department of Behavior in Organizations; College of Continuing and Adult Education; Division of Education, Arts and Social Sciences; Center for Professional Development; School of Library and Information Science

Outside the academic world, in business environments much has also been done to define and categorize KM domains. One of the most important efforts is the European Council effort to establish a standard framework of KM. Figure 1 shows a schema of the results of a brainstorming workshop attempting to identify KM domains.

Figure 1: A brainstorming session result on KM domains (Weber 2001)

However, one of the most striking aspects of knowledge management is the diversity of the domain and the lack of universally accepted definitions of the term itself and its derivatives, knowledge and management. As a consequence of the multidisciplinary nature of KM, the terms inevitably hold a difference in meaning and emphasis for different people. In view of the above discussion, an interesting point is that an overarching theory of knowledge management has yet to emerge, perhaps because the practices associated with managing knowledge are rooted in a variety of disciplines and domains. Some KM specialists have made suggestions for taxonomies and matrixes of KM concepts, but most of them have different aims, such as trying to identify future research directions in KM or developing a framework covering only KM applications and tools. None of them takes a bird's-eye-view approach to the classification of the whole body of KM documents and records. On the other hand, in recent investigations on digital libraries several scholars have argued that it is relevant to provide a shared model of the term taxonomy employed to classify the documents of the library itself; for an up-to-date viewpoint, containing relevant references, see (Malizia et al., 2004 cited by Bellomi 2005). So, developing a general and standard taxonomy model with more reliable and comprehensive acceptance, covering all aspects of KM, would be a way to solve the problem of KM classification in libraries, particularly in digital libraries. From this point of view, and after a complete review of the literature on KM taxonomies, matrixes and other classifications, we suggest a taxonomy that can help KM specialists or librarians, or be used in digital libraries, to classify KM documents with the following characteristics:


1. All files on any field of KM will have a place in our taxonomy.
2. Each file will have only one place.
3. Every file can be accessed within three drill-down movements.

2. Related works

There are two approaches to the categorization of a body of knowledge. The first is the "taxonomy", which refers to so-called tree views, hierarchies, directories and the like. Taxonomy was once only the science of classifying living organisms, but later the word acquired a wider sense, and now it also refers to either a classification of things or the principles underlying the classification. Almost anything (living objects, non-living objects, places, and events) may be classified according to some taxonomic scheme. Taxonomies are frequently hierarchical in structure and may include children with multiple parents. A taxonomy might also be a simple organization of objects into groups, or even an alphabetical list. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects.

The second approach to classification is frameworks, which we name matrixes in this paper, inspired by mathematical concepts. When we want to organize related things, we use a multilateral matrix or table to compose different subjects into one frame and use this frame to address a new subject. Frameworks are understood as a holistic and concise description of the major elements, concepts and principles of a domain. They aim to explain a domain and define a standardized schema of its core content as a reference for future design implementations. A KM matrix explains the world of KM by naming the major KM elements, their relationships and the principles of how these elements interact. In contrast to taxonomies, matrixes cover more interdisciplinary or interrelated things and subjects. Building on the issues discussed above, this research effort leads to an approach for classifying KM documents in a new taxonomy; first, however, a complete literature review of these two types of categorization is presented.
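To make the tree view described above concrete, the short sketch below (our own illustration, not taken from the cited literature) represents a hierarchical taxonomy as nested nodes and shows how a document is reached by drilling down through successive classification levels; the example classification names and file name are hypothetical.

from typing import Dict, List, Optional

class TaxonomyNode:
    # A node in a hierarchical taxonomy: a classification with more specific child classifications.
    def __init__(self, name: str):
        self.name = name
        self.children: Dict[str, "TaxonomyNode"] = {}
        self.documents: List[str] = []  # files filed directly under this classification

    def add_child(self, name: str) -> "TaxonomyNode":
        self.children[name] = TaxonomyNode(name)
        return self.children[name]

    def drill_down(self, path: List[str]) -> Optional["TaxonomyNode"]:
        # Follow a path of classification names downwards from this node.
        node: Optional["TaxonomyNode"] = self
        for step in path:
            node = node.children.get(step)
            if node is None:
                return None
        return node

# Hypothetical example: the root classification applies to all KM documents.
root = TaxonomyNode("KM documents")
root.add_child("KM technologies").add_child("Groupware").documents.append("intro_to_groupware.pdf")
print(root.drill_down(["KM technologies", "Groupware"]).documents)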

2.1 Taxonomies

The first and most popular taxonomy of the KM domain is the synergic triad of people, process and technology. As Australian government departments showed in the "Better Practice Checklist of KM", we can add a content aspect to this categorization in order to cover the concept more accurately. On the other hand, KM has emerged as an inter-disciplinary framework to assist organizations to engage in the wider information/knowledge economy. As KM impacts all aspects of an organization, we can use classifications from organization theory in managing its documents. Richard L. Daft explains six basic components of any organization in his book Organization Theory: strategy, structure, human resources, culture, systems and technology. We can use these components for our categorization. Generally, we can divide knowledge tools into three categories: knowledge generation (identification, selection, acquisition, etc.), knowledge organizing (representation, codification, refining, etc.) and knowledge transfer (distribution, accessing, sharing, utilization, etc.). Still more well-known taxonomy classifications of KM can be found, such as the European KM framework, the Australian government KM framework, the KMBOK domains, etc. We describe some of them in detail below. It is noticeable that, with respect to our first definition, most of these KM classifications have used the word framework wrongly; so, based on that definition, we categorize them as taxonomies.

2.1.1 Australian government KM framework

There are several projects worldwide working on standards for KM (in Britain, Europe, South Africa, Asia and the USA). Australia is well advanced in its process, with an Interim KM Standard released in 2003 and the full Standard in 2004. The standards project aims to provide a national focal point for KM and to stimulate Australian advances in the area. The Standard initiative followed the publication by Standards Australia of a preliminary document on KM (Standards 2003), developed with input from an extensive list of Australian KM researchers and practitioners, which included the integrated KM framework shown in Figure 2.


Figure 2: Australian government KM framework (Standards 2003). The framework comprises knowledge foundations, knowledge processes and knowledge alignment, with elements including context, culture, technology, sustaining systems, analysis, planning, sharing, acquisition and creation.

2.1.2 European KM framework

The first draft of the European KM framework (work on which is still in progress and will be further developed and improved) includes the following core modules:
• Strategies
• Human + social KM issues
• Organization
• Processes
• Leadership
• Performance measurement
• Business cases + implementation
• Technologies

Figure 3: European KM framework (Weber et al. 2002)

2.1.3 KIM taxonomy

The preliminary results of a theoretical analysis of the literature (Giaglis 2002) suggest that there might be an opportunity for drawing up a list of sub-areas within the overall field of knowledge and information management in the form of a taxonomy. According to this novel classification for Knowledge and Information Management research, the KIM area consists of two domains that present different characteristics and research roadmaps: Knowledge Management and Information Management. Each of these two main research areas can be further divided into a number of more detailed themes, as shown in Figure 4.


Figure 4: KIM Taxonomy (Giaglis 2002)

2.1.4 KM body of knowledge (KMBOK)

The KM Institute is building a specialized KMBOK. To this end, the KM Institute has initially defined 30 domains and 17 industrial sectors, which are being staffed with world-class specialists. The domains are shown in Table 3.

Table 3: KMI Taxonomy (KMI 2005)
Appreciative Inquiry (Best Practice Benchmarking); Business Disciplines: BPR, TQM, ISO9000, CMM, etc.; Change Dynamics & Management; Communities of Practice; Complexity (Chaos) Thinking & Applications; Content Management; Customer Relationship Management; Decision Support; Human Capital; Information Architecture; Innovation & Creativity; Intellectual Property; Leadership; Learning Theory; Knowledge Audits/Mapping; Knowledge Economics; Knowledge Engineering; KM Methods (Methodology); KM Performance (Metrics); KM Sciences; KM Techniques; Organizational Learning; Personal KM Skills; Project Management; Semantic Web & Networks; Storytelling & Narrative; Strategic & Systems Thinking; Knowledge Workforce Performance; Workgroups & Teams; KM Technologies

2.1.5 IKMF of Handzic

This framework distinguishes between three generic types of knowledge processes: those that generate new knowledge; those that transfer existing knowledge; and those that utilize possessed knowledge to produce new knowledge.

Figure 5: IKMF of Handzic (Handzic 2001)

The framework further proposes two broad groups of socio-technological factors as major knowledge enablers. These include organizational environment and technological infrastructure. The most important factors within the organizational environment category include organizational structure, leadership and culture. Technological factors, on the other hand, cover a wide range of information and communication technologies and systems that provide a platform for knowledge processes.

2.1.6 IST project WISE KM tools and products

Deliverable D-1.3 of the IST project WISE (2002) – Web-enabled Information Services for Engineering – describes a platform intended to fill existing gaps and to prove useful in supporting and improving the existing knowledge communication and work in engineering design for large engineering organizations. In order to better organize the large realm of KM tools and products, and to facilitate the splitting of responsibilities among the WISE partners that participated in the authoring of the deliverable, they reached this classification:
• Collaboration
• Knowledge Mapping
• Data Mining
• Information Retrieval
• Knowledge Discovery
• Online Training Systems
• Document Management
• Organizational Memory
• Pure Knowledge Management

2.1.7 KM architecture

Ronald Maier and Thomas Hädrich (2004) present an ideal layered architecture for KMS, representing an amalgamation of theory-driven designs, in their paper "Centralized versus Peer-to-Peer Knowledge Management Systems":
• Access services (authentication, translation, transformation, application (e.g. browser), appliance (e.g. mobile))
• Personalization services (portals, profiling, push services, etc.)
• Knowledge services (discovery: search, mining, visualization, navigation, knowledge maps; publication: format, structuring, contextualization, workflow, co-authoring; collaboration: skill management, community, expertise management, awareness management; learning: authoring, course management, evaluation, content management)
• Integration services (taxonomy, knowledge structure, ontology, meta data, directory, synchronization services)
• Infrastructure services (intranet/extranet, file server, imaging, asset management, security, loading)
• Data and knowledge bases (RDBMS, Internet, DMS, data warehouses)
There are also many other examples like those above, which we omit, summarizing the field through the samples mentioned.
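As a rough illustration only, the layered architecture listed above could be represented as an ordered stack of layers, each offering named services; the layer names follow the list above, while the code structure and the abbreviated service spellings are our own simplification.

from collections import OrderedDict
from typing import Optional

# Illustrative stack of the layered KMS architecture; service lists are abbreviated.
LAYERS = OrderedDict([
    ("Access services", ["authentication", "translation", "transformation"]),
    ("Personalization services", ["portals", "profiling", "push services"]),
    ("Knowledge services", ["search", "mining", "visualization", "collaboration", "learning"]),
    ("Integration services", ["taxonomy", "ontology", "meta data", "directory"]),
    ("Infrastructure services", ["intranet/extranet", "file server", "security"]),
    ("Data and knowledge bases", ["RDBMS", "DMS", "data warehouses"]),
])

def locate_service(service: str) -> Optional[str]:
    # Walk the stack top-down and report which layer offers the requested service.
    for layer, services in LAYERS.items():
        if service in services:
            return layer
    return None

print(locate_service("ontology"))  # -> Integration services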

2.2 Matrixes

2.2.1 Integrative framework and review of emerging themes

Figure 6 represents a summary of the efforts of Argote (2003) to create a framework for organizing the literature based on the relative positioning of work along two critical dimensions: knowledge management outcomes (knowledge creation, retention, and transfer) and properties of the knowledge management context (properties of units, properties of the relationships between units, and properties of knowledge). The knowledge management outcomes are represented on the vertical axis. Knowledge creation occurs when new knowledge is generated in organizations.


Attempts to transfer knowledge can lead to the creation of new knowledge. Despite the diversity of research on knowledge management, theoretical explanations can be organized according to three properties of the context within which knowledge management occurs: properties of units (e.g., an individual, a group, or an organization), properties of the relationships between units, and properties of the knowledge itself. The context dimension displayed along the horizontal axis highlights the fact that different theories of knowledge management give causal priority to different contextual properties. The framework in Figure 6 can be used to characterize research in the knowledge management literature.

Figure 6: Integrated framework and review of emerging themes (Argote 2003)

Because many studies in the special issue analyze more than one outcome and more than one contextual variable, Argote (2003) did not place studies in the cells of the framework; rather, they discuss relevant findings from each study as they elaborate the framework. The framework can be used to identify points of integration across different traditions and gaps in our understanding of knowledge management.
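To show how such a two-dimensional framework can be used to position work, the toy sketch below indexes studies by an (outcome, context property) pair; the dimension values follow Argote (2003), while the function and the example placement are our own illustration.

from typing import Dict, List, Tuple

# Dimension values follow Argote et al. (2003); the code itself is only illustrative.
OUTCOMES = ["creation", "retention", "transfer"]
CONTEXTS = ["units", "relationships between units", "knowledge"]
Cell = Tuple[str, str]

def add_study(matrix: Dict[Cell, List[str]], study: str, outcome: str, context: str) -> None:
    # Place a study in the cell given by its primary outcome and contextual property.
    assert outcome in OUTCOMES and context in CONTEXTS
    matrix.setdefault((outcome, context), []).append(study)

framework: Dict[Cell, List[str]] = {}
add_study(framework, "Reagans and McEvily (2003)", "transfer", "relationships between units")
print(framework)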

2.2.2 A framework for characterizing KM methods, practices, and technologies

Newman (1999) describes his framework (Table 4) to be used in several ways:
• To organize and classify knowledge management methods, practices and technologies by relating them to distinct phases of the targeted knowledge flows
• To examine knowledge flows to understand the interactions and dependencies among pieces of information, communicators and their associated behaviors

Table 4: A framework for characterizing KM methods (Newman 1999)

This is not the only way the framework can be displayed. The framework is a general-purpose tool that can be applied to a variety of problems and solutions and adapted to individual work styles. (Newman 1999)


2.2.3 Knowledge management conceptual framework

In laying out all the so-called models, elements, definitions, pronouncements, cautions, and approaches, it became apparent that there were four principal areas or groupings, each containing many elements. The challenge was to find names for these four groupings and to validate them through some scientific approach. All the KM elements were grouped under the following four pillars:
• Leadership/management
• Organization
• Learning
• Technology
A KM research conceptual framework, based on the four pillars, has been constructed; it incorporates the various functions of KM: knowledge assurance, knowledge capture, knowledge retention, knowledge transfer, and knowledge utilization (Figure 7). As shown in Figure 7, each function is further divided into various categories. (Stankosky 2005)

Figure 7: Levels of the knowledge management conceptual framework (Stankosky 2005)

2.2.4 Toronto University knowledge management frameworks

There are two ways of navigating the content in this guide: a "Conceptual" view that focuses on theories and concepts, and an "Application" view that highlights practices and case studies of organizations. Both views are based on a Knowledge Management Framework that the Toronto faculty has developed and applied in its projects. The framework identifies three types of organizational knowledge (tacit, explicit, and cultural knowledge) and four levels of organizational enablers (vision and strategy, roles and structure, practice and process, tools and platforms). Together, these elements provide a systematic way of looking at KM, which they define as: "A framework for designing an organization's goals, structures, and processes so that the organization can use what it knows to learn and to create value for its customers and community." (Cheung 2004)

Conceptual Framework: this framework shows the fundamentals of knowledge management.

Figure 8: Toronto Univ. KM conceptual framework (Cheung 2004), mapping the enablers (values and strategy, roles and structures, process and practice, tools and platforms) against tacit knowledge (knowledge creation), explicit knowledge (knowledge sharing) and cultural knowledge (knowledge use)

Application Framework: this framework shows how KM can be applied to enrich an organization.

Figure 9: Toronto Univ. KM application framework (Cheung 2004)

3. Proposed taxonomy

It is easy to find problems in the taxonomies and matrixes mentioned above. Some of them, e.g. WISE and the KM architecture, were developed for a special purpose, so they are not useful for other purposes. In others, there is no precise separation between the layers of the taxonomy or the cells of the matrixes. For example, in the KMI Taxonomy a document about "Evolution of R&D Capabilities, the Role of Knowledge Networks within a Firm" could be placed in the "Intellectual Property" and "Communities of Practice" as well as the "Innovation & Creativity" positions. Besides, none of these categorizations offers its users any clear guidance or method for classifying files based on its model. And finally, offering many possible positions will confuse users. In an attempt to answer these problems, we suggest a new model for KM classification. Basically, this model treats KM as a new paradigm that has been affecting all other domains and fields of study, and it approaches KM in a more macro way. The optimization criteria for the categorization as a taxonomy are an even distribution of files over the different levels and coverage of the three basic characteristics explained in the introduction. In our taxonomy, there are three hierarchical layers: science, knowledge and knowledge management. In the first layer, science is divided into human and social related and tools and technology related, based on the philosophy of science. The second layer divides each of these two into three kinds of knowledge, declarative, strategic and procedural, based on epistemology. Finally, in the third layer, and based on different knowledge lifecycles, each kind of knowledge is divided into four activities: acquisition, analysis, storing and using. A hierarchical structure with 26 lower positions, 6 middle positions and 2 higher positions will thus be able to place all KM files in a semi-dense way.

Figure 10: Our proposed taxonomy of knowledge management

3.1 1st layer: philosophy of science

The philosophy of science is the branch of philosophy that studies the philosophical assumptions, foundations, and implications of science. Science draws conclusions about the way the world is and the way in which scientific theories relate to the world (Toulmin 1967). In view of this philosophy, research in KM generally falls under two broad categories. One research stream concentrates primarily on the technological issues surrounding the development and implementation of knowledge management systems. This stream of research draws predominantly on findings from the fields of computer science and information systems and sees knowledge management as an application area that extends the traditional realm of databases and information management into so-called knowledge bases and knowledge management systems. A separate research stream approaches the same kinds of problems from a complementary perspective and attempts to tackle the managerial, organizational, and human issues surrounding the successful implementation of knowledge management within organizations. The primary reference disciplines for such research are management science, psychology, sociology, and other social sciences (Giaglis 2002). Gaßen (1999) likewise differentiated the KM world into two branches of theory: on the one side a technology-centred KM thinking which is mechanistic, productivity driven and based on systems implementation, and on the other side a human-centred KM thinking which is based on constructivism, cognitive principles, and interaction approaches. Thus, talking about an academic discipline like KM, we can consider two different aspects: the first is the humanistic and social aspect and the second is the machines and tools aspect.

3.2 2nd layer: epistemology

Epistemology is the branch of philosophy which deals with the nature, origin, and scope of knowledge. The word "epistemology" originated from the Greek words episteme (knowledge) and logos (word/speech). There are different approaches to the theory of knowledge. Alavi and Leidner (2001) categorized knowledge as declarative, procedural, causal, conditional, and relational. These types of knowledge are categories that can describe or explain events or things: declarative knowledge is 'know-about' knowledge; procedural knowledge describes how certain activities can be done, or what we call 'know-how' knowledge; causal knowledge describes reasons for certain happenings; conditional knowledge describes when certain activities or phenomena may occur; and relational knowledge describes the relationships between occurrences of events or activities (Alavi 2001).

Fred Nickols (2003) has shown that, generally, three meanings are intended when using the word "knowledge". First, we use it to refer to a state of knowing, by which we also mean to be acquainted or familiar with, to be aware of, to recognize or apprehend facts, methods, principles, techniques and so on. This common usage corresponds to what is often referred to as "know about". Second, we use the word "knowledge" to refer to what Peter Senge calls "the capacity for action", an understanding or grasp of facts, methods, principles and techniques sufficient to apply them in the course of making things happen. This corresponds to "know how". Third, we use the term "knowledge" to refer to codified, captured and accumulated facts, methods, principles, techniques and so on. When we use the term this way, we are referring to a body of knowledge that has been articulated and captured in the form of books, papers, formulas, procedure manuals, computer code and so on (Nickols 2003).

However, we adopt the cognitive psychologists' sorting of knowledge, which divides knowledge into two categories, declarative and procedural; some add strategic as a third category. Declarative knowledge has much in common with explicit knowledge, where declarative knowledge consists of descriptions of facts and things or of methods and procedures. One view of procedural knowledge is that it is the knowledge that manifests itself in the doing of something. As such it is reflected in motor or manual skills and in cognitive or mental skills. This knowing-is-in-the-doing view of procedural knowledge is basically the view of John Anderson, the Carnegie-Mellon professor mentioned earlier. Another view of procedural knowledge is that it is knowledge about how to do something. This view of procedural knowledge accepts a description of the steps of a task or procedure as procedural knowledge. Strategic knowledge is a term used by some to refer to what might be termed know-when and know-why. Although it seems reasonable to conceive of these as aspects of doing, it is difficult to envision them as being separate from that doing. (Nickols 2003) (Anderson, 1976; 1993; 1995)

3.3 3rd layer: knowledge management

Finally, in the last layer we focus on knowledge processes or, in other words, the KM life cycle. KM specialists have introduced different life cycles for KM activities during recent years (Table 5).

Table 5: Different KM lifecycles (Ghaeni et al 2005). The table compares, phase by phase (up to eight phases), the KM life-cycle models of Wiig (1993), Holsapple & Joshi (1998), Davenport and Prusak (1998), Van der Spek & Spijkervet (1997), Gartner Group (1998), Arthur Andersen and APQC (1996), Probst, Raub & Romhardt, Newman and Conrad (1999) and Ruggles (1997).

Considering all of these approaches, we can characterize KM by the following four activities in sum:
1. Acquire knowledge (learn, create, or identify);
2. Analyze knowledge (assess, validate, or value);
3. Preserve knowledge (organize, represent, or maintain); and
4. Use knowledge (apply, transfer, or share).
Do not get too concerned by the choice of words used here, but accept that to manage knowledge you must first have some knowledge to manage, you may need to analyze the knowledge you have, you will need to store the knowledge, and of course you will want to be able to access and use the knowledge in the future. (Watson 2003)
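Bringing the three layers together, the sketch below illustrates how a KM document could be assigned a unique position by exactly three drill-down choices (a science aspect, a knowledge type and a KM activity); the layer values follow sections 3.1 to 3.3, while the function and the example document are our own hypothetical illustration.

from typing import Tuple

# Layer values follow the proposed taxonomy (sections 3.1 to 3.3); the code is illustrative only.
SCIENCE = ("Human and social related", "Tools and technology related")
KNOWLEDGE = ("Declarative", "Strategic", "Procedural")
KM_ACTIVITY = ("Acquire", "Analyze", "Preserve", "Use")

def classify(science: str, knowledge: str, activity: str) -> Tuple[str, str, str]:
    # Three drill-down choices give every file exactly one position in the taxonomy.
    if science not in SCIENCE or knowledge not in KNOWLEDGE or activity not in KM_ACTIVITY:
        raise ValueError("unknown taxonomy position")
    return (science, knowledge, activity)

# Hypothetical example: a how-to report on a data-mining tool used to acquire knowledge.
position = classify("Tools and technology related", "Procedural", "Acquire")
print(" / ".join(position))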


4. Conclusions

Reviewing most of the taxonomies and matrixes that have been presented for categorizing knowledge management content, we proposed a new taxonomy based on three layers: science, knowledge and knowledge management. The proposed taxonomy is based on the literature review above and has been developed to solve some problems in existing classifications. It therefore has three main characteristics:
1. All files on any field of KM will have a place in our taxonomy.
2. Each file will have only one place.
3. Every file can be accessed within three drill-down movements.
We recommend that future work investigates the nature of KM more precisely and adds possible further layers to this taxonomy, tests the taxonomy in a pilot community, analyzes the results for improvements, and establishes general rules for better organizing files and documents.

References

Alavi, M. and Leidner, D. (2001) "Review: Knowledge Management and Knowledge Management Systems, Conceptual Foundations and Research Issues", MIS Quarterly, 25(1), pp107-136.
Anderson, J.R. (1976) Language, Memory and Thought, Erlbaum, Hillsdale.
Anderson, J.R. (1993) Rules of the Mind, Erlbaum, Hillsdale.
Anderson, J.R. (1995) Cognitive Psychology and Its Implications, W.H. Freeman and Company, New York.
Argote, L., McEvily, B. and Reagans, R. (2003) "Managing Knowledge in Organizations: An Integrative Framework and Review of Emerging Themes", Management Science, Vol 49, No 4, April 2003, pp571-582.
Beckman, T.J. (1999) "The Current State of Knowledge Management" in Liebowitz, J. (Ed.), Knowledge Management Handbook, CRC Press, Boca Raton, FL.
Bellomi, Cristani and Cuel (2005) "A cooperative environment for the negotiation of term taxonomies in digital libraries", Library Management, Vol 26, No 4/5, 2005, pp271-280.
Bollinger, A.S. and Smith, R.D. (2001) "Managing organisational knowledge as a strategic asset", Journal of Knowledge Management, Vol 5, No 1, pp8-18.
Bullinger, H.-J., Wörner, K. and Prieto, J. (1998) "Wissensmanagement – Modelle und Strategien für die Praxis" in Wissensmanagement. Schritte zum intelligenten Unternehmen, Springer, Berlin, pp21-39.
Cheung, P. and Shaheen, U. (2004) Knowledge Management Resource Guide Designed, FIS.
Davenport and Cronin (2000) "Knowledge management: semantic drift or conceptual shift", Journal of Education for Library and Information Science, Vol 41, No 4, pp294-306.
De Grooijer, J. (2000) "Designing a knowledge management performance framework", Journal of Knowledge Management, Vol 4, No 4, pp303-310.
Gaßen, H. (1999) "Wissensmanagement – Grundlagen und IT-Instrumentarium", Arbeitspapiere WI, Nr. 6/1999, Ed.: Lehrstuhl für allgemeine BWL und Wirtschaftsinformatik, Johannes Gutenberg Universität, Mainz, p12.
Ghaeni, E., Fatemi, H., Motevallian, S.A. and Rahnamafard, S.M. (2005) "Practical review of KM life cycles in organizations", Proceedings of the 5th National Conference on R&D and Knowledge Management, winter 2005, Tehran, Iran.
Giaglis, G. (2002) "European Research on Knowledge and Information Management: Current Status and Future Prospects", Proceedings of the 2nd International Conference on Systems Thinking in Management, pp19-25.
Handzic, M. and Hassan, H. (2003) The Search for an Integrated KM Framework, Australian Studies in Knowledge Management, University of Wollongong Press.
Handzic, M. (2001) "Knowledge Management: A Research Framework", Proceedings of the European Conference on Knowledge Management, Bled.
Holsapple, C.W. and Joshi, K.D. (1999) "Description and Analysis of Existing Knowledge Management Frameworks", Proceedings of the 32nd Hawaii International Conference on System Sciences.
KM Institute (KMI) (2006) "Knowledge Management Body of Knowledge", http://www.kminstitute.org/index.php?page=blank&subpage=rationale
Lueg and Alamán (2002) "Knowledge Management and Information Technology", Novática and Informatik/Informatique, UPGRADE Vol. III, No. 1, February 2002, p7.
Maier, R. and Hädrich, T. (2004) "Centralized versus peer-to-peer knowledge management systems", Proceedings of OKLC04, Innsbruck.
Newman, B. and Conrad, K. (1999) "A Framework for Characterizing Knowledge Management Methods, Practices, and Technologies", George Washington University Course EMGT 298.T1, Spring 1999.
Nickols, F. (2003) "The Knowledge in Knowledge Management", Knowledge Management Yearbook 2000-2001.
Reagans, R. and McEvily, B. (2003) "Network Structure and Knowledge Transfer: The Effects of Cohesion and Range", Administrative Science Quarterly, Vol 48, pp240-267.
Roehl, H. (2000) Instrumente der Wissensorganisation, Deutscher Universitätsverlag, Wiesbaden.
Schindler, M. (2000) Wissensmanagement in der Projektabwicklung, Josef Eul Verlag, Köln.
Standards (2003) HB275 Knowledge Management: A Framework for Succeeding in the Knowledge Era, Standards Australia.
Stankosky, M.A. (2005) "Advances in knowledge management: University research toward an academic discipline" in Creating the Discipline of Knowledge Management: The Latest in University Research, pp1-14.
Sutton, M.J.D. (2002) Topical Review of Knowledge Management Curriculum Programs in University Graduate Schools: Library and Information Science, Business, Cognitive Science, Information Systems and Computer Systems Schools, McGill University, Graduate School of Library and Information Studies.
The WISE consortium (2002) Review of KM Tools, Ref. 1.2 CYS/020326-1/Version A4, Issuing Date: 26-03-2002.
Toulmin, S. (1967) The Philosophy of Science, Hutchinson University Library, London.
Watson, I. (2003) Applying Knowledge Management: Techniques for Building Corporate Memories, Morgan Kaufmann Publishers.
Weber, F. and Kemp, J. (2001) "Common Approaches and Standardisation in KM", European KM Forum Workshop Report from a Workshop at the European Commission's IST Key Action II Concertation Meeting, Brussels, 14 June 2001.
Weber, F., Wunram, M., Kemp, J., Pudlatz, M. and Bredehorst, B. (2002) "Standardisation in Knowledge Management – Towards a Common KM Framework in Europe", Proceedings of UNICOM Seminar "Towards Common Approaches & Standards in KM", 27 February 2002, London.


From Teaching to Practicing Knowledge Management in Internal Auditing Services
Camille Rosenthal-Sabroux1 and Michel Grundstein2
1 Paris – Dauphine University, Lamsade, Paris, France
2 Bénédicte Huot-de Luze, Senior Manager Internal Audit Services, KPMG
[email protected]
[email protected]
Abstract: Our research focuses on information systems, knowledge management and decision aid. In 1997, we initiated the teaching of Knowledge Management (KM) in a Master's degree specializing in Information Technologies (IT) Audit, since it is strongly related to companies' needs. This specialty has been well recognized for more than five years by most practitioners in the sectors of IT audit and IT consulting. Knowledge Management is related to other disciplines such as information systems, the information system audit approach, the quality management approach, organizational learning and process modelling. Knowledge Management does not exist as a full discipline. Furthermore, the French Association of Internal Audit (IFACI) has created a research group on Internal Audit and KM. This research group identified seven key success factors for introducing a KM approach in an Internal Auditing Service. In this paper we present Knowledge Management as "the management of activities and processes that enhances creation and utilization of knowledge within an organization. It aims at two strongly linked goals: a patrimony goal and a sustainable innovation goal with economic and strategic, human, socio-cultural and technological underlying dimensions". Then we emphasize the interest of developing KM in an Internal Auditing Service and we discuss the key success factors highlighted by the research group.
Keywords: Knowledge management, information technology audit, success factors and problems raised, teaching KM, tacit and explicit knowledge.

1. Introduction

Internal Audit directions increasingly have to prove their performance and follow a quality approach. Scandals such as Enron and WorldCom have given rise to a general mistrust regarding the sincerity and truthfulness of the declarations of companies and their management. This loss of confidence has consequently led some governments to strengthen legislation on governance and internal controls. According to the 2005 IFACI survey, Internal Audit is deeply involved in the process of meeting the new legal requirements. Indeed, 65% of Internal Audit Directions have had to draft the project reports intended to meet the Sarbanes-Oxley requirements, and 92% of companies subject to Sarbanes-Oxley have involved their internal audit directions in the project. According to Michael P. Cangemini in Information Systems Control Journal, Volume 2, 2006, page 5, reporting on the Financial Executives International (FEI) Current Financial Reporting and Issues (CFRI) Conference of 9 November 2005 in New York, USA, the conference was opened by an address from FEI President Colleen Cunningham, who said, "I believe Sarbanes-Oxley has brought about significant and positive change. Activities are conducted with more transparency and with investors in mind. The law has laid the foundation for stronger corporate governance, meaning that investors' faith in the integrity of the capital markets has real reason to be restored. Thankfully, we're seeing a renewed emphasis on ethical conduct and an understanding of how tone at the top drives the actions of an organization."

Furthermore, these scandals have triggered an increase in the materialisation of evidence. Trusting oral information is now out of the question. Everything has to be written and recorded so as to document and support any statement. Internal Audit has evolved, adapted and professionalized. It is nowadays creating value, with experienced practitioners. However, Internal Audit turnover is high, at more than 23%, i.e. more than a fifth of the staff is renewed each year. Indeed, the internal audit function is a transitory position; the average length of stay in an internal auditor position is 3 years. The risk of losing knowledge is consequently very high. In this context, KM becomes a strategic issue since it takes an active part in the maintenance and improvement of Internal Audit performance.


In this paper we present our vision of Knowledge Management (KM). Then we introduce our teaching program in a Master's degree specializing in Information Technologies (IT) Audit. Next, we emphasize the interest of developing KM in an Internal Auditing Service. Finally, we discuss the key success factors for introducing a KM approach.

2. Our vision of knowledge management

KM is often looked at from a technological viewpoint, which leads to considering knowledge as an object and disregarding at the same time the importance of people. To avoid this drift, the CCRC ECRIN Working Group (2001) defined KM as follows: "Knowledge Management is management of the activities and processes that enhance the utilization and the creation of knowledge within an organization, according to two strongly linked goals, and their economic and strategic dimensions, organizational dimensions, socio-cultural dimensions and technological dimensions: (i) a patrimony goal and (ii) a sustainable innovation goal". This definition implies three postulates: (i) a company's knowledge includes two main categories of knowledge, (ii) knowledge is not an object, and (iii) knowledge is linked to action.

2.1 Company's knowledge includes two main categories of knowledge

Within a company, knowledge consists of explicit knowledge on the one hand, comprising all tangible elements (we call it "know-how"), and tacit knowledge (Polanyi, 1966) on the other hand, which includes the intangible elements (we call it "skills"). The tangible elements are formalized in a physical form (databases, procedures, plans, models, algorithms, analysis and synthesis documents) and/or are embedded in automated management systems, conception and production systems and in products. The intangible elements are inherent to the individuals who bear them, either as collective knowledge (the "routines" – non-written individual or collective action procedures (Nelson & Winter, 1982)) or as personal knowledge (skills, crafts, "job secrets", historical and contextual knowledge, environmental knowledge – clients, competitors, technologies, socio-economic factors). See Figure 1.

Figure 1: Company’s knowledge


2.2 Knowledge is not an object

Knowledge lies in the interaction between an interpretative framework (incorporated within the head of an individual, or embedded into an artefact) and data. This postulate is based on the theories developed by Tsuchiya (Tsuchiya, 1993), who deals with the construction of tacit individual knowledge. According to his research, tacit knowledge, which lies within one's brain, is the result of the meaning one allocates, through one's interpretative schemes, to the data that one perceives as part of all the information received. This individual knowledge is tacit and may or may not be expressed. It becomes collective knowledge as soon as it is shared by other individuals whose interpretative schemes are "commensurable", i.e. schemes that enable a minimal common level of interpretation shared by all members of the organization.

2.3 Knowledge is linked to action

From a business perspective, knowledge is created through action. Knowledge is essential to the business's functioning and is given its purpose through the business's actions. Hence, one has to be interested in the activities of the actors (the decision-makers) engaged in the processes that make up the company's missions. This vantage point underlies our use of the concept of knowledge, which cannot be separated from the individual placed within the company, his/her actions, decisions and relations with the surrounding systems (people and artefacts).

3. Teaching KM in a master's degree specialised in information technologies audit

According to our KM definition ("Knowledge Management is management of the activities and processes that enhance the utilization and the creation of knowledge within an organization, according to two strongly linked goals, and their economic and strategic dimensions, organizational dimensions, socio-cultural dimensions and technological dimensions: (i) a patrimony goal and (ii) a sustainable innovation goal"), KM does not exist as a stand-alone discipline. It must be integrated into all the activities of the organization, notably in Information Technologies Audit. The purpose of the speciality is to educate future professionals in audit and consulting methodology and techniques in an Extended Corporate Information System environment. The objective is for them to become able to:
- handle the conceptual framework of the IT audit methodology;
- contribute to the evolution of the IT audit methodology;
- apply this methodology to the various IS fields, using the potential of ICT;
- address recommendations, and write and present audit reports;
- initiate the consulting aspect of the profession;
- manage IS risks;
- audit the techniques, the management and the development of information systems;
- and apply this method to other related professions such as the "MOA" (the French equivalent of the business analyst or key user).
The courses of the speciality are therefore divided into three main axes:
- courses related to the main scientific topics of the field: audit methods, fundamental audits, ERP audit, IS security;
- courses related to the surrounding fields, such as organisational management, IS governance, the Extended Corporate Information System, behavioural aspects, law and IS, knowledge management, communication aspects, and the new security regulations;
- a three-month internship (in an Internal Audit department).


The objectives of the KM courses are to provide insight into the fundamental principles of Knowledge Management, to introduce the issues of capitalisation of knowledge assets within the company, to position Knowledge Management, to suggest a methodological framework allowing the appropriate preliminary studies for a KM project to be set up, and to highlight the contribution of the KM approach to Information Technologies Audit.

4. The interest of developing KM in an internal audit department: The key success factors

In order to introduce KM in internal audit departments, a research group called "A KM approach for an internal audit director" was created on behalf of the IFACI, with the participation of internal auditors. KM is not an objective in itself; it is a way to improve quality and knowledge (know-how and skills) and to limit the loss of knowledge. Internal auditing is "an independent, objective assurance and consulting activity designed to add value and improve an organization's operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control, and governance processes."

4.1 The interest of developing KM in an internal audit department

For an internal audit department, KM is a way to meet its objectives, in so far as it contributes to:
- reduce the risk of losing crucial skills;
- support the team when employees are retiring, resigning or moving to other services within the company;
- reduce mission length and costs in the long term;
- improve the auditors' competencies;
- increase the entity's reactivity;
- strengthen the quality of services and deliverables;
- promote creativity and innovation;
- and increase training quality.
It would be wrong to consider KM as no more than tools management, documentation management, or information diffusion and archiving. KM is a wider process, which implies in particular the identification of tacit knowledge, the explicitation of this knowledge, and/or the implementation of knowledge transfer techniques.

4.2 The key success factors for introducing a KM approach

The success of a KM project in an internal audit department depends on several factors. Below are listed the important points on which, we think, stress must be laid, considering the innovative dimension of this kind of project. These key factors raise some problems, which are summarised in Table 1.

4.2.1 Take the double dimension of knowledge into consideration

Knowledge Management takes tacit as well as explicit knowledge into account. KM is generally seen as the management of explicit knowledge (deemed equivalent to information management), ignoring the tacit side of knowledge. Our view of KM, however, highlights the importance of the tacit part of knowledge. It shows the interest of promoting the exchange and sharing of tacit knowledge on the one hand, and its conversion into explicit knowledge on the other hand, so as to extend the field of knowledge that can be managed under industrial property rules.


4.2.2 Obtain the executives' support

In a KM approach, it is important to increase the executives' awareness and to obtain their support, given the innovative nature of the approach and its managerial impact. In order to obtain this support, it is necessary to highlight the strategic role of KM and to integrate the approach into the global strategy of the entity. Indeed, KM is not an end in itself, but a way to improve the entity's performance. One way to obtain the executives' support is to communicate regularly on the work through key performance indicators, which point to the value added by a KM approach.

4.2.3 Link the approach to the internal audit department's performance

The effectiveness of the internal audit department can be measured, among other things, by answering questions such as:
- How does internal audit take part in the global strategy of the entity?
- Are human resources set in accordance with the processes in place in the internal audit department?
- Are the processes in place reactive to the new needs of the activities?
The implementation of KM contributes to answering the previous questions, notably through:
- better performance in process analysis (time saving, effectiveness, cost reduction…);
- greater relevance of the audit plans that are developed;
- favourable conditions for innovation and the creation of new knowledge;
- improved quality of the documentation, deliverables and services provided by the internal audit department;
- the identification of key persons carrying crucial knowledge;
- support in cases of mobility, retirement or lay-offs;
- better visibility of the knowledge held by the internal audit department, which allows greater reactivity;
- time saving in drawing up recommendations;
- and a better answer to training needs.

4.2.4 Involve the audit team

The aim of this approach is that Knowledge Management becomes a reflex. Having the auditors involved and obtaining their support is therefore an essential key success factor. The following elements must be taken into account to achieve this goal:
- describe the objective of Knowledge Management and present all the advantages related to this approach;
- explain the approach by precisely detailing each step;
- define the way to "capture" knowledge (direct/indirect transfers…);
- create a climate of confidence between members;
- encourage team members to work together;
- involve all auditors, without exception, in the project;
- define a precise role for each auditor;
- promote each team member's role and behaviour;
- establish a culture of creativity, sharing and cohesion;
- include in the recruitment process the values necessary for Knowledge Management;
- ensure a favourable environment for information circulation;


- know how to assess the project's progress and follow up key performance indicators;
- train the auditors in the tools supporting the project;
- and take into account the difficulty of implementing such an approach.
Furthermore, considering the intangible nature of knowledge, the following elements must be specifically taken into account:
- collegially identify critical knowledge in order to share it;
- define the vocabulary through a glossary;
- limit divergences of interpretation;
- communicate on ambiguous critical knowledge;
- and do not focus only on knowledge related to the "auditing" process.

4.2.5 Adapt the approach to the culture and the context

Throughout the steps of Knowledge Management, it is essential to adapt the approach to the culture and the context of the entity. Certain elements must be taken into account:
- give the project workable objectives by setting precise limits;
- avoid the "big bang" approach;
- develop the approach on a pilot process;
- take into account and include the existing processes;
- adapt the project to meet the entity's needs;
- promote a way of transcribing tacit knowledge (company memory, story-telling or feedback…).

4.2.6 Include Knowledge Management in existing approaches

It is essential that KM is not simply added to existing processes but attached, as far as possible, to each of them, by taking advantage of a process modelling project, a quality approach or the new regulations (SOX, for instance).

4.2.7 Use tools adapted to the KM approach, and not the opposite

Knowledge Management will be more profitable if it is based on relevant and adapted tools. The entity must not be subjected to technical choices that do not meet its business needs. Therefore, it is essential to think about the Knowledge Management approach prior to determining the technological tools that will support it.

5. Conclusion

Following a short introduction to our view of KM and to the specific master's programme, we have put forward the issues of KM within internal audit departments. We have noted the students' interest in KM, which has led to in-depth discussions. During internal audits of company information systems, some students (30% to 40%) have taken the initiative of including KM elements whenever possible. Their tutors in the companies have turned their attention to KM, which has led to positive results. This situation led us to suggest a KM approach for an internal audit department to the group sponsored by the IFACI. The key success factors have been summarised in this document.


Table 1: Problems raised for promoting the key success factors

1) Take the double dimension of knowledge into account: How can tacit information be identified? How can the entity value the possibility of clarifying it? How can explicit information be managed?

2) Involve the executives: How can the strategic role of KM be highlighted? How can KM be included in the global strategy of the entity? How can the entity communicate regularly on the progress of the work performed, through performance indicators revealing the added value of the KM approach?

3) Link the approach to the performance of the internal audit department: How does internal audit take part in defining the global strategy of the entity? Are human resources in accordance with the processes of the internal audit department? Are processes reactive to new business needs? How can performance in process analysis be measured (time saving, efficiency, cost reduction…)? How can pertinence in establishing audit plans be valued? How can favourable conditions for innovation and the creation of new information be implemented? How can the contribution to the quality of templates and internal audit services be measured? How can people with crucial knowledge be identified? How can support in cases of mobility, leave or lack of activity be guaranteed? How can the entity be aware of the information possessed by the internal audit department, allowing it to be more reactive towards new activities/business? How can the entity save time in setting up recommendations? How can the entity meet its training needs?

4) Involve the audit team: How can the KM goal be described? How can the benefits of the KM approach be presented? How can the approach be explained by detailing all its steps? How can the way of "capturing" information be defined? How can trustworthy relationships among team members be implemented? How can team work be encouraged? How can all auditors, without exception, feel involved in the project? How can a precise role be offered to each auditor? How can the role and behaviour of each team member be promoted? How can a culture of creativity, sharing and cohesion be implemented? How can the values necessary to KM be included in recruitment? How can a favourable environment for knowledge sharing be ensured? How can the progress of the work performed be measured? How can performance indicators be followed? How can auditors be trained in the tools supporting the project? How can difficulties in implementing such an approach be taken into account? How can critical information be identified collectively in order to share it? How can the vocabulary be defined through a glossary? How can divergences of interpretation be limited? How can the entity communicate on critical but not explicit information? How can the entity be prevented from focusing only on information related to the "auditing" process?

5) Adapt the approach to the culture and the context: How can the project be shaped into feasible objectives using precise limits? How can the "big bang" approach be avoided? How can the KM approach be developed on a pilot process? How can existing approaches be included and taken into account? How can the project be adapted to the entity's needs? How can a way of transcribing explicit and tacit information be promoted (company memory, story-telling or feedback…)?

6) Include KM in existing approaches: How can KM be included in a quality approach? How can KM be included in management via a process approach? How can KM be included in a COBIT approach? How can KM be included in a SOX regulation approach?

7) Use tools adapted to the approach and not the opposite: How can the functionalities of a tool adapted to the KM approach be defined? How can this tool be chosen?

References
Chua, B.B. & Brennan, J. (2004). Enhancing Collaborative Knowledge Management Systems Design. In D. Remenyi (Ed.), 5th European Conference on Knowledge Management (pp. 171-178). Reading, UK: Academic Conferences Limited.
COBIT® (2000, 2002). Gouvernance, Contrôle et Audit de l'Information et des Technologies Associées. Translation of Control Objectives for Information and Related Technology, Information Systems Audit and Control (3rd Edition). Rolling Meadows, Illinois: IT Governance Institute. Translated into French by AFAI, the French Chapter of the Information Systems Audit and Control Association (ISACA). Paris: AFAI.
Cohen, D. & Prusak, L. (2001). In Good Company: How Social Capital Makes Organizations Work. Harvard Business School Publishing.
Coleman, D. (1999). Groupware: Collaboration and Knowledge Sharing. In J. Liebowitz (Ed.), Knowledge Management Handbook (Section IV, pp. 12-1 – 12-15). Boca Raton, Florida: CRC Press LLC.
Davenport, T.H. & Prusak, L. (1998). Working Knowledge: How Organizations Manage What They Know. Boston, MA: Harvard Business School Press.
Grundstein, M. & Rosenthal-Sabroux, C. (2004). GAMETH®, A Decision Support Approach to Identify and Locate Potential Crucial Knowledge. In D. Remenyi (Ed.), Proceedings 5th European Conference on Knowledge Management (pp. 391-402). Reading, UK: Academic Conferences Limited.
Huntington, D. (1999). Knowledge-Based Systems: A Look at Rule-Based Systems. In J. Liebowitz (Ed.), Knowledge Management Handbook (Section IV, pp. 14-1 – 14-16). Boca Raton, Florida: CRC Press LLC.
Moore, C.R. (1999). Performance Measures for Knowledge Management. In J. Liebowitz (Ed.), Knowledge Management Handbook (Chapter 6, pp. 6.1-6.29). Boca Raton, Florida: CRC Press LLC.
Tsuchiya, S. (1993). Improving Knowledge Creation Ability through Organizational Learning. International Symposium on the Management of Industrial and Corporate Knowledge, ISMICK'93 Proceedings. Compiègne, France: UTC-IIIA.
Wensley, A.K.P. & Verwijk-O'Sullivan, A. (2000). Tools for Knowledge Management. In C. Despres & D. Chauvel (Eds), Knowledge Horizons: The Present and the Promise of Knowledge Management (Chapter 5, pp. 113-130). Woburn, MA: Butterworth-Heinemann.
Wiig, K. (2004). People-Focused Knowledge Management: How Effective Decision Making Leads to Corporate Success. Burlington, MA: Elsevier Butterworth-Heinemann.


Knowledge Management in the Extended and Virtual Enterprises: A Review of the Current Literature

Enrico Scarso, Ettore Bolisani and Maria Rita Arico'
University of Padua, Vicenza, Italy
[email protected]
[email protected]
[email protected]

Abstract: The models of the extended and virtual enterprise (EE/VE), which are increasingly attracting the attention of scholars and practitioners, represent new modalities of organisation of inter-firm relationships aimed at increasing flexibility and responsiveness in a global competitive environment. The functioning of those structures has important implications in terms of Knowledge Management (KM). Actually, the EE/VE forms originate from the fact that the single firm is not capable of internally developing and managing all the knowledge needed to be competitive in the global economic environment, and hence it must resort to external sources of knowledge. This raises several questions, i.e.: how can knowledge assets be managed in an inter-organisational context? How should inter-organisational knowledge transfers be effectively organised and managed? What are the opportunities and limitations of ICT applications for this? In principle, the literature of Knowledge Management offers important suggestions to face such questions. However, the studies of KM have originally developed in relation to other contexts of application (namely, the problem of managing knowledge within the single firm). It is only recently that scholars have started to investigate the issue of inter-organisational KM, with particular reference to the so-called knowledge networks (i.e. inter-organisational solutions formed to create and share new knowledge). This paper presents a critical analysis of the recent literature on the overlapping issues of EE/VE on the one hand, and inter-organisational KM on the other. The main purpose is to identify notions, concepts, and approaches of the two areas of study that may be useful to develop a systematic approach to KM in the EE/VE models.

Keywords: Extended enterprise, Virtual enterprise, Knowledge management, Knowledge networks, Literature review

1. Introduction

Extended and virtual enterprises (EE/VE) are increasingly attracting the attention of scholars and practitioners, mainly because they represent new modalities of organisation of inter-firm relationships aimed at increasing flexibility and responsiveness by combining different competencies and capabilities. Those structures have important cognitive implications, since they rest on an intense exchange of information and knowledge among the firms involved. This is the reason why they are indicated as knowledge-based organisations, and as such they have to be managed. This raises several questions, i.e.: how can the common pool of knowledge be managed? How should inter-firm knowledge flows be favoured? What are the opportunities and limitations of ICT applications for this? In principle, the literature of Knowledge Management (KM) offers important suggestions on this matter. However, this discipline originally developed in relation to other contexts, namely managing knowledge within the single firm. Only in recent times have KM scholars started to deal with the issue of managing knowledge across organisations, with particular reference to the so-called knowledge networks, i.e. inter-organisational arrangements formed to favour the creation and sharing of knowledge among different firms. It is from these studies that several indications useful for managing EE and VE as knowledge-based organisations can be drawn. This paper presents a critical analysis of the recent literature on the overlapping issues of EE/VE on the one hand, and inter-organisational KM on the other, with the aim of identifying notions, concepts, and approaches of the two areas of study that may be used to develop a systematic approach to KM in the EE/VE models. More specifically, section two analyses the new models of extended and virtual enterprises as they are illustrated and developed in the current literature. Section three examines the recent field of study of knowledge network management, seen as an extension of the original KM literature. Section four proposes a comparison of the two fields of study in terms of how they treat the topic of "inter-firm network management". The final section draws some remarks on the


possible implications for research, especially in terms of integration of the two distinct but converging areas.

2. Extended and virtual enterprises as knowledge-based organizations

Extended Enterprise and Virtual Enterprise are common terms in the recent literature indicating new models of inter-enterprise relationships. Such models have different origins but share some common features in their definition. EE refers to a long-term form of coordination between firms involved in the value chain of a specific product or family of products, each of them focused on different phases of production or different components of the product. One firm (usually the one which sells goods and services to the final market) has the role of coordinator of the whole group (Childe 1998). In the VE model there is also a central firm, which creates temporary links between independent organisations with the aim of developing specific products or services characterised by short life-cycles (Jagdev and Thoben 2001). Different from the EE, this is a project-oriented structure, which may dissolve once its target is reached (Halaris et al. 2003), even though it could be reconstituted for new projects or market opportunities. What is of relevance here is that both definitions recall aspects that underline the centrality of inter-organisational knowledge exchange. As a matter of fact, EE and VE indicate groups of firms characterised by different core competencies, which co-operate with the purpose of combining efficiency and production flexibility. Firms can achieve this aim by means of an intense exchange of information and knowledge supported by ICTs (Browne and Zhang 1999). Some authors explicitly indicate EEs/VEs as knowledge-based organisations, in which distinct firms gather and exploit knowledge using explicit efforts and, whenever possible, planned processes of transfer (Kinder 2003; Preiss 1999). Thus, knowledge is generally indicated as the primary resource in an EE/VE structure, in which member firms participate in the development and delivery of goods and services with their specific knowledge and skills, in order to satisfy customer needs and to innovate in product, process, and managerial practice (O'Neill and Sackett 1994; Preiss 1999; Venkatraman and Henderson 1998). The mechanisms for knowledge exploitation and exchange in EE/VE structures are seen from two main perspectives. The organisational view points to human aspects in knowledge exploitation and learning (Beesley 2004). The emphasis is put on the specific organisational solutions that are required to facilitate the management of knowledge assets and the inter-organisational knowledge flows (Preiss 1999). The literature highlights that the practical implementations of EE/VE organisations are based on frequent exchanges of ideas, proposals, technical data, and other informational contents across business units, firms, and value chains. Co-design/co-development teams and other formal/informal structures are commonly used to implement collaborative knowledge-creating processes. The technological view focuses on the role of ICTs (e.g., shared databases, extended ERP systems, web-based services, etc.) as the principal tool for knowledge sharing within an EE/VE (Zhuge 2005; Yoo and Kim 2002; Lillehagen and Karlsen 2001). In this view, the problem of knowledge codification arises as a critical issue. In other words, the technology can provide links that enable the effective sharing of data, but the communication of context-related knowledge requires a shared system of meaning to be understood and applied (Swann et al. 1999).
The cognitive implications of the implementation of EE/VE models also relate to the kind of coordination mechanisms among economic activities. As mentioned, the “return on investment” in such structures is the possibility for each firm to concentrate on its own knowledge base and “let to the others” the development of complementary competencies that are essential to the functioning of the whole value chain. In accordance to the classic Williamson’s categorisation of markets vs. hierarchies (Tuma 1998), both EE and VE are considered intermediate forms between market and hierarchy (although with distinctions), which has important implications from a knowledge-based perspective. First, the EE structure represents an attempt to develop a further integration of pre-existing supplier-customer relations through a long-term ICT-based partnership (Jagdev and Thoben 2001). Mechanisms of coordination are based on directions from a leading enterprise, and this makes the


EE a substantially stable and "quasi-hierarchical" structure. In the VE model, instead, greater flexibility and more temporary relations are permitted, thus establishing an environment where market transactions can also partially occur (Goranson 2003). In both cases, the role of a "principal" firm, capable of co-ordinating the other trading partners, is clearly recognizable (Childe 1998; Lefebvre and Lefebvre 2002). From a cognitive viewpoint, this activity requires tasks such as: "mapping" the knowledge scattered among dispersed organisations; managing transfers of relevant information among nodes in a trustworthy environment; setting common rules and language standards; and regulating exchanges with the external environment. Related to this, another important issue is that of trust. The effective functioning of a network of firms such as an EE/VE requires limiting the risks of opportunistic behaviour by participating partners. The reduction of information and knowledge asymmetries among parties is seen as a way to build a trustworthy environment (Spekman and Davis 2004). Finally, it is often possible to identify different stages of development and management of an EE/VE (Davis and O'Sullivan 1999; Tsai et al. 2004; Stanescu et al. 2002) that involve and require different processes of knowledge transfer. The implementation stage of an EE/VE implies the issues of partner selection, subdivision of work, setting common rules, etc. This requires the "sharing of a vision" and agreement on common goals between participating firms, which generally implies sharing knowledge contexts and direct contacts between managers and teams. Once the structure is built, the design of the new product needs an effort to combine previous projects, past experience, and tacit competencies, and to convert them into technical designs and specifications. Finally, the operations stage requires the exchange of explicit formats and contents to perform efficient transactions in the supply chain and implement just-in-time strategies: in doing this, the firms can take advantage of integrated information systems, whose implementation and use imply the pre-codification of knowledge into forms that can be treated by computers efficiently.

3. Inter-organisational knowledge management

KM is another important field of management where knowledge is regarded as a key competitive asset. In particular, this "emerging discipline" stresses that knowledge requires effective management with specifically designed approaches. The "new" discipline of KM has arisen in recent years, but the literature already counts an impressive number of contributions. Research in KM has shed light on the nature of organisational knowledge processes and on the issue of how to manage them. Most of the literature has focused on knowledge generated, transferred, and used within a single organization, while less work has been done to understand how to manage knowledge across organisations (Parise and Henderson 2001; Spring 2003). Nevertheless, as day-by-day practice shows, it is very unlikely that single firms can own or internally generate all the knowledge they require (Quintas et al. 1997; Bolisani and Scarso 2000). Hence, interactions with other companies for accessing a wider pool of knowledge resources are needed to gain and sustain competitive advantages (Blecker and Neumann 2000). The resulting knowledge networks (i.e. formal or informal relations to share knowledge, explore innovations, and exploit new ideas: Millar et al. 1997; Pyka 1997, 2002; Warkentin et al. 2001; Peña 2002; Inkpen and Tsang 2005) constitute a basic and distinctive feature of the current business environment. The recent studies of knowledge network management, which represent an effort to translate and re-frame KM notions and practices from internal environments to inter-organizational relationships (Seufert et al. 1999), are of great use in analysing knowledge processes involving various enterprises. This sort of "extended KM" raises more problematic issues than managing knowledge within the single firm. For instance, attempts to communicate meanings across the borders of the single firm may be difficult due to the lack of common goals, languages, values, and mental schemes. In particular, the cognitive distance, or gap, that separates knowledge sources and users makes the sharing of (really) useful knowledge difficult. Furthermore, the effective "functioning" of a knowledge network involves the subdivision of "cognitive tasks" and KM competencies among the participants. As the dimension of inter-firm networks and their importance to the competitive success of their members have been recognised by scholars and practitioners (Inkpen and Tsang 2005), the interest of the KM literature in networks has increased. The recent literature of knowledge


network management highlights some essential concepts and issues. Particular attention has been paid to the conditions that facilitate the effective transfer of knowledge among members, in that networks have the main goal of providing firms with access to valued knowledge. Nevertheless, effective knowledge sharing inside a network cannot be taken for granted: many factors, in fact, affect this process. First, the nature of the exchanged knowledge is a central point. As is well known, there is a substantial difference between tacit and explicit knowledge in terms of transferability. In principle, explicit knowledge can be transferred without any kind of limit, while tacit knowledge requires the existence of a shared interpretative context (i.e. a common language, shared mental models, and so on). Furthermore, the kind of exchanged knowledge is strictly linked with the nature of the task involved, since a more structured task requires less tacit knowledge to be performed. As a direct consequence, the knowledge gap that separates network members is of importance. More specifically, this gap has a double and conflicting effect. On the one hand, the larger the gap, the more useful the knowledge exchange. On the other hand, the smaller the gap, the easier the exchange of tacit knowledge. Thus, a trade-off is required between the wideness and richness of the network knowledge pool (which favour the generation of new ideas) and its heterogeneity (which hinders knowledge circulation inside the network). Another question is that inter-organizational arrangements for nurturing knowledge sharing can become sterile and even pathological when trust between partners is lacking and conflict is poorly managed (Panteli and Sockalingam 2005). As many scholars affirm, trust is at the heart of knowledge exchange, since it influences the extent and the nature of the knowledge shared. The issue here is that, since different kinds of trust exist, there must be a correspondence between the cognitive aims of the network and the kind of trust to be developed. Another point is that trust generally evolves over time, and consequently the knowledge potential of the network may change. Also, the network structure matters. Structure is characterised by elements such as size, organisation, and stability. Several studies state that size affects knowledge transfer processes significantly. When size increases, "structural holes" grow and sub-networks emerge (Hislop 2005). This creates "knowledge islands" that separate some members from others (a minimal illustration is sketched at the end of this section). As regards network organisation, this aspect is generally seen in relation to the different roles played by the various members. Knowledge transfers require one (or some) knowledge sources and receivers, and also one or more knowledge brokers, whose job consists in favouring the knowledge transaction (Scarso et al. 2006). Furthermore, the success of a knowledge network depends on the presence of critical players such as sponsors, managers, etc. In fact, the network must be managed, and this requires the subdivision of both the cognitive and the managerial work. Lastly, the stability of the network primarily affects and requires the development of trust (Panteli and Sockalingam 2005). More specifically, "superior" levels of trust, needed to exchange more complex and tacit knowledge, can be reached only after a certain time of cooperation. A less considered but still critical aspect concerns the economic side of the question.
As in any other exchange, in a knowledge exchange there are individuals or organisations that take economic advantage from the process, and they (or others) might be requested to pay for it. A key question here is that the sum of benefits and costs has to be distributed as fairly as possible. In some cases, economic incentives are needed to motivate people to contribute to the knowledge network. A last point concerns the technical tools used to enable the sharing of knowledge inside the network, generally denoted as Knowledge Management Systems (KMS). Even if they are not a panacea for knowledge transfer problems, KMS are nevertheless vital to the effective working of a knowledge network, but have to be designed and implemented properly (Holsapple 2005). To sum up, several issues affect the sharing of knowledge inside a network of firms. In particular, different networks show different features in terms of the knowledge processes they enable and


their effectiveness. Hence, the dynamics of knowledge transfer vary across different network types (Inkpen and Tsang 2005), and this has notable implications, as analysed below.
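As a purely illustrative aside (our own sketch, not drawn from the works reviewed here), the idea of "knowledge islands" mentioned above can be made concrete by treating the network as an undirected graph of knowledge-sharing links and computing its connected components. The minimal Python sketch below uses invented member and link data; all names are hypothetical.

```python
# Minimal sketch (hypothetical data): detecting "knowledge islands", i.e. groups of
# network members that have no knowledge-sharing link to the rest of the network.

def knowledge_islands(members, links):
    """Return the connected components of an undirected knowledge-sharing graph."""
    # Build an adjacency list from the (symmetric) sharing links.
    adjacency = {m: set() for m in members}
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)

    islands, visited = [], set()
    for start in members:
        if start in visited:
            continue
        # Collect every member reachable from 'start'.
        component, frontier = set(), [start]
        while frontier:
            node = frontier.pop()
            if node in component:
                continue
            component.add(node)
            frontier.extend(adjacency[node] - component)
        visited |= component
        islands.append(component)
    return islands


if __name__ == "__main__":
    firms = ["A", "B", "C", "D", "E"]                      # hypothetical network members
    sharing_links = [("A", "B"), ("B", "C"), ("D", "E")]   # hypothetical knowledge flows
    print(knowledge_islands(firms, sharing_links))
    # Two islands emerge: {A, B, C} and {D, E}; knowledge does not circulate between them.
```

In such a reading, a knowledge broker corresponds to a member whose removal would split one component into two.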

4. EE from a KM perspective

As illustrated, the two areas of study considered here were developed to meet different goals, and have their roots in different fields. The literature on EE/VE models examines the "new" organisational configurations for manufacturing and delivering products and services, while the literature on knowledge network management focuses especially on the mechanisms and processes for sharing knowledge among firms. Our analysis highlights that the two areas have a number of points in common that, in our view, are particularly important, namely:
- the main object is, in both cases, inter-organisational networks allowing co-operation between (at least partly) autonomous companies;
- the focus is on how these networks can be managed effectively, and on what the fundamental roles and mechanisms for this are;
- knowledge is seen as a crucial element in the management of networks, not only (as is obvious) from a KM viewpoint but also (as is less obvious) in the new EE/VE models;
- finally, ICT applications are deemed to play a vital role in the development and functioning of inter-firm networks, which raises essential issues of their selection, design, and implementation. Although there is awareness that ICTs cannot solve all the problems, their importance is recognised in both literatures.
Having said that, the two areas of study have several diverging aspects that sometimes represent complementary views but, in other cases, are contrasting points. We attempt to examine such critical topics, focusing on the implications in terms of "knowledge and its management" (Table 1).

Table 1: Comparison between EE/VE models and knowledge networking

Purpose. EE/VE models: respond to particular operational or economic needs. Knowledge networking: transfer and share knowledge.
Configuration and structure. EE/VE models: subdivision of competencies and tasks to reach a trade-off between stability and flexibility. Knowledge networking: "cognitive" nature and heterogeneity of the network, knowledge flows between nodes.
Key roles. EE/VE models: leading firm. Knowledge networking: sources and users of knowledge; knowledge brokers.
Network relationships. EE/VE models: co-ordination of operational tasks and economic transactions. Knowledge networking: nature and characteristics of knowledge transfers.
Role of knowledge. EE/VE models: knowledge as "competence" or "know-how" for performing tasks in the network. Knowledge networking: knowledge as the object of transfer and management.
Managerial processes and key performances. EE/VE models: adaptation of traditional managerial models to the inter-firm environment. Knowledge networking: processes of knowledge transfer and management.
Issues of ICT design and management. EE/VE models: (direct) connection between information systems. Knowledge networking: tools for exchanging and sharing multidimensional knowledge contents.

4.1 Purpose of networking

The EE/VE models are seen as networks built to respond to particular operational or economic needs (i.e. how to design, manufacture and deliver a product in the global market). Knowledge networks, instead, focus on the main purpose of "how to transfer and share knowledge"; this is of course ultimately aimed at business, but as a sort of "consequent effect" that is generally not analysed directly in economic terms. This distinction has important implications in terms of selection, roles of members, and division of tasks (see below).


4.2 Configuration and structure In the EE/VE literature, the main question is what structure reaches a trade-off between stability and flexibility for competing in a highly dynamic environment. From a knowledge-based perspective, this means focusing on the firms’ core competencies and know-how that are needed to the functioning of the whole network, and how to subdivide and co-ordinate tasks accordingly. In the studies of KM, what characterises the structure of the network is its size (in terms of “amount” and heterogeneity of knowledge) and the links between nodes (in terms of sources and users of knowledge).

4.3 Key roles In an EE/VE, there is generally one firm that co-ordinates the network. Generally, this role is seen in terms of either operational tasks, or co-ordination of economic transactions within the network. In both cases, there are implications in terms of managing knowledge, but this is generally seen as a secondary aspect. In the KM literature, roles of members are identified in terms of their function in a process of knowledge transfer (i.e.: source, receiver, broker or intermediary).

4.4 Network relationships The EE/VE literature often focuses on the mechanisms of co-ordination of both operational tasks and economic transactions. For instance, crucial questions in those studies are if member firms have long-term or temporary relationships, if the network is based on market vs. hierarchical transaction mechanisms, etc. In such cases, the economic issues (e.g.: transaction costs) are considered more relevant than the cognitive aspects. The relationships in a KM network are essentially defined by the nature and characteristics of knowledge transfers between members.

4.5 Role of knowledge In an EE/VE model, knowledge is essentially intended as “competence” or “know-how” that a firm possesses about a particular market, activity, industry, technology, etc. In KM networks, knowledge is the main object of transfer between firms; it is this transfer that directly determines the value of the relationship (and of the whole network).

4.6 Managerial processes and key performances The (dynamic) efficiency, intended as the capability to respond to the external competitive pressures, is the main performance required to an EE/VE organisation. To achieve this, a significant effort is made to adapt the classic managerial schemes and models (e.g.: scheduling, task allocation, planning procedures, etc.) to the new inter-firm environment. KM studies, instead, focus on the design and implementation of methods and practices to manage knowledge transfer (i.e.: identification of appropriate tools for different kinds of knowledge, subdivision of tasks in KM processes, etc.)

4.7 Issues of ICT design and management In EE/VE organisations, the main concern is how to connect information systems of different firms; the emphasis is thus on extended ERP, EDI applications, and similar technologies. From a knowledge-based viewpoint, the central issue is thus the codification of knowledge in forms that can be directly handled by computers. In the case of knowledge networks, the literature considers the broader problem of transferring different kinds of knowledge through ICTs. Here, a critical aspect is how and when computer applications can underpin the sharing of tacit forms of knowledge.

5. Discussion and conclusions, implications for research The analysis proposed provides a number of interesting insights. First, the two fields of study focus on similar questions (i.e.: how to manage inter-firm networks) with a number of differences. These different perspectives can be seen as complementary to one another. Indeed, a network can be considered to be a “multi-level web of firms”. In other words, at the upper level we have cognitive


relationships, while at the bottom level there are operational interconnections. The analysis of the connection between those two layers can also be an interesting point for further research on inter-firm networking. However, as the literatures examined show, there is little integration between the two fields. Even though knowledge is considered an essential asset in both areas of study, the focus, languages, and practical research approaches that are used are in some cases significantly different. An effort of homogenisation may still be necessary. Also, there are some key issues for inter-firm network management that still require further study from both viewpoints. Here, we address the issue of the practical economic aspects of network management. For instance: what are the benefits of networking for the single firm and for the entire network? How can they be measured? Also, it is important to note that building, managing, and maintaining a network is a complex task that requires both infrastructural investments (i.e. physical assets) and intangibles (i.e. investment in individual and organisational learning, standard setting, etc.). How can costs be subdivided between members? What is the role of the leading firm in the network? The focus of the EE/VE literature is not far from these issues, but they are treated on the basis of either transaction cost analysis or traditional accounting schemes. The literature on knowledge networks has, instead, not yet reached a significant level of analysis of the economic measurement of knowledge management activities. An effort of further research is thus required.

6. Acknowledgements This paper contributes to the FIRB 2003 project “Knowledge management in the Extended Enterprise: new organizational models in the digital age”, funded by the Italian Ministry of Education, University and Research.

References Beesley, L. (2004) “Multi-level complexity in the management of knowledge networks”, Journal of Knowledge Management, Vol.8, No.3, pp71-100. Blecker, T. and Neumann, R. (2000) “Interorganizational Knowledge Management — Some Perspectives for Knowledge oriented Strategic Management in Virtual Organizations”, in: Knowledge Management and Virtual Organizations, Yogesh Malhotra (Ed), Idea Group Publishing, Hershey – London, pp63–83. Bolisani, E. and Scarso, E. (2000) “Electronic communication and knowledge transfer”, International Journal of Technology Management, Vol. 20, Nos.1/2, pp116-133. Browne, J. and Zhang, J. (1999) “Extended and virtual enterprises-similarities and differences”, International Journal of Agile Management Systems, Vol.1, No.1, pp30–36. Childe, S.J. (1998) “The extended enterprise-a concept of co-operation”, Production Planning & Control, Vol.9, No.4, pp320-327. Davis, M. and O’Sullivan, D. (1998) “Communications technologies for the extended enterprise”, Production Planning & Control, Vol.9, No.8, pp742-753. Goranson, H.T. (2003) “Architectural support for the advanced virtual enterprise”, Computers in Industry, Vol.51, No.2, pp113-125. Halaris, C., Kerridge, S., Bafoutsou, G., Mentzas, G. and Kerridge, S. (2003) “An Integrated System Supporting Virtual Consortia in the Construction Sector”, Journal of Organizational Computing and Electronic Commerce, Vol.13, No.3/4, pp243-265. Hislop, D. (2005) “The effect of network size on intra-network knowledge processes”, Knowledge Management Research & Practice, Vol.3, pp244-252. Holsapple, C.W. (2005) “The inseparability of modern knowledge management and computerbased technology”, Journal of Knowledge Management, Vol.9, No.1, pp42-52. Inkpen, A.C. and Tsang, E.W.K. (2005) “Social capital, networks and knowledge transfer”, Academy of management Review, Vol.30, No.1, pp146-165. Jagdev, H.S. and Thoben, K.D. (2001) “Anatomy of enterprise collaborations”, Production Planning & Control, Vol.12, No.5, pp437-451.


Kinder, T. (2003) “Go with the flow-a conceptual framework for supply relations in the era of the extended enterprise”, Research Policy, Vol.32, No.3, pp503-523. Lefebvre, L.A. and Lefebvre, E. (2002) “E-commerce and virtual enterprises: issues and challenges for transition economies”, Technovation, Vol.22, pp313-323. Lillehagen, F. and Karlsen, D. (2001) “Visual extended enterprise engineering and operationembedding knowledge management and work execution”, Production Planning & Control, Vol.12, No.2, pp164-175. Millar, J., Demaid, A. and Quintas, P. (1997) “Trans-organizational Innovation: A Framework for Research, Technology Analysis & Strategic Management, Vol.9, No.4, pp399-418. O’Neill, H. and Sackett, P. (1994) “The extended manufacturing enterprise paradigm”, Management Decision, Vol.32, No.8, pp42-49. Panteli, N. and Sockalingam, S. (2005) “Trust and conflict within virtual inter-organizational alliances: a framework for facilitating knowledge sharing”, Decision Support Systems, Vol.39, pp599-617. Parise, S. and Henderson, J.C. (2001) “Knowledge resource exchange in strategic alliance”, IBM Systems Journal, Vol.40, No.4, pp908-924. Peña, I. (2002) “Knowledge networks as part of an integrated knowledge management approach”, Journal of Knowledge Management, Vol.6, No.5, pp469-478. Preiss, K. (1999) “Modelling of knowledge flows and their impact”, Journal of Knowledge Management, Vol.3, No.1, pp36-46. Pyka, A. (1997) “Informal networking”, Technovation, Vol.17, pp207-220. Pyka, A. (2002) “Innovation networks: from the incentive-based to the knowledge-based approaches”, European Journal of Innovation Management, Vol.5, No.3, pp152-163. Quintas, P., Lefrere, P. and Jones, G. (1997) “Knowledge Management: a Strategic Agenda”, Long Range Planning, Vol.30, No.3, pp385-391. Scarso, E., Bolisani, E. and Di Biagi, M. (2006) “Knowledge intermediation”, in Encyclopedia of Knowledge Management, David Schwartz (Ed), Idea Group, Hershey, PA, pp360-367. Seufert, A., von Krogh, G. and Bach, A. (1999) “Towards knowledge networking”, Journal of Knowledge Management, Vol.3, No.3, pp180-190. Spekman, R.E. and Davis, E.W. (2004) “Risky business: expanding the discussion on risk and the extended enterprise”, International Journal of Physical Distribution & Logistics Management, Vol.34, No.5, pp414-433. Spring, M. (2003) “Knowledge management in extended operations networks”, Journal of Knowledge Management, Vol.7, No.4, pp29-37. Stanescu, A.M., Dumitrache, I., Curaj, A., Caramihai, S.I. and Chircor, M. (2002) “Supervisory control and data acquisition for virtual enterprise”, International Journal of Production Research, Vol.40, No.15, pp3545-3559. Swann, J., Newell, S., Scarbrough, H. and Hislop, D. (1999) “Knowledge management and innovation: network and networking”, Journal of Knowledge Management, Vol.3, No.4, pp262-275. Tsai, C.Y., Tien, F.C. and Pan, T.Y. (2004) “Development of an XML-based structural product retrieval system for virtual enterprises”, International Journal of Production Research, Vol.42, No.8, pp1505-1524. Tuma, A. (1998) “Configuration and coordination of virtual production networks”, International Journal of Production Economics, Vol.56-57, No.1, pp641-648. Venkatraman, N. and Henderson, J.C. (1998) “Real strategies for virtual organizing”, Sloan Management Review, fall 1998, pp33-48. Warkentin, M., Sugumaran, V. and Bapna, R. (2001) “E-knowledge networks for interorganizational collaborative e-business”, Logistics Information Management, Vol.14, No.1/2, pp149-162. Yoo, S.B. 
and Kim, Y. (2002) “Web-based knowledge management for sharing product data in virtual enterprises”, International Journal of Production Economics, Vol.75, No.1, pp173-183. Zhuge, H. (in press, 2005) “Knowledge flow network planning and simulation”, Decision Support Systems.


Information Broker Approach for Management Information Systems in SME

Christian-Andreas Schumann and Claudia Tittmann
University of Applied Sciences Zwickau, Germany
[email protected]
[email protected]

Abstract: The generation of the business achievements, such as the products and services of SMEs, is increasingly realised by intra-organisational and intra-operational structures and by inter-company networks. The huge quantity of data generated in the business processes has to be extracted and collected purposefully. The organisational units are provided with useful data, according to a concept of position- and hierarchy-dependent information services, by special management information and knowledge transfer systems. Therefore, the organisational units are understood and developed as competence cells and as parts of one or several competence networks and clusters, respectively. Furthermore, the network architecture for the competence and information exchange, including a suitable information and business model, is created for the support and the automation of the processes. The result will be a Knowledge-Information-Broker as a special knowledge transfer system or tool for SMEs. That is why the complex processes and functionalities in the business divisions have to be analysed and described in relation to the database and information systems. The first level is the organisational and technical model of the knowledge and information exchange system. The content definition and benchmark model at the second level completes it. The third level is the application of the Knowledge-Information-Broker as a business approach for the exchange of information and knowledge as evaluable goods in intra-organisational and inter-company networks. The general concept of the Knowledge-Information-Broker is presented in this paper.

Keywords: Knowledge transfer, knowledge broker, semantic net, management information system

1. Information and knowledge within business processes

Knowledge management is an approach to improving organisational outcomes and organisational learning by introducing into an organisation a range of specific processes and practices for identifying and capturing knowledge, know-how, expertise and other intellectual capital, and for making such knowledge assets available for transfer and reuse across the organization (Wikipedia-KnowledgeManagement, 2006). The increasing application and impact of knowledge management is established through:
- the transformation of society, work, workers, organisations, products, and services into the knowledge society, knowledge work, knowledge workers, intelligent organisations, as well as intelligent products and services;
- the increasing acceleration of knowledge development, acquisition, storage, diversification, and distribution;
- the growing complexity of organisational systems, architectures, processes, and functions;
- the resulting reaction of management with initiatives such as Business Process Reengineering (BPR), Lean Management, Relationship Management, Risk Management, Intelligent Planning and Forecasting, etc.;
- the modern information and communication methods, systems, and technologies as catalysts and enablers of new organisational concepts and architectures as well as systems and tools (Maier, 2003; Dick and Wehner, 2001).
Knowledge Management is a complex approach characterised by socio-economic, psychological, organisational, and technical aspects (Fig. 1).


Figure 1: The influences on knowledge management (Dick and Wehner, 2001)

There is a clear commitment of management to knowledge management. Management has recognised that knowledge management has strategic relevance. Knowledge and information are rated among the most important resources (Fig. 2).

Figure 2: Importance of resources in enterprises (Telephone Survey, 2003)

But there are still a number of problems in introducing knowledge management in enterprises. In particular, the complex change of philosophy and strategy, the long introduction period, the huge volume of investigation required, as well as the interactions with the other systems and processes of the enterprise, complicate the implementation of knowledge management. These are compounded by a number of additional disturbing factors (Fig. 3).

Figure 3: Barriers to the implementation of knowledge management (Telephone Survey, 2003)

The state of the art in enterprise practice is that knowledge can be distinguished into two main streams: document knowledge and process knowledge. In the past, the majority of managers focused on document knowledge because it is easier to handle: it is fixed knowledge in the shape of documents. Document knowledge is indeed relevant, but the discovery of process knowledge should complete it. In this respect, information retrieval plays a secondary role; rather, the methodology of linking information units with process steps is essential (Zagos and Kiehne, 2003).


A business process is a recipe for achieving a commercial result. Each business process has inputs, a method and outputs. The inputs are a pre-requisite that must be in place before the method can be put into practice; when the method is applied to the inputs, certain outputs will be created (Wikipedia-BusinessProcess, 2006). Process knowledge therefore reflects not only defined facts, i.e. fixed information and data, but also the methodological knowledge needed for the design, implementation, and realisation of dynamic processes. The specific process knowledge of an individual specialist has to be transformed into general process knowledge and integrated with the detailed process knowledge of the other specialists. (Fig. 4)

Figure 4: Collaborative implementation and application of knowledge spaces (Zagos and Kiehne, 2003)
It is possible to optimise existing business processes and to use the process know-how to generate standard processes as best-practice solutions. The current task in knowledge management is to integrate document and process knowledge into business theory and practice. Network theory is important here; it is applied in the framework of networks of competence as a special kind of knowledge transfer network. (Schumann and Tittmann, 2004)

2. Process knowledge in relation to the competence networks
The significance of networks of competence can be derived from the trend that company structures are organised in an increasingly decentralised way. On the one hand, departments are legally spun off (Ortmann and Sydow, 2001); primarily non-core processes are separated. On the other hand, cooperation with suppliers, customers or even competitors increases, and enterprise networks are formed in this way. Short-term market relations are increasingly replaced by cooperative meshed structures. The enterprise-spanning cooperation processes bring about increasing integration; the consequence is the mutual dependence of the partners. This requires a stronger coordination of the transactions within the scope of the cooperation. Furthermore, the specialist areas of information management and knowledge management become more and more important (Nonaka and Takeuchi, 1995). The formation of competence networks implies increasing complexity through the multiplication of information and knowledge. Additionally, the business processes of the competence network yield new knowledge: process knowledge, which is useful for the whole network as well as for each competence partner. Although knowledge is not a material asset, it is of immense importance: the assets of enterprises (Daum, 2002) consist of circulating assets, fixed assets, financial assets, and intangible assets. The fourth category, intangible assets, comprises all immaterial enterprise values. The generic term for this is "goodwill"; it is also referred to as intellectual capital. In this context the total market value of an enterprise is determined as follows:
Market Value = Book Value + Intellectual Capital
Intellectual Capital = Human Capital + Structure Capital + Relationship Capital
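To make these relations concrete, here is a minimal worked example in Python, with invented figures rather than data from the paper, computing intellectual capital as the residual of market value over book value:

```python
# Minimal sketch of the valuation relations above; all figures are invented.
def intellectual_capital(market_value: float, book_value: float) -> float:
    """Intellectual capital as the residual of market value over book value."""
    return market_value - book_value

# A knowledge-intensive firm whose book value is only about 10% of its market value:
market_value = 500.0   # hypothetical, e.g. million EUR
book_value = 50.0
ic = intellectual_capital(market_value, book_value)
print(f"Intellectual capital: {ic} ({ic / market_value:.0%} of market value)")
```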


It remains relatively difficult to determine intellectual capital. Decreasing capital investment in material assets is the trend in enterprises (Weber and Hess, 2005); in knowledge-intensive industries the book value of enterprises sometimes amounts to only 10% of the market value (Daum, 2002). For this reason, the importance of a usable evaluation of information and knowledge increases permanently. Networks of competence (Fig. 5) are a solution for handling information and knowledge by providing information transfer and knowledge transfer. These networks organise and structure the competences of the partners and the associated knowledge.

Figure 5: Structure of competence networks
Important tasks for the competence network are the retrieval and benchmarking of knowledge as well as its use and deployment. From an information-technology point of view, the realisation of these tasks demands a knowledge management system for structuring the knowledge of the competence partners, processes, skills, projects, etc. For the competence network, a knowledge network should therefore be built up, based on semantic relations (Fig. 6). Indicators for benchmarking complete the relations in the knowledge network.

Figure 6: Knowledge representation by semantic graphs
Thus document and process knowledge can be saved, extracted, and utilised. The system has to supply the management with information about the success-critical areas of the enterprise and to realise the performance monitoring of processes. From the strategic point of view, managers are enabled to control assets from their own enterprise or from the other competence partners


in the network. This makes the configuration of assets for value chains much easier. Furthermore, the business processes can be controlled and benchmarked, and risk management can be planned.
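A knowledge net of the kind shown in Figure 6 can be sketched quite directly as a set of typed relations between nodes, annotated with benchmarking indicators. The following Python sketch is illustrative only; the node names, relation types and indicator values are invented, not taken from the paper:

```python
# Illustrative semantic net: (subject, relation, object) triples plus indicator annotations.
from collections import defaultdict

triples = [
    ("CompetencePartnerA", "has_skill", "LaserWelding"),
    ("CompetencePartnerA", "participates_in", "ProjectX"),
    ("CompetencePartnerB", "has_skill", "LaserWelding"),
    ("CompetencePartnerB", "has_contact", "CustomerY"),
    ("ProjectX", "requires_skill", "LaserWelding"),
]

# Benchmarking indicators attached to nodes (hypothetical values).
indicators = {"CompetencePartnerA": {"process_quality": 0.8},
              "CompetencePartnerB": {"process_quality": 0.6}}

# Index the net by (relation, object) for simple traversal.
index = defaultdict(list)
for subj, rel, obj in triples:
    index[(rel, obj)].append(subj)

def partners_for_project(project: str) -> list[str]:
    """Partners offering a skill the project requires, best process quality first."""
    skills = [obj for subj, rel, obj in triples
              if subj == project and rel == "requires_skill"]
    partners = {p for s in skills for p in index[("has_skill", s)]}
    return sorted(partners, key=lambda p: -indicators.get(p, {}).get("process_quality", 0.0))

print(partners_for_project("ProjectX"))  # -> ['CompetencePartnerA', 'CompetencePartnerB']
```

Such a traversal is one simple way in which the configuration of assets for value chains, as described above, could be supported by the knowledge net.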

3. Business process benchmarking of organisational and additional knowledge
The total knowledge in competence networks consists of the knowledge contributed by the competence partners involved in the network and, additionally, of knowledge extracted from business processes. The knowledge potential of every cooperation partner in the network, as well as the organisational knowledge of the network as a whole, increases iteratively with the number of executed cooperations and processes. This accretion of knowledge also strongly affects the knowledge value of every competence partner; evaluating this knowledge is therefore necessary. This requires three perspectives on the knowledge in companies to be examined:
• Intellectual capital brought into the network by the companies
• Intellectual capital created by the cooperation processes
• Intellectual capital taken out of the network by the companies.
Structured knowledge admission mapping and knowledge profiles are used for the acquisition and structuring of the newly integrated knowledge. These knowledge profiles consist of the components of the knowledge capital, i.e. human capital (e.g. number of co-workers, qualifications), structure capital (e.g. resources), and relationship capital (e.g. enterprise contacts). From the entirety of a knowledge profile, a subset of skill profiles can be created. These competence profiles support the management in detecting qualified cooperation partners. The imported knowledge and the newly developed knowledge of a competence network have to be processed from two aspects:
• Information identification by semantic search and expert search
• Performance-measure-based decision support.
Finally, there is another important issue: how to handle security and authorisation. Ideas for collecting and using knowledge are not enough; rules defining who has access to which information and knowledge have to be integrated into the knowledge network. Knowledge components of the competence partners are mapped into the knowledge net, the knowledge base of the competence network. Afterwards the knowledge is distributed and made accessible according to the authorisations that have been set. (Fig. 7)
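One possible way to capture such knowledge profiles, together with authorisation rules governing who may read which component, is sketched below. The field names, roles and access rules are hypothetical illustrations, not the system described in the paper:

```python
# Hypothetical sketch of a partner knowledge profile with authorisation-controlled access.
from dataclasses import dataclass, field

@dataclass
class KnowledgeProfile:
    partner: str
    human_capital: dict = field(default_factory=dict)         # e.g. staff numbers, qualifications
    structure_capital: dict = field(default_factory=dict)     # e.g. resources, process descriptions
    relationship_capital: dict = field(default_factory=dict)  # e.g. enterprise contacts

# Authorisation: which profile components each role may read (invented rules).
ACCESS = {"network_manager": {"human_capital", "structure_capital", "relationship_capital"},
          "cooperation_partner": {"structure_capital"}}

def visible_part(profile: KnowledgeProfile, role: str) -> dict:
    """Return only the profile components the given role is authorised to read."""
    allowed = ACCESS.get(role, set())
    return {name: getattr(profile, name) for name in allowed}

p = KnowledgeProfile("PartnerA",
                     human_capital={"engineers": 12},
                     structure_capital={"cnc_machines": 3},
                     relationship_capital={"key_customers": ["CustomerY"]})
print(visible_part(p, "cooperation_partner"))  # -> {'structure_capital': {'cnc_machines': 3}}
```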


Figure 7: Knowledge profiles within cooperation networks
The structure of the knowledge net follows the definition of the semantic net (Wikipedia-SemantischesNetz, 2006). A semantic net is a formal model of terms and their relations. Semantic nets were suggested as early as the 1960s by the linguist Ross Quillian as one possible representation form of semantic knowledge (Quillian, 1967). Semantic nets represent an optimal method for the representation of knowledge. The purposeful development of the models of terms is the basic condition for an efficient knowledge net (Hahn and Heeren, 2004) and fundamental for a working competence network. The concept of a semantic net is extended here by a detailed performance measurement system for the benchmarking of the knowledge. For a competence network this means that, first of all, a strategy of semantic relations and performance measures is created to support the future value processes. The performance measurement system working on the underlying semantic net should integrate the following dimensions, derived from the perspectives of intellectual capital:
• Benchmarking of company knowledge (knowledge profiles), including human capital, structure capital, and relationship capital
• Benchmarking of business processes and cooperation, including the company view, partnership view, and process view
• Risk evaluation, including measures for strategic objectives within the network.
The performance measurement system integrates aspects of the balanced scorecard concept (measurement of the company objectives) and of the knowledge balance (evaluation of the intellectual capital). Thereby, both dimensions, cooperation identification and process evaluation (Bergrath, 2004), are represented.


4. Architecture of a knowledge broker system for retrieval and distribution of information and knowledge within the network
Finally, to use the knowledge-based competence network, an implementation in an integrated system architecture is necessary. The architecture of the knowledge-based system for information distribution for the management of SMEs consists of three complexes. (Fig. 8)

Figure 8: Architecture of a knowledge broker system
The first level is the organisational and technical model of the knowledge and information exchange system. It is completed by the content definition and benchmark model in the second level. The third level is the application of the Knowledge-Information-Broker as a business approach for the exchange of information and knowledge as evaluable goods in intra-organisational and inter-company networks. The tools integrated in the system work in four directions: development-orientated, individual-orientated, utilisation-orientated and data-orientated (Seidel, 2003). The bottom level provides the fundamental services (IT and database services); for a working knowledge net, a well-planned and homogeneous IT infrastructure is essential. In the second level the knowledge structure is defined. This corresponds to the semantic meta-model, consisting of the following components:
• Definition of the structure of classes
• Relations between classes and their instances
• Authentication model
• Performance measurement model
• Triggers / workflows
• Definitions of expert searches, semantic searches, and reports.
The main focus should be set on the definition of the semantic class structure, because it defines the structure, knowledge, and cooperation processes of the competence partners within the network.
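The second-level semantic meta-model could, for instance, be expressed declaratively along the following lines. This is only an assumed sketch of how such a meta-model might be encoded; the class names, measures and query strings are invented:

```python
# Assumed sketch of a second-level semantic meta-model for the knowledge broker.
META_MODEL = {
    "classes": {
        "CompetencePartner": ["name", "location"],
        "Process": ["name", "owner"],
        "Skill": ["name", "level"],
    },
    "relations": [
        ("CompetencePartner", "has_skill", "Skill"),
        ("CompetencePartner", "executes", "Process"),
    ],
    "authentication": {"roles": ["network_manager", "partner", "guest"]},
    "performance_measures": ["process_quality", "cooperation_intensity"],
    "triggers": [{"on": "new Process", "do": "notify network_manager"}],
    "searches": {
        "expert_search": "partners WHERE has_skill.name = ?",
        "process_quality_report": "AVG(process_quality) GROUP BY CompetencePartner",
    },
}

def validate_relation(subject_class: str, relation: str, object_class: str) -> bool:
    """Check that a proposed relation instance conforms to the meta-model."""
    return (subject_class, relation, object_class) in META_MODEL["relations"]

print(validate_relation("CompetencePartner", "has_skill", "Skill"))  # True
```

Keeping the class structure at the centre of such a definition reflects the emphasis placed above on the semantic class structure as the backbone of the network.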


Finally, the third level enables the user to edit assigned knowledge areas, to navigate the knowledge purposefully and to access authorisation-dependent knowledge representations. Existing external knowledge can be imported through standardised interfaces (e.g. XML, CSV); this knowledge is integrated into the existing semantic structure by mappings. According to the information needs (e.g. of the management), the definition of semantic queries and expert searches supports the fast provision of knowledge on demand from the knowledge net. By using an information-knowledge-broker architecture, SME networks are enabled to evaluate existing knowledge and to create new knowledge, and it becomes efficient to use this knowledge in business processes.
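Importing external knowledge through a standardised interface such as CSV and mapping it onto the semantic structure might look roughly as follows; the column names and the mapping rule are assumptions made for illustration only:

```python
# Rough sketch of importing external knowledge from CSV and mapping it to semantic triples.
import csv
import io

# Hypothetical external export: one skill record per row.
external_csv = io.StringIO("partner,skill\nPartnerC,Hydroforming\nPartnerD,RapidPrototyping\n")

def import_skills(csv_file) -> list[tuple[str, str, str]]:
    """Map each row onto a (partner, has_skill, skill) triple for the knowledge net."""
    reader = csv.DictReader(csv_file)
    return [(row["partner"], "has_skill", row["skill"]) for row in reader]

new_triples = import_skills(external_csv)
print(new_triples)
# -> [('PartnerC', 'has_skill', 'Hydroforming'), ('PartnerD', 'has_skill', 'RapidPrototyping')]
```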

5. Summary
The enormous significance of intellectual capital for the market value of an enterprise requires the development and use of knowledge-based systems that make appropriate information available to the management and support it in planning and executing value-relevant processes. Since enterprises unite their competences in networks, the concept is extended to a performance measurement knowledge network. This system ensures optimised knowledge for cooperation and the creation of value processes as well as the purposeful distribution of information. The focus is primarily not on knowledge quantity, but rather on identifying the optimal knowledge view for cooperation processes. Last but not least, enterprise success through the optimised use of the available intellectual capital of the company networks is essential for the re-engineering of enterprise processes.

References
Bergrath, A. et al. (2004) Wissensnutzung in Klein- und Mittelbetrieben, Wirtschaftsverlag Bachem, Köln.
Daum, J. H. (2002) Intangible Assets, Galileo Press GmbH, Bonn.
Dick, M. and Wehner, Th. (2001) Wissensmanagement: der Stand der Diskussion.
Hahn, O. and Heeren, R. (2004) Kompetenzplattform Mechanische Fügetechnik, Shaker Verlag GmbH, Aachen.
Maier, R. (2003) Wissensmanagement in Organisationen, Paper, INSITU, MLU Halle.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge Creating Company, Oxford University Press, New York / Oxford.
Ortmann, G. and Sydow, J. (2001) Strategie und Strukturation, Gabler Verlag, Wiesbaden.
Quillian, M. R. (1967) "Word concepts: A theory and simulation of some basic semantic capabilities", Behavioral Science, 12.
Schumann, Chr.-A., Tittmann, C., Weber, J. and Wolle, J. (2004) "The Impact of Networks for Knowledge Transfer to the related Networks of Competence", VPP2004 Proceedings, TU Chemnitz.
Seidel, M. (2003) Die Bereitschaft zur Wissensverteilung, Deutscher Universitäts-Verlag, Wiesbaden.
Telephone Survey (2003) "Wissensmanagement in deutschen Unternehmen", LexisNexis Deutschland, Frankfurt/Münster, Untiedt Research, Hattingen, März/April 2003.
Weber, J., Hess, T. and Hachmeister, D. (2005) Controlling und Management von Intangible Assets, Gabler Verlag, Wiesbaden.
Zagos, A. and Kiehne, D.-O. (2003) Wirtschaftliche Perspektiven, Anwendungen und Chancen für das prozessorientierte Wissensmanagement, InTraCoM, Stuttgart.
Wikipedia-BusinessProcess (2006) http://en.wikipedia.org/wiki/Business_process, accessed 02/05/2006.
Wikipedia-KnowledgeManagement (2006) http://en.wikipedia.org/wiki/Knowledge_management, accessed 02/05/2006.
Wikipedia-SemantischesNetz (2006) http://de.wikipedia.org/wiki/Semantisches_Netz, accessed 06/05/2006.


Architecture for Effective Knowledge Creation in Educational Institutions and Dissemination Through Satellite Technology – An Indian Experience
S. Shanthi and V. C. Ravichandran
College of Engineering Guindy, Anna University, Chennai, India
[email protected]
[email protected]
Abstract: Indian education faces the challenges of both quantity and quality. To meet this challenge, the Indian Space Research Organization has launched EDUSAT, a satellite dedicated entirely to education and intended to serve several sectors of education. The recently tested Indian virtual learning programme delivered through EDUSAT, together with a survey of the educational channel of India's largest technical university at Chennai (the two working on two-way video and one-way video / two-way audio learning technologies respectively), warrants the implementation of a knowledge management programme in institutions of higher learning. A comparative analysis of the practices used to manage knowledge in the business sector and in institutions of higher learning further revealed that educational institutions in India have yet to take lessons from the business world in adopting knowledge management programmes to create, acquire, store, share, utilize, transfer and disseminate knowledge and to sustain their competitive advantage. This paper reviews the SECI framework to assess its role in enabling knowledge creation in educational institutions. Furthermore, a model architecture is envisaged for effective knowledge management in educational institutions, with special reference to knowledge creation, storage and the effective dissemination of knowledge to the required people, at the required time and in the required format through satellite technology.
Keywords: Knowledge management in education, knowledge creation, knowledge dissemination, satellite technologies in higher education

1. Education in India
Extending quality education to remote and rural regions is a Herculean task for a large country like India, with its multi-lingual and multi-cultural population separated by vast geographical distances and, in many instances, inaccessible terrain. The lack of adequate rural educational infrastructure and the non-availability of good teachers in sufficient numbers adversely affect the efforts made in education. 'Providing Urban Amenities in Rural Areas' (PURA) is essentially intended to address all these problems comprehensively. It is conceived around four types of connectivity, with the aim of speeding up the process of achieving total rural prosperity: physical connectivity, electronic connectivity, knowledge connectivity and economic connectivity. Efforts towards physical connectivity have been made continuously for a long period of time, and it will still take considerable time because it involves heavy capital and recurring budgets. Electronic connectivity is the essential infrastructure for knowledge transfer, which in turn drives economic prosperity (PoI). The focus of India is now on electronic connectivity, i.e. meeting the needs of education in India through technology, especially satellite technology.
1.1 Educational experiments
India has had numerous media ventures during the last five decades. The Indian Space Research Organization emerged with the INSAT series of satellites, and institutions of higher learning started playing a major role in providing educational content to be disseminated through these media ventures. Five Indian Institutes of Technology, four National Technical Teachers Training Institutes and the Indira Gandhi National Open University started creating software for technology education. The Consortium of Educational Communication, set up by the University Grants Commission, started making knowledge-ware available to people with the help of satellite technology. The Training and Development Communication Channel (TDCC) of INSAT, which provides the 'one-way video two-way audio' teleconferencing network for interactive training and education, the Higher Education Television Channel 'Vyas', and Gyan Darshan (GD) with its four operational channels help provide quality education to a larger population. The Indian space programme has promoted the development and application of space technology for the socio-economic benefit of the nation, initially through hired satellites, later through the indigenously built INSAT satellite system, and


most recently through EDUSAT, the satellite dedicated exclusively to education. EDUSAT has the capability to provide a C-band national beam, a Ku-band national beam and five Ku-band regional beams. This facilitates the imparting of education in the regional languages and meets the needs of the state governments. EDUSAT will supplement curriculum-based teaching, provide effective teacher training, enable greater community participation, increase access, strengthen education efforts and provide access to new technologies through a well-thought-out ground segment established in schools, colleges and other institutions. A wide range of technological possibilities is the most vital support system for enhancing the quality of education. Figure 1 shows the technical possibilities of EDUSAT, such as radio broadcast, TV broadcast, online education through the internet, computer connectivity and data broadcasting, talkback channels, audio-video interaction, voice chat over the internet, asymmetric internet through TVRO and video conferencing.

Figure 1: Technical possibilities of EDUSAT

1.2 Knowledge dissemination through satellite technology
Anna University is a highly renowned affiliating university which has brought into its fold about 225 self-financing engineering colleges, six government colleges and three government-aided engineering colleges located in various parts of the state of Tamilnadu in India. It offers higher education in engineering, technology and allied sciences. Anna University caters to the knowledge needs of the large student community in its fold in various ways, one of them being dissemination through its educational channel AU-TDCC. AU-TDCC (Anna University Training and Development Communication Channel) has been the first channel on EDUSAT since September 2004. It provides curriculum- and syllabus-based live interactive lecture series to its affiliated engineering colleges in Tamilnadu. This paper presents the results of two surveys, the first conducted among the students of engineering colleges in Tamilnadu who are users of the AU-TDCC channel. This survey studies the effectiveness of the channel in disseminating knowledge and the user requirements in terms of the quality of the content delivered, the most preferred teaching-learning process, the need for reuse of the knowledge disseminated, the existing knowledge storage mechanisms, the preferred mode of transmission and the preferred mode of knowledge reuse. The data presented in this paper are part of a larger survey research project conducted in 12 selected engineering colleges from the northern, southern, eastern and western regions of Tamilnadu. In the selected colleges, a survey was conducted among the students and faculty of all branches of engineering, together with in-depth interviews with the heads of departments and the principals of these institutions. Random sampling was adopted and questionnaires were distributed to the selected samples (in each branch and in each year of study). This paper considers the data of 300 student samples collected from one college each in an urban and a rural area (Chennai and Thuckalay respectively). The findings given in Figure 2 clearly show that knowledge disseminated from institutions with abundant knowledge sources to those that lack them is well received, as it contributes to a great extent to the whole teaching-learning process. The basic infrastructure to enable knowledge transfer through satellite technology is now available. At the same time, the results also draw attention to the need for refinement of the


whole system to facilitate the delivery of better quality content, mechanisms to store and reuse knowledge as and when required, and further use of various other modes of transmission.

Figure 2: Results of the survey on the educational channel
The survey results given above clearly show the need for more professionally managed knowledge to be disseminated. As is evident from the results, the content disseminated does not completely satisfy the requirements of the learners. The knowledge disseminated is not available for further use owing to the lack of proper storage or sharing mechanisms. The learner community prefers all formats and the internet mode for retrieval of knowledge as and when required. Although some of these issues are already being addressed to some extent, in an ad hoc manner, this needs to be systematised and done professionally. With the infrastructure ready, the knowledge content that is developed and disseminated now has to pass through proper stages. Knowledge will fuel economic prosperity, but for this to be realised a lot more needs to be done in terms of identifying knowledge resources and capturing, formatting, storing and effectively delivering them through the available media. Continuous feedback mechanisms to monitor the relevance and effectiveness of the delivery mechanisms have to be put in place.


1.3 Integrating the management of knowledge and media
The success of education in achieving its objective of providing high-quality education to all those who require it can only be ensured by properly managing both knowledge and media. The survey results discussed in Figure 2 lead to the following model, which aims to deliver quality content to the end user.

Figure 3: Integrated management of knowledge and media
• Knowledge requirement analysis and specification: a proper research study to analyse and specify the knowledge requirements
• Knowledge identification: identification of experts, or of expertise already captured, for the various domains of knowledge required
• Knowledge acquisition: the knowledge available with the experts is acquired and captured in a suitable format, with appropriate illustrations using graphics and other multimedia authoring tools
• Knowledge base: the knowledge acquired from experts internally, or from external sources on a mutual-sharing basis or for a cost, is properly organised and stored in the knowledge base so that easy and efficient storage is possible, which later enables effective retrieval and updates
• Knowledge dissemination through proper media management: user requirement analysis, selection of the delivery mechanism, and managing the media
• Closed-loop control: analysing the effectiveness of the delivery mechanism and analysing the changing user requirements, knowledge requirements and media access capabilities.
At a time when virtual classrooms, e-learning, two-way video, audio on demand and video on demand have started to supplement conventional education at universities, higher education still faces tremendous challenges. To provide the right knowledge to the right people at the right time, knowledge has to be managed in terms of identification, acquisition, storage, creation, sharing, transfer and dissemination across the universities. Towards achieving this objective, the implementation of knowledge management in institutions of higher learning as a key solution is discussed further in this paper.
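The closed-loop character of the model listed above can be sketched as a simple processing cycle in which feedback on delivery effectiveness flows back into the requirement specification. The stage names follow the list above, while the feedback threshold and content stubs are invented placeholders:

```python
# Minimal sketch of the closed-loop model above; the feedback rule is an invented placeholder.
def run_cycle(requirements: list[str], knowledge_base: dict, feedback: dict) -> list[str]:
    # Knowledge identification / acquisition: capture content for each required topic.
    for topic in requirements:
        knowledge_base.setdefault(topic, f"captured content for {topic}")
    # Knowledge dissemination through the selected medium (stubbed out here).
    for topic in requirements:
        print(f"disseminate {topic!r} via the satellite channel")
    # Closed-loop control: topics rated poorly by learners are re-specified next cycle.
    return [topic for topic in requirements if feedback.get(topic, 1.0) < 0.5]

knowledge_base: dict = {}
to_respecify = run_cycle(["Signals and Systems", "Thermodynamics"], knowledge_base,
                         feedback={"Thermodynamics": 0.3})
print("re-specify:", to_respecify)  # -> ['Thermodynamics']
```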

2. Knowledge and management
A business discipline called knowledge management has emerged that identifies, captures, organizes and processes information to create knowledge. Knowledge management is a conscious effort to get the right knowledge to the right people at the right time, so that people can share information and put it into action in ways that improve an organization's performance. The management of knowledge has to do with how people use information to solve problems. Knowledge management is also concerned with the sources of information that become knowledge and with how knowledge can be stored and retrieved for use by specific and particular clients. The existence of practices to manage knowledge in both the business and the educational sector is studied here with the help of in-depth interviews and surveys, in order to understand the current status and to arrive at an architecture that would help manage knowledge in institutions of higher learning. Knowledge management in one of the top five information technology companies of India is considered in order to understand the


practices to manage knowledge in terms of knowledge creation, sharing, utilization, transfer and dissemination. In-depth interviews with employees of the organization revealed that its top management encourages and rewards knowledge sharing, knowledge reuse and new knowledge creation through both culture and technology. The organization under study had practices such as established communities of practice, corporate yellow pages, the provision of talk rooms, after-action reviews, knowledge fairs, knowledge networks, knowledge repositories, a best-practices database, a lessons-learnt database, knowledge maps, an intranet and forums to share knowledge, thereby enabling employees to share and constantly create new knowledge. Culture plays a tremendous part in motivating employees to make tacit knowledge explicit and to engage in new knowledge creation.

2.1 Institutions of higher learning and knowledge management practices
To study the management of knowledge in institutions of higher learning, a short spectrum survey was conducted among the teaching faculty members of various government and private engineering colleges. To obtain a representative sample, the survey was conducted among the participants of an international conference, which had attracted faculty members from various parts of India, the Andaman and Nicobar Islands and a few universities from abroad. Random sampling was used to select 30 samples from the participants, comprising lecturers, assistant professors, professors and heads of departments. A questionnaire of 37 questions, aimed at studying the culture and technologies in place for knowledge management in their institutions, was administered to the samples. Questions were asked to find out whether there is a formal vision or strategy to manage knowledge, whether key expertise is captured in some form, the reuse rate of stored knowledge, whether there are rewards for sharing and reusing knowledge, and whether there are on-line communities of practice, a layout conducive to knowledge sharing, and a technological infrastructure to share knowledge. Figure 4 portrays the existing scenario in educational institutions. The results clearly indicate the absence of practices to capture best practices and lessons learnt, the absence of team working that integrates the various departments of the institution, and the lack of reward mechanisms to appreciate the reuse of knowledge. They also point to the low rate at which new knowledge is created in the institutions. This is a clear indication of the need for a formal knowledge management programme in institutions of higher learning. A sizeable number of institutions have yet to grasp the concept of knowledge management and to take full advantage of their knowledge potential; institutions of higher learning need to be re-configured to fit changing requirements. For the samples that reported that a knowledge management programme exists, formally or informally, to some extent, the correlation between the willingness to share knowledge and the creation of new knowledge is positive (0.561716). A positive relationship (0.350586) also exists between culture and the willingness to share. The results validate the need for a formal knowledge management programme in all institutions of higher learning, so that they can share, create and make the best use of their knowledge in sustaining their competitive advantage and improving their academic and research products and services.

3. Architecture for knowledge management in institutions of higher learning
Even if universities, research institutions and laboratories produce large amounts of knowledge, it will be of little use until the majority of the population actually receive and assimilate it; only then is the process of knowledge communication complete. The infrastructure required to disseminate knowledge to the large masses is very much available, but practices for managing knowledge at its own centres of creation are sorely lacking. Every institution needs to take lessons from the business world in identifying its own knowledge and in sustaining it for competitive advantage. India is now flooded with world-class institutions from every part of the world. In such a context, the nation needs to identify its own centres of excellence and sustain its competitive advantage to withstand the stiff competition.


Figure 4: Results of the survey on knowledge management practices

3.1 The SECI process for institutions of higher learning
An organisation creates knowledge through the interactions between explicit knowledge and tacit knowledge, called 'knowledge conversion'. Through the conversion process, tacit and explicit knowledge expand in both quality and quantity. There are four modes of knowledge conversion: (1) socialization (from tacit knowledge to tacit knowledge); (2) externalization (from tacit knowledge to explicit knowledge); (3) combination (from explicit knowledge to explicit knowledge); and (4) internalization (from explicit knowledge to tacit knowledge) (Nonaka and Takeuchi, 1995). Figure 5 shows the four modes of knowledge conversion and the evolving spiral movement of knowledge through the SECI (Socialisation, Externalisation, Combination, Internalisation) process. Each educational institution needs to build up its own framework in order to benchmark its progress. This will depend very much upon its particular sector of education and its individual strengths and weaknesses; establishing this will help it judge what will best drive its knowledge management programme and allow it to set standards and targets. All institutions already possess some level of knowledge management within their existing structure. This has to be identified, and institutions must commence a formal knowledge management programme. Prior to commencing one, it is helpful to perform a knowledge audit to measure the strengths and weaknesses of the institution. The audit will help the institution identify the inhibitors of internal knowledge sharing and


identify every factor relevant to managing knowledge as a source of competitive advantage. Every institution must strive to establish a strategy to identify its strengths and weaknesses, to enable new knowledge creation, to establish mechanisms to store, share, utilize and reuse knowledge, and to disseminate it to the larger community. The institution must build a strong culture that enables knowledge sharing and utilization, and define the skills and competencies necessary to create new knowledge. The supporting infrastructure should provide the right technologies as well as the right motivation and rewards package.

Figure 5: Knowledge spiral (Nonaka and Takeuchi, 1995)
Much of the education process is about building up tacit knowledge and making it explicit for the learners. The SECI process, as defined for organizations to convert tacit knowledge into explicit knowledge and back into tacit knowledge, suits the education process well, since the whole challenge lies in this conversion. Institutions of higher learning constantly create new knowledge through their various internal and external interactions, through their research and development activities and through the constant search for knowledge at various conferences, seminars and workshops. For knowledge educators, the most difficult thing is to learn how to transform tacit knowledge into explicit knowledge for learners' retrieval. Through discussions, meetings, chats, invited talks and interviews, institutional knowledge in terms of the curriculum development process, subject knowledge, the research process, the strategic planning process and also the administrative process can be passed on as tacit knowledge to the person(s) interviewing or engaged in the discussions. This tacit-to-tacit conversion can then be made explicit through informal codification. The knowledge can be converted into various formats such as PowerPoint presentations, audio files, video files, text files and web applications, and can be stored for further reference. Technologies such as the intranet, the internet, knowledge-sharing websites, groupware, list servers, knowledge repositories, databases and knowledge networks play their part in sharing the explicit knowledge. Tacit knowledge that has been converted into explicit form and stored in repositories is accessed by the academic community, who create new knowledge out of the assimilated knowledge and their own tacit knowledge and in turn store it for others' reference. The explicit knowledge found in such storage and sharing technologies is read, assimilated and made tacit again. The entire process of knowledge conversion can thus take place in the institution. The knowledge spiral suggested by Nonaka and Takeuchi (1995) can well be extended to institutions of higher learning to convert tacit knowledge into explicit knowledge and back into tacit knowledge. This process requires adequate leadership, culture and technology. Content created out of the knowledge conversion process can be disseminated through satellite and other technologies. Key expertise existing in both tacit and explicit forms needs to be captured and codified. Knowledge codification is the translation of explicit knowledge into some written or visual format; frequently the codified knowledge is stored within knowledge management systems or manuals. Individualized learning should be transformed into institutional learning by documenting the knowledge in the institution's knowledge repository, which can be a bank of best practices and lessons learnt. A well-established technological infrastructure to support the communication of knowledge should be present, and the layout of the institution has to be conducive to knowledge sharing, both formally and informally. Many programmes designed to exchange knowledge, such as knowledge fairs and exchanges, should be created. Reward systems have to be identified to promote team-based relationships, knowledge sharing, reuse and the creation of new knowledge. Measurement techniques have to be established to track knowledge management initiatives and their outcomes. Success and failure stories must be captured, as they are extremely


valuable for following best practices or for avoiding reinventing the wheel. Knowledge management roles have to be identified. The most important thing a knowledge manager can do is to identify the causes that prevent the sharing of knowledge and overcome them. The reward and recognition system should be consistent with the institution's mission and objectives. The architecture, infrastructure and culture used to manage knowledge should constantly be updated to keep people committed to knowledge sharing and creation. Knowledge communities are probably one of the best practical means of developing and leveraging tacit knowledge. A broad definition of a community of practice (CoP) is a group of people with a common interest who work together informally in a responsible, independent fashion to promote learning, solve problems, or develop new ideas (Storck and Hill, 2000; Wenger and Snyder, 2000). Within a CoP, people collaborate directly, teach each other, and share experiences and knowledge in ways that foster innovation. Institutions must create knowledge repositories. The purpose of a knowledge repository is to take knowledge embodied in documents, reports and articles and put it in a repository where it can be easily stored and retrieved. Tacit knowledge has to be codified and made available in these repositories. Institutions can set up knowledge centres, which are responsible for the addition, maintenance and review of documents placed within the knowledge repositories. These knowledge centres should also be responsible for the security and accuracy of the information and knowledge placed within the repositories. Personnel in these knowledge centres need to be responsible for setting up and maintaining the knowledge dictionary of the institution. The knowledge dictionary is very important, as it holds the key to searching and retrieving documents held within the knowledge repositories. The results of the surveys conducted among the students of various engineering colleges of Tamilnadu, the faculty members of institutions of higher learning and the business organization clearly necessitate extending the knowledge management concepts of the business sector to the education sector. The above discussion of the SECI process of knowledge creation finally leads to the much-needed detailed architecture for effective knowledge creation and management in institutions of higher learning. The architecture proposed below results in a knowledge grid in which an institution of higher learning interacts with various research and development institutions, industries and other academic institutions through high-bandwidth optical and satellite connectivity. In every institution the adoption of the SECI process will constantly result in the creation of new knowledge. The entire process of academics, research, administration and planning of the institution can be well managed through appropriate culture, technology, leadership, strategies and measurement mechanisms, as shown in Figure 6. Through the socialization, externalization, combination and internalization processes, there will be constant sharing and generation of new knowledge. Technology acts as a facilitator for knowledge storage and sharing among members of the academic community. Tacit knowledge acquired through research activities, academic-industry collaborations, administrative experience and planning is converted into explicit knowledge for further sharing and retrieval.
The transformation of tacit into explicit knowledge will lead to fruitful integrated assignments among the various departments of the institution. The explicit knowledge becomes tacit once again on assimilation of the various knowledge sources through internal and external interactions. The entire conversion process is suitably supported by an encouraging culture and an environment for sharing and reusing knowledge. Recognizing knowledge as the main driver of success, this architecture suggests a clearly formulated vision and strategy for knowledge creation, commitment to manage and create knowledge, valuing employees for their intellect and their capacity to create knowledge, motivating high levels of individual, team and institutional learning, creating a culture that facilitates knowledge creation, launching new knowledge-based products and services, creating a reward system that encourages knowledge sharing and transfer, using technology as a catalyst to support knowledge networks and communities of practice, and providing continued support by leaders to constantly spread the message of sharing and leveraging knowledge. The democratization of knowledge resources induces people at every level to contribute, to participate, to interact, to grow and to learn.
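A knowledge repository indexed through a knowledge dictionary of the kind proposed here could be sketched as follows; the document identifiers, controlled terms and entries are invented for illustration:

```python
# Sketch of a repository indexed by a knowledge dictionary (all entries invented).
repository = {
    "DOC-001": {"title": "Curriculum development process", "type": "best practice"},
    "DOC-002": {"title": "Laboratory accreditation lessons learnt", "type": "lessons learnt"},
}

# Knowledge dictionary: controlled vocabulary term -> document identifiers.
dictionary = {
    "curriculum": ["DOC-001"],
    "accreditation": ["DOC-002"],
    "quality": ["DOC-001", "DOC-002"],
}

def retrieve(term: str) -> list[dict]:
    """Look a term up in the knowledge dictionary and return the matching documents."""
    return [repository[doc_id] for doc_id in dictionary.get(term.lower(), [])]

for doc in retrieve("quality"):
    print(doc["title"])
```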


Figure 6: Knowledge management architecture for institutions of higher learning

3.2 Knowledge management in educational media centres
Once knowledge management concepts have been successfully implemented in institutions of higher learning, knowledge also needs to be managed at the educational media production centres for its effective dissemination. Technology can provide media software producers and managers with new ways to manage. Effective knowledge management requires a hybrid solution of people and technology: technology can provide the vehicles for the successful management of knowledge, the various forms of content and the dissemination modes. Information and communication technologies and other knowledge management tools make information easier to store, access and manipulate, and help in managing content from its inception right through the dissemination process.
3.2.1 Content management
Selecting and implementing a content management system will be one of the largest knowledge management projects to be tackled by many educational software production centres. Once content is generated, it should be a sharable learning object across all learner locations and across all platforms. Content can be generated in many ways: the first is the assimilation of a subject by an expert teacher through the study of many books and articles, leading to the generation of quality, creative content in a presentable form; the second is content prepared for a self-learning method; and the third is content extracted from a digital library and presented just-in-time to all the remote students and others who need it. In addition to traditional databases, it is recommended to integrate multimedia and database technology to allow


production centres to effectively store, index, retrieve and disseminate images, text, video, audio and any other multimedia files. The core of most content management system solutions is a central repository supported by a range of tools for manipulating and managing the content. A content management system will have many simultaneous users.

Figure 7: Life cycle of a content management system
Media servers: Production centres will have to systematise the production process for efficient management. A three-stage process can be adopted. The roughly produced content, created on the basis of formative research, is first placed on server 1, the development server. The following phase is testing: process research can be undertaken to find out the value of the content in terms of its utilization, usefulness and relevance, so that mid-course corrections, changes of emphasis or changes in the communication approach can be brought about. Content which fails to meet the standards is sent back to server 1; if it conforms to the standards and requirements of the region, language, format, etc., it is moved to the next server level, the test server. Content which has cleared this stage can then be pushed to the final server, the production server, from which it can be released for dissemination. Server 3, the production server, facilitates the dissemination process. A basic search allows the user to search by subject area, phrase, format or keyword. If users wish to contribute a resource to the multimedia database, they can do so by pressing the contribute button and supplying the required details. The content management system must be supported by adequate documentation for users, administrators and developers.
3.2.2 Document management
Any content management system must be associated with relevant documents explaining all the processes involved, through the stages of formative research, production, process research and summative research. A document management system provides easy and fast access to all the documents; it takes care of creating, storing, editing and distributing documents, and it facilitates the authorisation and authentication of users for specific documents, along with version control. Content management will prove effective only if media organisations adopt and maintain the following types of documents:
1. As-is document: a document stating the present requirements for content generation
2. To-be document: a document reflecting the expected production capacities
3. Document for technical requirements and other related issues
4. Document for testing conditions
5. Document for review and feedback assimilation
6. Document for further suggestions and improvements.
Document and content management products that so far have supported the information technology industry can be extended to educational software production centres for their effective and efficient management.
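The three-stage server process described above amounts to a simple promotion workflow in which content advances only when it passes the checks of each stage and is otherwise returned for rework. The acceptance criteria in this sketch are invented placeholders, not a production centre's actual standards:

```python
# Sketch of the development -> test -> production promotion workflow; criteria are placeholders.
def promote(content: dict) -> str:
    """Advance content from the development server towards the production server."""
    if not content.get("format_ok", False):
        return "development"     # rough content stays on the development server for rework
    if content.get("process_research_score", 0.0) < 0.7:
        return "development"     # fails process research at the test stage -> sent back
    return "production"          # cleared the test stage -> ready for dissemination

lecture = {"title": "Digital Signal Processing, Unit 3",
           "process_research_score": 0.85, "format_ok": True}
print(promote(lecture))  # -> 'production'
```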

4. Conclusion
With the help of survey results and in-depth interviews among the students and faculty of institutions of higher learning and of a business organization, this paper has proposed a detailed architecture to enable new knowledge creation and to implement knowledge management


in institutions of higher learning. The paper has also presented ways to manage knowledge in educational media production centres. The architecture developed for institutions of higher learning addresses the problems revealed by Figures 2 and 4. When high-bandwidth connectivity is established, linking all universities, research institutions and industries, the resulting greater need to manage knowledge effectively will be met by the proposed architecture. The goal of achieving excellence in the educational system largely depends on the implementation of knowledge management processes and practices in the institutions of higher learning, which are the sources of knowledge creation, acquisition and dissemination. The model architecture envisaged for effective knowledge management in educational institutions and in the educational media centres will prove effective and efficient in satisfying the dual demands on education in India, those of quality and quantity. The proposed architecture will enable effective knowledge creation and the dissemination of knowledge to the required people, at the required time and in the required format through satellite technology. For institutions of higher learning, both in India and elsewhere, this proposed architecture for knowledge management is particularly promising and appropriate.

References
Abdul Kalam, A. P. J. and Sivathanu Pillai, A. (2004) Envisioning an Empowered Nation: Technology for Societal Transformation, Tata McGraw Hill.
Armistead, C. and Meakins, M. (2002) "A Framework for Practising Knowledge Management", Long Range Planning, Vol. 35, pp 49-71.
Davenport, T. and Prusak, L. (1998) Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston.
Gamble, P. R. and Blackwell, J. (2001) Knowledge Management, Kogan Page, London.
Holsapple [Ed.] (2000) Handbook on Knowledge Management, Vol. I, Springer Verlag.
Kreiner, K. (2002) "Tacit Knowledge Management: The Role of Artifacts", Journal of Knowledge Management, Vol. 6, No. 2, pp 112-123.
Kasturirangan, K. (2003) Space Programme: India Forges Ahead, http://www.isro.org
National Knowledge Commission, http://www.knowledgecommission.org
Nonaka, I. and Takeuchi, H. (1995) The Knowledge Creating Company, Oxford University Press, New York.
Nonaka, I., Toyama, R. and Konno, N. (2000) "SECI, Ba and Leadership: A Unified Model of Dynamic Knowledge Creation", Long Range Planning, Vol. 33, pp 5-34.
O'Dell, C. and Grayson, C. J. (1998) "If only we knew what we know: Identification and transfer of internal best practices", California Management Review, Vol. 40, No. 3, pp 154-174.
President of India (PoI): Shri A. P. J. Abdul Kalam's speech, www.presidentofindia.nic.in
"Papers on Educational Sectors for EDUSAT Utilization", Development and Educational Communication Unit, ISRO, Ahmedabad, 2003.
"Paper on Distance Education", in Papers on Educational Sectors for EDUSAT Utilization, DECU/ISRO publications, December 2003.
Mukhopadhyay, M. (2003) "Educating the Nation: Need for a Dedicated Satellite", DECU/ISRO publications.
Pan, S. and Leidner, D. (2001) "Bridging Communities of Practice: The Pursuit of Global Knowledge Sharing", working paper, National University of Singapore.
Sohan Vir Chaudhary (2003) "Open Distance Learning in India: A Case for a Dedicated Educational Satellite", DECU/ISRO publications.
Soo, C., Devinney, D., Midgley, D. and Deering, A. (2002) "Knowledge Management: Philosophy, Processes, and Pitfalls", California Management Review, Vol. 44, No. 4, pp 129-150.


The Moderating Role of the Team-Leader in the Value of Knowledge Utilization: An Extension of Haas and Hansen's Situated Performance Perspective
Evangelia Siachou and Anthony Ioannidis
Athens University of Economics and Business, Athens, Greece
[email protected]
[email protected]
Abstract: This empirical study builds on the rationale behind the Situated Performance Perspective (that the effective use of organizational knowledge from outside the team is task-situated) and combines it with elements of prior research on knowledge transfer within teams in organizations. It examines the effective transfer of knowledge (personal or codified) from outside the team, embedded in different organizational sources, and the effective use of incoming knowledge within the team. In so doing, it considers the case of action teams that have to deal with unpredictable situations and thus have a greater need to obtain and use specific knowledge from outside the team within time constraints. Moreover, the framework views team-leaders as knowledge processors playing the dual role of recipients of knowledge acquired from outside the team and of sources of the knowledge required for their team to complete the tasks undertaken successfully. The results verify that the active participation of team-leaders in the knowledge transfer mechanisms leads to effective knowledge utilization.
Keywords: Team-leadership, knowledge transfer, knowledge use, action teams

1. Introduction
The transfer of knowledge is one of the organizational knowledge management processes (knowledge acquisition, creation, storage/retrieval and application) encouraged by many organizations. The transfer of knowledge within organizations can occur at various levels: between individuals, and within or between teams in an organization (Argote and Ingram, 2000; Rulke et al, 2000; Alavi and Leidner, 2001). At each of these levels, there is a relationship of knowledge exchange between a source and a recipient, which depends on the characteristics of everyone involved, that is, the willingness or unwillingness of the source to share knowledge and/or the recipient's absorptive capacity or lack of it (Szulanski, 1996). Although the ability to transfer knowledge within organizations promises success, as it has been found to improve performance, the effective transfer of knowledge remains a problem for many organizations (Szulanski, 1996; Hansen, 1999; Argote and Ingram, 2000; Cabrera and Cabrera, 2002). A plausible explanation could be the tendency of organizations to accept unquestioningly a uni-dimensional view of intra-organizational knowledge transfer. For instance, the advanced information technology (knowledge management systems) that many organizations employ extensively does support knowledge transfer, yet on its own it is not always adequate to transfer knowledge to the parts of the organization where it is needed and can be used effectively. Knowledge transfer yields maximum benefits when its multi-faceted nature is taken into account. Much research into organizational knowledge transfer indicates the complexity of this process, in that it distinguishes various factors with a significant impact on knowledge exchange. Moreover, research evidence shows the process can also be selective (Argote et al, 1990; Epple et al, 1991; Darr et al, 1995) and often task-situated (Haas and Hansen, 2005), its effectiveness depending on organizational situations. Among other factors, regular communication, personal acquaintance, and meetings as transfer mechanisms facilitate the knowledge transfer process (Darr et al, 1995), and mutual trust and influence increase the levels of knowledge sharing between groups and lead to beneficial organizational performance outcomes (Nelson and Cooprider, 1996). Strong ties also play a role in transferring knowledge between organizational subunits (Hansen, 1999), as do the motivational inclination of the source to share knowledge with others and the existence of a transmission channel (Gupta and Govindarajan, 2000).


2. An extension of Haas and Hansen's situated performance perspective
2.1 The nature of the problem
The Situated Performance Perspective proposed by Haas and Hansen (2005) expands prior research on the effects of knowledge transfer within teams on team-task performance, viewing organizational knowledge as an information good with practical value. It assesses the use of knowledge embedded in organizations (either personal or codified) (Badaracco, cited in Nelson and Cooprider, 1996: 411) by measuring task performance outcomes. Furthermore, it indicates that the conditions under which a task is undertaken influence the effectiveness of obtaining and using organizational knowledge from outside the team. Specifically, this perspective considers the level of the team's prior experience with the task and the competitive intensity of the task situation as two important conditions moderating the effects of obtaining and using the organizational knowledge from outside the team that is required to complete the team task successfully. Drawing on the Situated Performance Perspective, and aiming to extend prior research, an attempt is made here to examine another important factor suggested by Haas and Hansen (2005): the effect of the team-leader's role in the efficient use of organizational knowledge. Specifically, the purpose is to examine whether the level of the team-leader's participation in the knowledge transfer process within the team may affect the value of organizational knowledge. Generally speaking, team-leaders are expected to set up the conditions required to overcome the inherent difficulty of knowledge transfer (Davenport and Prusak, 1998; Rulke et al, 2000). To do this, they should have the ability to motivate team members to accept knowledge from outside the team, or be able to exploit knowledge resources outside the organization (Szulanski, 1996). In addition, the nature of the tasks that teams have to perform is a significant factor that determines the role of a team-leader in the exchange of organizational knowledge within the team. Action teams, comprising members with special skills who have to deal with unpredictable situations, have a greater need to obtain and use accurate and specific knowledge from outside the team, within time constraints, in a coordinated sequence of actions (Edmondson, 2003).

2.2 The theoretical framework When a knowledge transfer process is implemented by an organization, it is almost prerequisite that the appropriate organizational context supports such a process. In other words, organizations have, first, to create the appropriate culture that respects knowledge transfer and second, to establish suitable knowledge infrastructure that enhances and facilitates exchange of knowledge between and within their constituent functional parts. Although it is very difficult to pinpoint the proper organizational context, it is supposed that formal structures and systems (i.e. formal and informal communication channels), sources and coordination of expertise, familiarity with knowledge-oriented technologies and the behavior–framing characteristics of knowledge sharing could promote effectively the transfer of knowledge within organizations (Argote et al, 2000; Rulke et al, 2000). Reviewing the relevant literature (Darr et al, 1995; Gupta and Govindarajan, 2000; Alavi and Leidner, 2001), several mechanisms of knowledge transfer are isolated (i.e. personnel movement, training, communication, technology transfer, replicating routines). For the present study, it was decided to examine the knowledge transfer mechanism of personal formal and informal communication channels and tools (to obtain personal knowledge) as well as impersonal communication tools (to accumulate codified knowledge). Personal formal communication channels refer to scheduled communication activities (e.g. annual executives’ conferences, departmental meetings, ad hoc situational committees, training sessions and / or speak-up groups), whilst personal informal communication channels refer to similar (unscheduled) communication activities taking place informally (Nelson and Cooprider, 1996; Alavi and Leidner, 2001; Reagans and McEvily, 2003). Additionally, communication tools - either formal or informal - refer to scheduled or unscheduled e-mail exchanges and phone calls respectively. Finally, impersonal communication tools are the knowledge repositories in which the organizational knowledge is embedded, such as newsletters, intranet, textbooks and/or various web sites relevant to the tasks undertaken.


In an attempt to define the expected moderating effects of the team-leader's active participation in the knowledge transfer process, it is proposed that team-leaders act as knowledge processors: in this view, team-leaders behave as both the "source" and the "recipient" in a knowledge exchange procedure. Team-leaders are expected to support and promote the desired culture for an effective knowledge exchange within and between parts of the organization, exhibiting a number of characteristics which could affect the efficiency of the process of knowledge transfer. Based on Szulanski's characterizations, it is proposed that team-leaders, when playing the role of knowledge-recipients from outside the team, need to recognize the value of incoming knowledge, making any necessary modifications in order to complete the tasks undertaken (absorptive capacity) (Cohen and Levinthal, 1990; Szulanski, 1996; Gupta and Govindarajan, 2000) or, equally, to institutionalize the utilization of the incoming knowledge (retentive capacity) (Szulanski, 1996). Furthermore, following the same taxonomy, team-leaders in the role of knowledge-source have to manifest high levels of motivation in order to share important knowledge with the team members, to devote time and resources to support the transfer, and also to motivate the members of the team in terms of ongoing learning, accepting and using the incoming knowledge effectively. The implication here is that the characteristics that team-leaders have to manifest - in the role of knowledge-sources or of knowledge-recipients - affect the effectiveness of the transfer and the valuable use of the incoming knowledge within the team, and thus could be barriers or enablers accordingly when a knowledge exchange occurs. Following Haas and Hansen's (2005:26) taxonomy of organizational knowledge, obtaining and transferring knowledge through communication channels (either formal or informal) from outside the teams refers to personal knowledge, and obtaining and transferring knowledge from database systems outside the teams represents codified knowledge. These two types of knowledge - personal and codified - comprise knowledge utilization. It also has to be noted that the quality of incoming knowledge, together with its immediate transfer to the team members, determines the effectiveness of the knowledge transfer process. It is expected that the action level of the team-leaders (active or non-active participation) when knowledge exchange occurs will affect the performance outcome of the teams. In other words, the implication here is that an effective knowledge exchange may enable the members of the teams to make valuable utilization of the incoming knowledge acquired from outside the team, so as to complete their tasks successfully, yielding beneficial performance outcomes (Figure 1).

[Figure 1 depicts the framework as inputs (knowledge obtained through communication channels and tools), constraint variables, and expected outcomes.]

Figure 1: The fundamental role of the team–leader in the knowledge transfer process

3. Methods 3.1 Data collection and sample Data for this study were collected through a questionnaire survey. A total of 180 questionnaires were distributed to team-members and team-leaders of 30 teams in different organizations (i.e. financial services, manufacturing); 102 individuals returned completed questionnaires, yielding a response rate of 56.7%. The questionnaire was developed after an extensive review of Knowledge Management issues and comprised a total of 100 question items in seven subject-related sections, measuring a range of aspects of the effect of the team leader's role in the efficient use of knowledge within teams in organizations. The teams examined in this study are defined as action teams, whose members frequently face exceptional situations and, in order to deal with these successfully, have a great need to obtain and use accurate and specific knowledge from outside the team within time constraints (Edmondson, 2003). Moreover, all of the teams exhibit turnover homogeneity, meaning that their members work full-time together for a period of time and are fully aware of the knowledge existing among team-members. The presuppositions for a team to be included in the final sample were that three-quarters of its members responded (Ancona and Caldwell, 1992) and, moreover, that its team-leader participated in the research (teams whose team-leaders did not respond were excluded from the final sample).

3.2 Measures The proposed theoretical framework is tested through several question items, each of which represents a particular measurement in a multi-item scale (5-point Likert scale). However, given the large number of question items there is a need to condense them into subject-related sets. The Cronbach reliability coefficient is first used to indicate acceptable internal consistency and reliability of the question items, which are then factor analyzed using principal component analysis and a varimax rotation, giving concept-related factor solutions. In each case, factor solutions are also supported by inspection of the scree plot. The results of the principal factor analysis are summarized in Table 1. To determine the relationships within the inputs, constraint variables and expected outcomes (Figure 1) we use correlation analysis. Since this paper is about teams, the data for each team are grouped together (based on the average of the individual responses), finally yielding twenty cases corresponding to the 20 teams included in the final sample. Linear regression analysis is performed on quantified characteristics of the relation between the inputs and constraint variables. Specifically, the team-leaders' action level in the knowledge transfer process is measured in terms of their participation in communication activities, in seeking to obtain knowledge required from outside the team. Hence, both team-leaders and team-members are asked to indicate the extent to which they participated in, and made use of, (i) personal formal communication channels (a=0.92) and tools (a=0.44), (ii) personal informal communication channels (a=0.93) and tools (a=0.82), as well as (iii) the extent to which they obtained knowledge from various organizational sources (i.e. extracted knowledge through impersonal communication channels (a=0.99)). Similarly, two sets of question items concerning the dual role of team-leaders, either as "source" or as "recipient" of knowledge from outside the team, measure the fundamental role of the team-leader in the knowledge transfer process. Team-leaders indicate whether or not they felt each of the items was part of their responsibility when a knowledge exchange procedure occurs (a=0.86, a=0.81). For the same purpose, team-members indicate whether or not the leader of their team exhibited appropriate characteristics so as to behave as either knowledge source (a=0.95) or recipient (a=0.40) of the incoming knowledge from outside the team. Additionally, the effectiveness of the knowledge transfer process (i.e. the effective and valuable use of the incoming knowledge by the members of the teams) is tested, among other things, in terms of familiarity with the changes happening in the organizational environment, the flow and quality of new knowledge, obtaining knowledge within time constraints, as well as the uncertainty of finding the new knowledge that team-members may need in order to perform the tasks undertaken. Both the team-members (a=0.81) and team-leaders (a=0.87) were then asked to indicate whether or not they experienced each of the question items when a knowledge transfer process occurred. The analysis is also controlled for team size, team skills diversity and team cognition, which might impact the effectiveness of knowledge transfer within organizational teams. As team size reflects the maximum number of individuals involved in the team (Katz, 1982; Hackman, 1990; Haas and Hansen, 2005), respondents report the total number of members in their teams (including team-leaders). Moreover, as education is found to be one of several sources of knowledge that contribute to one's expertise (Dahlin et al, 2005:1108), the skills diversity of the teams is measured in terms of respondents' majors and their field of expertise, by asking them to choose their educational field from 11 listed items (e.g. Finance, Business Administration, Social Sciences, etc) appearing on the questionnaire, and to indicate further the level of qualification that they hold (e.g. BA, MBA, Ph.D). Finally, the influence of team cognition on the knowledge transfer process is controlled by asking both team-leaders and team-members to indicate whether or not they assimilate incoming knowledge easily and make effective and valuable utilization of new knowledge acquired (a=0.84).
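For readers less familiar with the reliability coefficient referred to above, the following is a minimal sketch of how Cronbach's alpha can be computed for a multi-item scale; the class name and the small response matrix are invented purely for illustration and are not drawn from this study's data.

```java
/**
 * Minimal sketch of Cronbach's alpha for a k-item scale:
 * alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores).
 * The response matrix below is invented purely for illustration.
 */
public class CronbachAlpha {

    static double variance(double[] values) {
        double mean = 0;
        for (double v : values) mean += v;
        mean /= values.length;
        double ss = 0;
        for (double v : values) ss += (v - mean) * (v - mean);
        return ss / (values.length - 1);   // sample variance
    }

    /** rows = respondents, columns = items of the scale */
    static double alpha(double[][] responses) {
        int n = responses.length, k = responses[0].length;
        double itemVarSum = 0;
        double[] totals = new double[n];
        for (int j = 0; j < k; j++) {
            double[] item = new double[n];
            for (int i = 0; i < n; i++) {
                item[i] = responses[i][j];
                totals[i] += responses[i][j];
            }
            itemVarSum += variance(item);
        }
        return (k / (double) (k - 1)) * (1 - itemVarSum / variance(totals));
    }

    public static void main(String[] args) {
        double[][] likert = {   // 4 respondents x 3 items on a 5-point scale
            {4, 5, 4}, {3, 3, 4}, {5, 5, 5}, {2, 3, 2}
        };
        System.out.printf("alpha = %.2f%n", alpha(likert));  // around 0.94 for this toy data
    }
}
```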

3.3 Results Correlation analysis of the relationships between team-leaders' and team-members' participation in the communication activities reveals a significant negative relationship between team-leaders' and team-members' participation in personal formal communication channels. A similar negative relationship exists between the use of personal formal communication tools made by the team-leaders and that made by the team-members. In contrast, correlation analysis points to a significant positive relationship between team-leaders' and team-members' participation in personal informal communication channels, as well as between team-members' and team-leaders' activities for accumulating knowledge from various organizational knowledge sources. Correlation results appear in Table 2. Concerning the relationships between what team-leaders felt as knowledge processors and what the members of their team indicated, correlation analysis supports some positive relationship between team-leaders' and team-members' opinions. The opinions of team-leaders and team-members concerning the role of the team-leader as recipient of knowledge from different organizational sources are also positively correlated. The results of the correlation analysis are shown in Table 3. Moreover, the results of the regression analysis used to investigate the possibility that team-leaders' participation in the communication activities (inputs) could be a good predictor of the effectiveness of the knowledge transfer process support this supposition. The team-leaders' participation in the personal formal communication channels is positively related to the effective and valuable use of the incoming knowledge by the team-members. Regression analysis also determines that the participation of the team-leaders in the informal communication channels and/or in the impersonal communication activities is positively associated with the effectiveness of the knowledge transfer process and could be treated as a predictor of the valuable use of the incoming knowledge within organizational teams. According to the additional results of the regression analysis, the relationship between the constraint variables and the expected outcomes of the proposed theoretical framework is positive, meaning further that team-leaders' characteristics, either as "source" or as "recipient" of the knowledge from outside the team, could facilitate the effectiveness of knowledge exchange within organizational teams. Table 4 summarizes the results of the regression analysis. As regards the control variables, for team size (which reflects the maximum number of members involved in a team) the relatively small final sample does not allow us to isolate significant differences or relationships concerning team size and its effect on the knowledge transfer process. Controlling for the impact of team skills diversity on the effectiveness of the knowledge transfer process within organizational teams, correlation analysis provides some significant negative relationship between team skills homogeneity and the effective and valuable utilization of the new knowledge by the members of the team (r = -0.47, DF = 17, p …)

Figure 5: The nested set model
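As a rough aid to reading Figure 5, the sketch below shows the standard nested set encoding that such a model typically relies on: every chunk carries a left and a right bound, and one chunk contains another exactly when its bounds enclose the other's. This is generic textbook code under that assumption, not the authors' implementation.

```java
/**
 * Minimal sketch of a nested set encoding for document chunks: each chunk
 * stores a left and right bound; a chunk A contains a chunk B exactly when
 * left(A) < left(B) and right(B) < right(A).
 */
public class NestedSetChunk {
    final String id;
    final int left, right;

    NestedSetChunk(String id, int left, int right) {
        this.id = id; this.left = left; this.right = right;
    }

    /** True if this chunk (e.g. a chapter) contains the other chunk (e.g. a paragraph). */
    boolean contains(NestedSetChunk other) {
        return this.left < other.left && other.right < this.right;
    }

    public static void main(String[] args) {
        NestedSetChunk chapter = new NestedSetChunk("chapter-2", 10, 19);
        NestedSetChunk paragraph = new NestedSetChunk("paragraph-2.3", 14, 15);
        System.out.println(chapter.contains(paragraph));  // true
        System.out.println(paragraph.contains(chapter));  // false
    }
}
```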

3.2 Chunk presentation in the document browser To present the pieces of information retrieved by the query expansion, we created a browser-based tool, which gives the user the possibility to operate on the result sets (see Fig. 6). The basic idea is to present the user not only a link to the referenced document, but a glimpse on the actual content. Realizations of this concept exist in several web search engines such as Google. Yet we extended this concept to collect implicit feedback. Ideally the document segments displayed have a high relevance for the query and contain the information needed by the user. In any case the user is able to rate the relevance of the result sets without having solely the document name to rely on. In addition to opening the dedicated document, it is possible to expand the view of the chunk to the left or right chunk, to read the whole chapter or to remove the chunk completely from the view or to minimize it. All of these operations on chunks can be used to collect information on the relevance of this chunk for the query. This information can eventually be used to re-compute the relevance of the chunk for the input query and adapt the results of further search requests on the same topic: simply closing the chunk will decrease the estimated relevance of a chunk; working with the chunk, like viewing the full chapter will increase it. Eventually, presenting the best fitting chunks, computed from the previous operations performed of the users, optimizes the result set for a given query. Naturally, this optimization is only useful when similar search tasks are often repeated like in call centers or first level support.


Figure 6: The document browser

3.3 Feedback collection As stated above, the document browser provides different operations on information chunks. They are the following:
- "Download": This operation is rated with 5µ, µ being the feedback constant. Downloading the whole document shows that the user is sure that it contains exactly the information he needs. This is especially the case when mobile information supply is concerned.
- "View whole document": Viewing the whole document is rated with 4µ. It shows that the user is interested in the content of the document and that he is not yet really sure that he will need the whole document. He therefore wants to be able to close some of the non-relevant chunks contained in the document.
- "View chapter": For the same reasons as stated above, this operation is rated with 3µ.
- "View paragraph": This operation is rated with 2µ.
- "Back": This operation can be positive or negative. The user re-establishing the last state of his browser window means that the last operation was wrong. Thus its feedback value is simply subtracted from the cumulated feedback value.
- "More": A rating of µ was assigned to this operation.
- "Less": This is the reciprocal operation to "more". It was thus accorded a rating of -µ.
- "Close": This is the reciprocal operation to all "view" operations. We thus mapped it to the average of the ratings of the view operations and accorded it a rating of -3µ.
The ratings of operations on a chunk are sequentially summed to compute a cumulated feedback value that is stored with the query in a feedback database along with the user context. After a period of data accumulation, a stored procedure for computing the new word similarity values is triggered.
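As an illustration of how such ratings might be accumulated, the following minimal sketch sums operation ratings into a cumulated feedback value, treating "Back" as undoing the previous rating as described above; the class and operation names, and the value chosen for the feedback constant µ, are our own assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

/** Minimal sketch of cumulating implicit feedback from chunk operations. */
public class ChunkFeedback {

    static final double MU = 1.0; // feedback constant µ (assumed value)

    enum Op { DOWNLOAD, VIEW_DOCUMENT, VIEW_CHAPTER, VIEW_PARAGRAPH, MORE, LESS, CLOSE, BACK }

    static double rating(Op op) {
        switch (op) {
            case DOWNLOAD:       return 5 * MU;
            case VIEW_DOCUMENT:  return 4 * MU;
            case VIEW_CHAPTER:   return 3 * MU;
            case VIEW_PARAGRAPH: return 2 * MU;
            case MORE:           return MU;
            case LESS:           return -MU;
            case CLOSE:          return -3 * MU;
            default:             return 0;   // BACK is handled separately below
        }
    }

    /** Sums operation ratings; BACK subtracts the rating of the previous operation. */
    static double cumulatedFeedback(List<Op> operations) {
        double total = 0;
        Deque<Double> history = new ArrayDeque<>();
        for (Op op : operations) {
            if (op == Op.BACK) {
                if (!history.isEmpty()) total -= history.pop();
            } else {
                double r = rating(op);
                total += r;
                history.push(r);
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // e.g. the user expands a chunk, views the chapter, then closes it
        double fb = cumulatedFeedback(List.of(Op.MORE, Op.VIEW_CHAPTER, Op.CLOSE));
        System.out.println("Cumulated feedback = " + fb); // 1µ + 3µ - 3µ = 1µ
    }
}
```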


4. The feedback module The feedback module is the key module of this tool. By learning the context-dependent concept-term relations, we ensure that the meaning of the concepts describing the task is implicitly learned by the system and that it thus returns relevant documents even when an empty query is sent to the system.

4.1 Topology of the neural network The query expansion activates sequentially the different layers of our system (concept layer, term layer and data layer). The retrieval path equivalent to an expanded query is a four-layer directed graph (see Fig. 7) that can be mapped to a feed-forward neural network with shortcut connections. Its topology is as follows:

Figure 7: Topology of a retrieval path
- The first layer represents the concepts describing the task to achieve.
- The second layer contains terms that are either similar to the concepts in the first layer or that were given as input and could not be matched to a concept. The weights of the connections between the first and the second layer are computed automatically using an analysis of the distribution of keywords of interest in a firm-internal data corpus.
- The third layer contains words that are similar to those in the second layer.
- The fourth layer is the data layer. We compute a vector representation of each of the information chunks using the distribution of the terms in the third layer in the "headline". The transpose of the term-document matrix is then used to connect chunks and terms.
As shown in Fig. 7, some documents that do not contain any of the terms in the original query can be retrieved (blue document in the figure). Such documents are ranked lower than documents that contain the query (yellow document). Neurons in the first layer can be linked to the neurons in the second (shortcut connections) or the third layer. The network generation was implemented using the JoOne library (http://www.jooneworld.com). Since this library cannot handle shortcut connections directly, because of the layer paradigm it uses to represent networks, we had to add temporary neurons to ensure correct connections between the neurons. All layers were modeled as linear layers.
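To make the activation spreading over these layers concrete, here is a minimal, self-contained sketch of a forward pass over such a four-layer retrieval path. The weight matrices and dimensions are invented for illustration, shortcut connections are omitted for brevity, and the sketch deliberately does not use the JoOne library on which the actual implementation relies.

```java
import java.util.Arrays;

/**
 * Minimal sketch of spreading activation through the four linear layers of a
 * retrieval path (concepts -> terms -> similar terms -> chunks). Weights and
 * dimensions are illustrative assumptions only.
 */
public class RetrievalPath {

    /** next-layer activation = previous activation multiplied by the weight matrix */
    static double[] propagate(double[] activation, double[][] weights) {
        int out = weights[0].length;
        double[] next = new double[out];
        for (int j = 0; j < out; j++)
            for (int i = 0; i < activation.length; i++)
                next[j] += activation[i] * weights[i][j];   // linear layers, no squashing
        return next;
    }

    public static void main(String[] args) {
        double[] concepts = {1.0, 0.0};                      // activate one task concept
        double[][] conceptToTerm = {{0.8, 0.2, 0.0}, {0.0, 0.5, 0.5}};
        double[][] termToTerm    = {{1, 0.3, 0}, {0.3, 1, 0}, {0, 0, 1}};  // word similarities
        double[][] termToChunk   = {{0.6, 0}, {0.4, 0.7}, {0, 0.9}};       // transposed term-document matrix

        double[] terms    = propagate(concepts, conceptToTerm);
        double[] expanded = propagate(terms, termToTerm);
        double[] chunks   = propagate(expanded, termToChunk);   // forward pass = query expansion

        System.out.println("Chunk activations: " + Arrays.toString(chunks));
    }
}
```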

4.2 Learning The learning phase consists of two passes: the forward and the backward pass. During the forward pass, the neural network is sequentially activated downwards (i.e. from the concepts to the documents, see Fig. 7). The forward pass is basically the query expansion phase, during which the query-dependent topology of the neural network is built through the activation spreading between the layers described above. The result of the forward pass is an activation value for the documents.


The backward pass is used to compute the new weights of the edges between the neuron layers; thus it implements the learning as such. It consists of activating the document layer with feedback-dependent values, propagating these activation values back into the neural network, and computing new edge weights from the difference between the activation values during the forward pass and the new activation values. The initial activation of the document layer for this pass consists of the feedback values generated by user interaction monitoring combined with the document activation computed during the forward pass:

activation_new = (1 + norm(feedback)) * activation_old,

where norm is a function that ensures that the cumulated feedback lies in the interval [-1, 1]. We chose the following adapted sigmoid function to achieve this:

norm(x) = (1 - e^(-x)) / (1 + e^(-x))

The norm of a negative value stays negative, the norm of a positive value stays positive, and the norm of 0 is 0. The feedback-dependent activation values are back-propagated in the neural network through the first three layers to finally alter the weights of the connections between the concepts and the terms. Using a standard learning technique to adjust the weights of the edges between the concepts and the terms adapts the document ranking of later queries in the sense of the user. We used the batch version of the standard back-propagation algorithm without momentum (Zell, 1997) to update the weights of the edges between the concepts and the terms. The altered weights of the connections between concepts and terms are stored back into the query expansion module.
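The following minimal sketch illustrates the feedback step just described: the norm function, the re-activation of a document with (1 + norm(feedback)), and a simple delta-style nudge of a single concept-term weight. The weight update shown is a deliberately simplified stand-in for the batch back-propagation actually used; all numbers are invented.

```java
/**
 * Minimal sketch of the feedback step: the document layer is re-activated with
 * (1 + norm(feedback)) * old activation, and the difference between old and new
 * activation is used as an error signal to nudge a concept-term weight.
 */
public class FeedbackLearning {

    /** norm(x) = (1 - e^-x) / (1 + e^-x), mapping cumulated feedback into [-1, 1]. */
    static double norm(double x) {
        return (1 - Math.exp(-x)) / (1 + Math.exp(-x));
    }

    static double reactivate(double oldActivation, double cumulatedFeedback) {
        return (1 + norm(cumulatedFeedback)) * oldActivation;
    }

    /** One illustrative delta-style update on a single concept-term edge. */
    static double updateWeight(double weight, double learningRate,
                               double termActivation, double error) {
        return weight + learningRate * error * termActivation;
    }

    public static void main(String[] args) {
        double oldAct = 0.6, feedback = 4.0;          // e.g. the user viewed the whole document
        double newAct = reactivate(oldAct, feedback); // larger than oldAct, since feedback is positive
        double error  = newAct - oldAct;              // propagated back towards the concept layer

        double w = updateWeight(0.16, 0.1, 1.0, error);
        System.out.printf("norm=%.3f newAct=%.3f updated weight=%.3f%n",
                norm(feedback), newAct, w);
    }
}
```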

5. Results The functionality described above has been fully implemented but has not yet been tested in a large setting such as a company. We tested the three main modules using the CLMET (Corpus of Late Modern English Texts) (see Fig. 8). The most important feature of our system being the learning of concepts, we tested it by trying to learn the concept "beauty". Using twenty learning iterations on three positive feedbacks with different learning rates, we had the following results:

Table 1: Learning the concept "beauty"

Learning rate | Beauty | Virtue                          | Land
0             | 1      | 0.16                            | 0.16
0.05          | 1      | 0.68                            | 0.16
0.1           | 1      | 0.68                            | 0.16
0.2           | 1      | 0.70                            | 0.16
0.5           | 1      | oscillation (chaotic behavior)  | oscillation (chaotic behavior)

The system learned "beauty" as being beauty + virtue. The similarity value for "land" stayed almost constant since no negative feedback values were used. For learning rates above 0.4 the system behaves chaotically and the learning depends heavily on the number of iterations. Smooth changes are reached for learning rate values around 0.1 (see Fig. 8). Since the retrieval is based on the vector space model, i.e. on the relative weights, searching for the concept "beauty" will now boost "beauty" and "virtue" and thus neglect "land". Our tests showed that the edges between concepts and keywords, which express the semantics of the concepts in the given context, get stronger with time. The absolute weights of such edges can be used to compute the relevance of a concept for a task, leading to an automatic determination of the semantics needed to complete it. Furthermore, empty queries retrieve contextually more relevant data with time. This realizes the goal we were aiming at: knowledge workers using the feedback functionality implicitly store their knowledge about the semantics needed to complete the knowledge-intensive tasks included in everyday business process instances. This allows (especially new) workers in the same role to retrieve relevant pieces of information even without any knowledge about the keywords necessary to find them. Furthermore, the system can adapt to time-related changes in the semantics of concepts simply by updating the weights of the edges between these concepts and terms. New concepts can be added to the description of a task by monitoring the activation of all concepts related to the terms activated in the second layer during the back-propagation phase and adding concepts that very often have an activation level above average in the given context. The query expansion without feedback had a 15% better precision. We are aware that the retrieval results are highly dependent on the models used (ontology, data set for similarity computing, etc.). We thus aim at testing our system with standard data sets such as PUBMED publications (http://www.pubmed.com) and standard ontologies such as the GALEN ontology (http://www.opengalen.com). The extraction of semantic information to create meaningful chunks is a difficult task, and depends heavily on the accuracy of the document creator regarding its structure. The extraction process for Word documents via the COM object model with a .NET program is still suboptimal because each call to this object is encapsulated; thus the computing time for one document can be immense. This field needs further research, with a lot of fine-tuning of the chunk-generating algorithms.

[Figure 8 consists of four line charts, one per learning rate (0.01, 0.1, 0.2 and 0.5), each plotting the similarity of "land" and "virtue" to "beauty" over 21 learning iterations.]

Figure 8: Learning the concept “beauty”

6. Discussion The work presented in this paper is related to the areas of query expansion, implicit feedback and document segmentation.

6.1 Query expansion Using thesauri or other forms of semantic knowledge to improve IR has been done in different settings, most of which try to expand user queries and/or document descriptions. One of the first experiments to evaluate the use of thesauri was carried out by Wang, Vandendorpe, and Evens (1985). The result of this experiment showed that the use of thesauri for query expansion can noticeably improve both precision and recall. Yet experiments conducted by Voorhees and Hou (1992) and Voorhees (1994) on subsets of the TREC collection showed that not only an improvement but also a degradation of precision or recall can result from using thesauri for IR. In more recent papers other authors like Taghva, Borsack and Condit (1998) showed that thesauri-aided retrieval does not considerably change either the recall or the precision of information retrieval engines. Our results show that a considerable improvement of precision can be achieved even on small data sets.


Some approaches use more formal semantic models. For example the ThyssenKrupp Community World project (Muehlhoff, 2004) exploits a distributed semantic model that networks information sources and people in a knowledge map that can be used to retrieve relevant information in an efficient manner. The use of formal models implies high modeling costs. We aim at striking a balance between complexity and costs. The use of ontologies guarantees the minimal complexity necessary to describe the semantics of high-level concepts. Yet modeling each and every one of these concepts manually would have been too costly and thus not practically feasible. Thus we decided to compute and learn the relations between concepts and terms automatically. The quality of the corpus influences our results heavily. Our tests on the OHSU-TREC-9 corpus for filtering and the CLMET showed that domain-specific corpora are more suitable for retrieval tasks, since they contain fewer polysemous words. The precision of the engine is enhanced by 0 to 15% depending on the corpus used and on the ontology model.

6.2 Feedback Another category of methods aiming at improving retrieval quality is based on user feedback. Since the invention of relevance feedback (Rocchio, 1971), different methods to explicitly collect and process user feedback have been developed. Yet such methods do not usually produce the needed amount of data, since users are normally reluctant to evaluate documents. Thus implicit methods were developed. They collect data from the user without requiring him to explicitly evaluate documents. Most techniques of this type monitor the time spent reading a document and some basic direct interactions such as clicking, scrolling, printing, etc. (Morita and Shinoda, 1994; Claypool et al., 2001). Kelly and Teevan (2003) give a good overview of current techniques used in this domain. Our technique differs from all those presented in both papers, because it operates on smaller pieces of information and uses the navigational behavior of the user along the nested tree representation to determine the feedback value. Furthermore, we do not learn the similarity between queries and documents. Instead we learn the semantics of the concepts used in the documents and try to map it to the semantics of the users.

6.3 Document segmentation White et al. (2004) evaluated six implicit feedback techniques on a new document representation for retrieval results. Their presentation approach is similar to the one presented here, since they present different views on the documents. Yet they do not segment documents into more granular pieces of information. A few algorithms exist to create chunks from unstructured documents, especially from PDF documents. One was introduced by Liensberger (2005). He detects segments by using a whitespace density graph and cuts the document along the "rivers of whitespace". This method is especially effective with documents with more than one column. With "Outside In", Stellent created a very powerful text conversion tool (Stellent, 2006). Until recently, the output of this transformation engine would have been plain text with no semantic information. It is now possible to convert documents to the "FlexionDoc" XML schema developed by Stellent, which contains a lot of additional and useful information. We are considering using this output as a basis for our chunk creation.
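As a rough, one-dimensional analogue of the whitespace-density idea mentioned above, the sketch below cuts plain text into chunks at runs of blank lines; it is our own simplified illustration and not Liensberger's algorithm, which operates on a two-dimensional whitespace density graph and handles multi-column layouts.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Very simplified, line-based illustration of whitespace-density segmentation:
 * runs of blank lines ("rivers of whitespace") are treated as cut points
 * between chunks.
 */
public class WhitespaceChunker {

    static List<String> chunk(String text, int minBlankRun) {
        List<String> chunks = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int blankRun = 0;
        for (String line : text.split("\n", -1)) {
            if (line.isBlank()) {
                blankRun++;
                if (blankRun >= minBlankRun && current.length() > 0) {
                    chunks.add(current.toString().trim());   // cut at the whitespace river
                    current.setLength(0);
                }
            } else {
                blankRun = 0;
                current.append(line).append('\n');
            }
        }
        if (current.length() > 0) chunks.add(current.toString().trim());
        return chunks;
    }

    public static void main(String[] args) {
        String doc = "Heading\nFirst paragraph.\n\n\nSecond paragraph, separated by a blank run.";
        System.out.println(chunk(doc, 2));
    }
}
```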

7. Further work We shall be concerned with implementing the rest of the functionality necessary to test our system in the firm context. This includes the automatic labeling of term clusters and the addition of other learning techniques for concept description. The document browser will be customized to enable intuitive work. A main focus will be improving the document segmentation, both in speed and in accuracy. Plans are to put aside the COM object model and use the XML functionality of Word 2003 or the Stellent tool "Outside In". Improved algorithms to create semantically coherent chunks will be tested. Later, we plan to implement a "chunkify-on-the-fly" feature, which will allow the segmentation of documents just in time.


References
Claypool, M., Le, P., Wased, M. and Brown, D. (2001) "Implicit interest indicators", in Proceedings of the 6th International Conference on Intelligent User Interfaces, pp. 33-40.
Kelly, D. and Teevan, J. (2003) "Implicit feedback for inferring user preference: a bibliography", SIGIR Forum, 37(2):18-28.
Liensberger, C. (2005) "Ideas for extracting data from an unstructured document", [online], http://www.chilisoftware.net/Private/Christian/ideas_for_extracting_data_from_unstructured_documents.pdf
Microsoft (2006) Microsoft COM Object Model Technology, [online], http://www.microsoft.com/com/default.mspx
Morita, M. and Shinoda, Y. (1994) "Information Filtering Based on User Behaviour Analysis and Best Match Text Retrieval", in W. Bruce Croft and C. J. van Rijsbergen (eds), Proceedings of the 17th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, pp. 272-281, Dublin, Ireland.
Muehlhoff, T. (2004) "Ein Konzern will wissen, was er weiß" ["A corporation wants to know what it knows"], in B. Petkoff, N. Gronau and T. Schildbauer (eds), Proceedings of KnowTech 2004, pp. 45-53, Munich, Germany.
PDFBox (2005) PDFBox Java PDF Library, [online], http://www.pdfbox.com/
Rocchio, J.J. (1971) The SMART Retrieval System: Experiments in Automatic Document Processing, Prentice Hall.
Salton, G. (1983) Introduction to Modern Information Retrieval, McGraw-Hill.
Stellent (2006) Stellent "Outside In", [online], http://www.stellent.com/de/produkte/outside_in/transform_server/index.htm
Taghva, K., Borsack, J. and Condit, A. (1998) "The Effectiveness of Thesauri-Aided Retrieval", Technical Report TR-98-01, Information Science Research Institute, University of Nevada, Las Vegas.
Voorhees, E.M. (1994) "On expanding query vectors with lexically related words", in D.K. Harman (ed), Proceedings of TREC-2, pp. 223-231, Gaithersburg, Maryland, USA.
Voorhees, E.M. and Hou, Y.-W. (1992) "Vector Expansion in a Large Collection", in D.K. Harman (ed), Proceedings of TREC-1, pp. 343-352, Gaithersburg, Maryland, USA.
Wang, Y.-C., Vandendorpe, J. and Evens, M. (1985) "Relational thesauri in information retrieval", Journal of the American Society for Information Science, 36(1):15-27.
White, R.W., Jose, J.M., van Rijsbergen, C.J. and Ruthven, I. (2004) "A Simulated Study of Implicit Feedback Models", in Proceedings of ECIR 2004, Sunderland, United Kingdom.
Zell, A. (1997) Simulation neuronaler Netze, Addison-Wesley.


A Holonomic Approach to Strategic Knowledge Elicitation Rosane Pagano and Alberto Paucar Manchester Metropolitan University Business School, UK [email protected] [email protected] Abstract: The strategic importance of the cyclic transformation of tacit knowledge into explicit knowledge for knowledge-creating organisations has been pointed out. Such complex transformation is in many instances supported by informal settings like knowledge cafes. Its criticality, however, deserves a more methodological approach. This paper examines the holon framework with the view of informing the systematic elicitation of strategic knowledge. It draws upon the experience and practical application of elicitation methods - in particular causal mapping and scenario building techniques - used by candidates for a Masters in Business Administration to elicit middle-to-senior managers' strategic knowledge. The two methods are benchmarked against the holonomic cycle of knowledge development with regard to their inclusiveness of strategic variables, leading to a more integrative approach to strategic knowledge elicitation. A predominantly qualitative methodology is used in this comparison.

Keywords: Knowledge elicitation methods, knowledge management, systems thinking

1. Introduction In the last twenty years there has been a change of strategic thinking paradigms in management. The turbulence and discontinuities encountered in the global business scenario made all the more clear the need for systematic knowledge of strategic planning. Despite the questioning of the feasibility of such planning in very chaotic times, Drucker (1968) recognized early on that managers have no alternative but to anticipate possible future scenarios and attempt to intervene in the business processes if their organizations are to survive. There was an urge for strategic planning to go beyond the traditional approach of mainly extrapolating past experience. Over this period of time, strategy became closer and closer to operations, strengthening the feedback loop that should exist between the two – effective operations may trigger a strategic issue and vice-versa. A manifestation of such close interaction is data warehousing in the retail sector. Thus, one can speak of strategic knowledge at various levels of the organisational structure. It is in this context that Masters candidates at Manchester Metropolitan University Business School (MMUBS) carry out their research projects, which quite often focus on strategic issues and involve eliciting strategic knowledge at different management levels of the organisation they work in. As action researchers, they influence strategy formulation and seek convergence from strategists. Typically the issues investigated stem from the perceived need of harnessing strategic competence (Hodgkinson and Sparrow, 2002), that is, achieving strategic responsiveness (renewal capacity in chaotic times). It means picking up and interpreting weak signals, which are indicative of change, and acquiring actionable knowledge (knowledge which is scientifically sound and also useful to practitioners). Nonaka (1995) has highlighted the importance of the tacit-to-explicit transformational cycle in knowledge creation and in organisational learning. The more effective the transformation, the more effective management intervention is likely to be. Hence the importance of methods and techniques. Representations of strategic knowledge improve the practice of strategic management. Application of knowledge elicitation methods as a basis for affecting the strategy process assists in overcoming cognitive bias and inertia, forcing strategists to explicitly recognize them. This paper draws upon the experience of samples of Masters candidates who used one of two techniques – causal mapping and scenario planning – for strategic knowledge elicitation. Although limited, this collection of field experiences in live organizations provided the authors (as supervisors) with some evidence to suggest that these methods should be positioned in a more encompassing framework that expresses the cycle of organisational learning. We argue for the holonomic framework. This could help practitioners, such as many of MMUBS Masters candidates, contributing to a more systematic and integrative approach to the elicitation of strategic knowledge held by decision-makers in organisations.

2. The integral holon in organisational learning The holon construct is not new. It was introduced by Koestler (1967) and has since evolved within organisational studies in an attempt to theorise at different levels the complex relations between organisational units. Recently Edwards (2005) has used this construct to integrate the essential characteristics of each of the most currently used research paradigms in organisational studies. This ensemble is referred to as his holonomic framework. In this paper we draw upon the aspect of the framework that deals with how the holon construct accommodates the organisational learning cycle, and go on to propose an integrative conceptual umbrella for two frequently applied methods of capturing strategic knowledge (scenario building and causal mapping). This is referred to in this paper as the holonomic cycle of knowledge development. The experience of practitioners on the Masters programme corroborates the view that a systematic approach to intelligence gathering can help implement an effective future-thinking strategy. The quadrants of the integral holon are consciousness, cultural, social, and behavioural, as indicated in Figure 1. They are the major areas in which development occurs. The coordinates of these quadrants encompass two dimensions: the interior-exterior dimension and the individual-collective dimension. The former represents the degree of subjective experience (or increasing objective behaviour). The latter represents the degree of self-agency (or increasing social communion). The mapping of a particular holon quadrant onto a particular phase of the learning cycle is summarized in Table 1. The mapping of a particular developmental dimension onto a particular set of learning skills is summarized in Table 2.

Figure 1: Developmental quadrants (adapted from Edwards 2005)

Table 1: Association between holon quadrants and learning phases (adapted from Edwards 2005)

HOLON QUADRANT | LEARNING PHASE
Consciousness  | Reflecting - experiential learning; observation; collecting the data
Cultural       | Meaning - interpreting the results; meaning-making
Social         | Testing - testing implications; discussing the findings
Behavioural    | Acting - physical action and involvement; performing the method


Table 2: Association between developmental dimensions and learning skills (adapted from Edwards 2005)

DEVELOPMENTAL DIMENSION | LEARNING SKILLS
Individuality           | Agentic - unifocal / sequential
Communality             | Communal - multifocal / random networks
Interior                | Abstract - imaginal / conceptual
Exterior                | Concrete - physical / interactional

The work of psychologists such as Piaget and Kolb have been influential in explaining individuals’ learning processes, and have been used as a basis to extend its explanatory power to the collective, such as organisational learning. One can recognise underlying Edwards’ integral cycle of learning a typical Kolb (1984) cycle, which is practical experience, reflection on the experience, forming generalizations, preparation for active experimentation, then further practical experience. In essence Edwards’ framework seems to extrapolate Kolb’s learning cycle from the individual to the social (organisations), although there are a few distinctions in the use of terminology. The use of scenario building and of cognitive mapping techniques in gathering business intelligence carries the challenge of capturing collective knowledge through individual experiences. Argyris and Schön (1974) suggested that workers have mental maps, which inform workers’ actions in particular, situations. Their theories of action have later evolved to the organisational learning perspective. They argued that ‘learning agents, discoveries, inventions, and evaluations’ should be encoded in individual images and the maps they construct with others. “There must be public representations of organisational theory-in-use to which individuals can refer. This is the function of organisational maps. These are the shared descriptions of the organisation which individuals jointly construct and use to guide their own inquiry.” (Argyris and Schön, 1978). Scenario building and cognitive mapping techniques viewed as the holonomic cycle of learning can help knowledge management practitioners, facilitators, or interventionists, to build such an important reference, especially in the organisational strategy. In sections 3 and 4 we will be reviewing those two techniques respectively, and interpreting them in the light of the holonomic framework. This interpretation is based on some of the experience of practitioners on the MMUBS Masters programme. Although they can be employed for the benefit of one individual alone, the use of these techniques in organisations often aim to reach a collective agreed view hold by a team of managers (strategic decision-makers). In this sense, they are a group technique to facilitate a group learning process.

3. Scenario building and the holonomic framework In the context of this study, scenarios are understood as alternative future-oriented views of the world which provide some grounds for better informed strategic planning and decision-making, and for circumventing future uncertainties. Scenario thinking has been applied since 1960s. In the last ten years however it has increasingly been used as a tool for strategic planning, knowledge management, and a means of constructing a shared view among team members. The benefit of the scenario approach reported by practitioners is that it contributes to a more effective change management; more effective assessment of existing strategies and options; more effective risk management; more cohesive mental maps of a management team and a more speedy learning cycle. In the shorter term, it increases organisational adaptability through a more skilful observation of the business environment, and in the longer term, it develops organizational resilience. Despite the perceived benefits, it is worth noting though that there is very little in terms of formal comparative evaluation of this technique with others. The appeal to managers of a scenario-based intervention is that it is systematic yet highly flexible, and also highly participative. It forces strategists to explicitly confront the implications of their options for the current strategy and to confront the operating beliefs underlying their decisionmaking process. It does involve though extensive data gathering. Scenario building consists of three broad stages, as identified by Hodgkinson and Sparrow (2002). First, to develop a master list of key forces (around 100) that would be likely to determine the longer-term shape of the industry (social, political, technological and economic). Second, the key forces identified are rated for their

overall importance in terms of shaping the future and classified into trends and uncertainties. Trends are forces that are deemed to be predictable (in overall direction and impact within the time frame under consideration). Uncertainties are forces that are deemed to be unpredictable. In the third stage this material is then used to develop a range of scenarios along two orthogonal dimensions. One dimension goes from the traditional to a new business model and the second goes from minor to major changes. Typically there would be four predominant scenarios, one in each quadrant. This 'orthogonality' means that scenarios are reasonably distinctive and probably more helpful to decision-makers. Ringland (2002) provides a more detailed account of the scenario building process, helpful to practitioners, describing it as a series of twelve stages rather than the three broad ones envisaged by Hodgkinson and Sparrow. Also, Ringland includes stages beyond the production of the scenarios themselves, that is, ones that we relate to the learning phases of 'Testing' (implications of the findings) and 'Acting' (choice of options and implementation plan). What Hodgkinson and Sparrow portray is essentially Ringland's stages two to five. Ringland's twelve stages for developing scenarios have been well tried. We adapted and mapped them onto the holonomic cycle of knowledge development. Each stage is associated with a holon quadrant as indicated in Table 3.

Table 3: Scenario building as a holonomic cycle of knowledge development

HOLON QUADRANT | STAGES OF SCENARIO BUILDING
Consciousness  | 1. Identify focal strategic issue; 2. Identify key micro economic forces; 3. Identify driving macro economic forces
Cultural       | 4. Rank by importance and uncertainty; 5. Select the scenario logic; 6. Flesh out the scenarios
Social         | 7. Test implications for strategy; 8. Selection of leading indicators
Behavioural    | 9. Feed the scenarios back to those consulted; 10. Discuss the strategic option; 11. Agree the implementation plan; 12. Publicize the scenarios
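Purely as a mechanical illustration of stages four and five (rating forces and choosing two orthogonal scenario dimensions), the sketch below tabulates invented force ratings and picks the two most important uncertainties as axes; it is not part of the method described by Ringland or by Hodgkinson and Sparrow, and all names and ratings are hypothetical.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative sketch only: forces are rated for importance and uncertainty;
 * the two most important of the unpredictable forces become the orthogonal
 * scenario dimensions, whose high/low outcomes span four scenarios.
 */
public class ScenarioAxes {

    record Force(String name, int importance, int uncertainty) {}   // 1..5 ratings

    public static void main(String[] args) {
        List<Force> forces = List.of(
            new Force("Energy prices", 5, 5),
            new Force("Regulation of emissions", 5, 4),
            new Force("Demographic ageing", 4, 1),
            new Force("Broadband penetration", 3, 2));

        List<Force> uncertainties = forces.stream()
            .filter(f -> f.uncertainty() >= 3)                      // unpredictable forces
            .sorted(Comparator.comparingInt(Force::importance).reversed())
            .toList();

        Force axis1 = uncertainties.get(0), axis2 = uncertainties.get(1);
        System.out.println("Scenario axes: " + axis1.name() + " x " + axis2.name());
        // The four scenarios are the quadrants spanned by high/low outcomes of both axes.
    }
}
```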

The Consciousness quadrant ('Reflecting' learning phase) encompasses stages one to three. Stage one involves identifying the major decision or the big strategic question, for example "What are the long-term structural options for Texaco's participation and leadership in the energy industry?" (Ringland, 2002). This is followed by the elicitation of forces (stage two) in the local environment, which are key to influencing the success of decisions, i.e. the choice of options opened up by the strategic question. These microeconomic forces are characteristically elicited by interviews, workshops, and observations. Research methods for data gathering and data analysis are used here. Research is also required in stage three to elicit macroeconomic forces that will affect forces in the local environment. Typically a PEST analysis is used to identify major trends. This quadrant (consciousness) requires a combination of agentic (individual) and abstract (interior) learning skills from those involved in the process in order to reflect and tease out the elements that will make up the fabric of collective knowledge (added value) later in the development cycle. The Cultural quadrant ('Meaning' learning phase) encompasses stages four to six. In stage four microeconomic forces and macroeconomic forces are rated on their level of importance in shaping the future (influencing the outcome of the strategic question defined in stage one), and on their level of uncertainty. Then, in stage five, typically two predominant macroeconomic forces are selected as the two orthogonal axes (dimensions) against which four perceptibly different scenarios are devised. Interweaving the prioritized microeconomic forces into a story line fleshes each scenario out. The Cultural quadrant requires a high degree of abstraction and conceptualization from those involved (as in the Consciousness quadrant) but here in combination with communal, multifocal learning skills, working towards a shared vision (scenario). Micro and macro economic forces need to be networked in a narrative, usually involving cause-effect relationships and a characterization of the people involved. The Social quadrant ('Testing' learning phase) encompasses stages seven and eight. It entails rehearsing the answers to the strategic question against each scenario and testing the implications for the organisation. It also entails finding a way of monitoring which scenario is looming in the real world. This is accomplished by selecting economic indicators that are specific to a particular scenario. The Social quadrant requires intense communal, multifocal learning skills from those involved, combined with interactional skills, in order to discuss and validate the findings. The Behavioural quadrant ('Acting' learning phase) encompasses stages nine to twelve. The underlying principle of this quadrant is to turn the findings and collective knowledge into actions that can be carried out by individuals. In stage nine, one-to-one feedback on the scenarios created is given to those interviewed or consulted in stage two, so that they can apply the scenarios to a decision they mentioned. A set of options is generated against each scenario and each option is graded from very negative to very positive. The choice of options will depend on organisational capabilities. The scenario project owner then agrees a plan to implement the selected options and to publicize the scenarios widely. The Behavioural quadrant requires a high degree of outward-looking, concrete learning skills, combined with a degree of self-agency.

4. Causal mapping and the holonomic framework Causal mapping is a technique that can be particularly helpful in developing effective strategies. Its operators are in essence quite simple. A causal map – the outcome of causal mapping – is an influence diagram in which actions or ideas, represented as statements, are connected through arrows representing a causal relationship. Actions or ideas are sometimes referred to as strategic variables, which are depicted as nodes in the influence diagram. The arrowheads connecting variables depict the directions of causality, terminating on dependent variables. There can be a perceived positive causal relationship or a perceived negative causal relationship on depend variables. An example of a situation to be mapped could be the impact of environmental change on organizational performance of a railway company. Variables might be floods; droughts; freight revenue; costs of physical repairs; encroachment of the highway carrier; inflationary spiral of wages; etc. (Hodgkinson and Sparrow, 2002). The purpose of causal mapping is to make explicit pathways between strategic thinking and actions. It is a structured mechanism for intervening in the strategic management process, which has increasingly been used to elicit managers’ strategic knowledge in consultancy projects. Masters candidates often take up the role of internal consultants when researching and writing their dissertation. Maps help teams reflect critically and systematically on strategic issues, investigate collective logics of organizations, and capture managers’ causal belief systems. The technique draws upon documentary sources of evidence or interviews, makes use of dynamic media (interactive computer software, whiteboards) and requires a facilitator. Usually maps are developed iteratively over a period of two or three days. Hodgkinson and Sparrow (2002) point out some limitations of each source of evidence. Documentary sources are non-intrusive, but often prepared for particular audiences, and therefore potentially biased. Direct elicitation procedures, on the other hand, are intrusive, but focused on issues of concern to the investigation. Bryson et al (2004) provides a detailed account of the causal mapping technique for practical business results, helpful to practitioners, identifying process guidelines for constructing causal maps. We have translated Bryson’s well-applied guidelines into ten steps for applying this technique and have mapped them onto the holonomic cycle of knowledge development. Each stage is associated to a holon quadrant as indicated in Table 4. The Consciousness quadrant (‘Reflecting’ learning phase) encompasses steps one to three. Step one involves identifying when a situation or strategic issue is suitable for mapping. Maps have the potential to be a lot more detailed than scenarios due to the causal links. This level of detail may not be required. In step two, members of the team responsible for constructing the map are asked to brainstorm individually as many ideas as possible, with little censure, as the objective is to encourage creative thinking in terms of options (who, what, where, when). Ideas (statements) are labelled (usually numbered) to facilitate linking. In step three the facilitator organises ideas according to their level of generality, establishing a preliminary causal linkage. Goal related statements will tend to be at the top of the map whilst action related statements will tend to be nearer the bottom. 
Performing the steps in this quadrant (consciousness) requires mainly a agentic (individual) and abstract (interior) learning skills from the ones involved in the process in order to tease out the nodes which will make up the scope (number of nodes) and depth (nature of the nodes) of the causal network.

752

The 7th European Conference on Knowledge Management

Table 4: Causal mapping as a holonomic cycle of knowledge development HOLON QUADRANT Consciousness Cultural Social Behavioural

STEPS FOR CAUSAL MAPPING

1. Identify the situation to be mapped 2. Brainstorm ideas 3. Establish a preliminary hierarchy of ideas 4. Categorize clusters of ideas 5. Identify goals, options, actions and assertions 6. Link statements 7. Check the map for consistence and completeness 8. Analyse the map 9. Use the map to inform decision-making 10. Publicize the mapping exercise

The Cultural quadrant (‘Meaning’ learning phase) encompasses steps four to six. In step four the group collectively is asked to recognize a theme or subject for a cluster of connected nodes (ideas). This will become the cluster heading and will encompass goals and actions. Redundant statements should be removed at this stage. Possibly a statement will need to be broken down in order to make clearer their connection to others. In step five, the team should come to an agreement on the nature of each node in the causal network, that is, as a goal, an option (strategic issue), an action or an assertion. “Much of the power in a map comes from showing how bundles of actions add up to strategies, and how strategies can lead to the achievement of goals or purposes. […] In order words, the strategic issue is a bundle of possible actions from which comes the set of actions actually chosen – the strategy.” (Bryson, 2004). Step six is the (non-trivial) team exercise of determining the direction of an arrow, that is, determining the influence of one node on another. The argument underlying how two nodes are connected should be well rehearsed at this stage. The Cultural quadrant requires a high degree of abstraction and conceptualization from the team (as in the Consciousness quadrant) but here mainly in combination with communal, multifocal learning skills (distributed learning). The Social quadrant (‘Testing’ learning phase) encompasses steps seven and eight. Step seven is to test out the map for completeness – all statements or nodes are associated to at least one connection – and for consistence – each goal is the logical consequence of a bundle of actions as in a logical story line (causal thinking). Step eight is to look at priorities, ranking goals in terms of importance and considering trade-offs that can affect team’s subsequent commitment to a course of action (negative goals). Statements displaying a large number of links might be a critical option or action to embrace in order to achieve several goals. The Social quadrant requires intense communal, multifocal learning skills from the ones involved, combined with interactional skills, concrete and externally oriented. The Behavioural quadrant (‘Acting’ learning phase) encompasses stages nine and ten. The underlying principle of this quadrant is to turn the findings and collective knowledge into actions that can be carried out by individuals. Options are chosen and tasks assigned to individual participants. The causal map should then be published to the wider organisational environment, reinforcing important learning points. The Behavioural quadrant requires a high degree of concrete learning skills outward looking, with more focus on a degree of self-agency. The strong principle underlying cognitive mapping is that of system thinking, as pointed out by Sherwood (1999). The notion of a cognitive map as being a diagram of influences among entities belonging to the system reflects the view that, in order to understand (predict and control) a system, one needs to understand more than the individual parts alone. The concept of connectedness is as fundamental to causal mapping as it is to system thinking. One of the landmarks in the use of such approach in management is the work of Checkland (1981), which is embodied by his Soft Systems Methodology (SSM). SSM is a process for managing in the broad sense: a process of achieving organised action. 
According to Checkland, the life world is an ever-changing flux of events and ideas, and ‘managing’ means reacting to that flux. We perceive and evaluate, and take action(s) which themselves become part of this flux, leading to further perceptions and evaluations and to more actions, and so on. It follows that SSM assumes that different actors in the situation will perceive and evaluate this flux differently, creating issues with which the manager must cope. Here, SSM offers managers systems ideas as a helpful means of tackling the problematic situations arising from these issues. The outside world seems highly interconnected, forming wholes, and so the concept of ‘system’ can help us to cope with the intertwined reality we perceive. The methodology is summarised in the following seven stages:


1. identify the situation considered problematic;
2. express the problem situation;
3. name relevant human activity systems in ‘root definitions’;
4. build conceptual models from the root definitions;
5. compare the models with the real-world situation;
6. identify systemically desirable and culturally feasible changes;
7. take action in the situation to bring about some improvement.

The basic structure of SSM rests on the idea that, in order to tackle real-world situations, we need to make sure that the ‘real world’ is separated from the ‘systems thinking world’. This distinction is crucial for SSM because it ensures that we do not see systems ‘out there’ in the real world. SSM urges us to consider ‘systems’ as abstract concepts, whose use can eventually help us to bring some improvement to the situation concerned. Although Checkland has expressed a more flexible way of applying his ideas in his later work, the seven-stage methodology is still the most convincing and helpful account of the systems thinking enquiry.

5. Summary

We have described and examined the steps for the application of two strategic knowledge elicitation techniques – scenario building and causal mapping – with regard to the learning skills required at each step. This led us to represent the application process of these techniques as a holonomic cycle of knowledge development, giving them a common integrative approach. Each methodological step was associated with one of the holon quadrants – consciousness, cultural, social and behavioural.

In scenario building, steps one to three require a combination of agentic (individual) and abstract (interior) learning skills in order to tease out the elements that will make up the fabric of collective knowledge later in the development cycle; these steps have therefore been positioned in the consciousness quadrant. Steps four to six call for a high degree of abstraction and conceptualization in combination with communal, networking learning skills, and have been positioned in the cultural quadrant. Steps seven and eight make intensive use of participants’ communal, multifocal learning skills combined with interactional skills, in order to discuss and validate the findings, and have been positioned in the social quadrant. Finally, in steps nine to twelve team members should display a high degree of concrete, outward-looking learning skills with emphasis on self-agency, and these steps have been positioned in the behavioural quadrant.

In causal mapping, performing steps one to three requires mainly an agentic (individual) and abstract (interior) set of learning skills from those involved in the process, in order to elicit the scope (number of nodes) and depth (nature of the nodes) of the causal network; these steps have therefore been positioned in the consciousness quadrant. In steps four to six, predominantly abstract and multifocal learning skills are required from the team, positioning these steps in the cultural quadrant. Steps seven and eight call upon networking skills combined with practical, externally oriented skills, and have been positioned in the social quadrant. Finally, steps nine and ten make intensive use of concrete, outward-looking learning skills, with emphasis on self-agency, positioning them in the behavioural quadrant.
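For readers who prefer a compact restatement, the step-to-quadrant groupings summarised above can be written out as a simple lookup table. The sketch below merely re-encodes the ranges given in the text; it is an illustrative aid, not part of either published technique.

# Step ranges for each holon quadrant, as summarised in the text above.
HOLON_QUADRANTS = {
    "scenario building": {
        "consciousness": range(1, 4),    # steps 1-3
        "cultural":      range(4, 7),    # steps 4-6
        "social":        range(7, 9),    # steps 7-8
        "behavioural":   range(9, 13),   # steps 9-12
    },
    "causal mapping": {
        "consciousness": range(1, 4),    # steps 1-3
        "cultural":      range(4, 7),    # steps 4-6
        "social":        range(7, 9),    # steps 7-8
        "behavioural":   range(9, 11),   # steps 9-10
    },
}

def quadrant_of(technique, step):
    """Return the holon quadrant to which a given methodological step belongs."""
    for quadrant, steps in HOLON_QUADRANTS[technique].items():
        if step in steps:
            return quadrant
    raise ValueError(f"step {step} is not defined for {technique}")

print(quadrant_of("causal mapping", 7))   # -> 'social'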

References
Argyris, C. and Schön, D. (1974) Theory in Practice: increasing professional effectiveness, Jossey-Bass, San Francisco.
Argyris, C. and Schön, D. (1978) Organizational Learning: a theory of action perspective, Jossey-Bass, San Francisco.
Bryson, J. et al (2004) Visible Thinking, John Wiley, Chichester.
Checkland, P. (1981) Systems Thinking, Systems Practice, John Wiley, Chichester.
Drucker, P. (1968) The Practice of Management, Pan, London.
Edwards, M. (2005) “The Integral Holon: a holonomic approach to organisational change and transformation”, Journal of Organizational Change Management, Vol 18, No. 3, pp 269-288.
Hodgkinson, G. and Sparrow, J. (2002) The Competent Organization, Open University Press, Buckingham.
Koestler, A. (1967) The Ghost in the Machine, Arkana, London.
Kolb, D. (1984) Experiential Learning, Prentice Hall.
Nonaka, I. and Takeuchi, H. (1995) The Knowledge-Creating Company: how Japanese companies create the dynamics of innovation, Oxford University Press, Oxford.
Ringland, G. (2002) Scenarios in Business, John Wiley, Chichester.
Sherwood, D. (1999) Seeing the Forest for the Trees: a manager’s guide to applying systems thinking, Nicholas Brealey, London.


Facilitating the Generation of Research Questions using a Socratic Dialogue

Dan Remenyi
School of Systems and Data Studies, Trinity College Dublin, Ireland
[email protected]

Abstract: The Socratic Dialogue may be used as a technique for facilitating conversation and reflection as a means of developing suitable research questions. The essence of this approach is that the discussants share experiences and in so doing they not only learn from each other but also identify interesting areas for research. The technique is effected through a structured conversation or discourse which is used to explore an agreed issue. The Socratic Dialogue ‘conversation’ consists of three steps or stages and is suitable for six to eight participants who may be colleagues or knowledge informants. It also requires a competent facilitator who can guide the group through its conversation and reflection.

Keywords: Socratic dialogue, discourse, learning, discovery, reflection, research questions, dialectic, conversation, experiences

1. Introduction

In business and management studies research questions often arise as a result of conversations and reflection. This paper discusses how the Socratic Dialogue, which is a technique for the encouragement of conversation and reflection (Saran and Neisser 2004), may be used to facilitate the development of research questions. Developing a research question is a very personal matter and mechanistic approaches will not often facilitate useful questions, and thus the Socratic Dialogue alone would not be an ideal approach to settling on a research question. However, if it is used correctly it can point the way to topics and issues which may then lead to research questions.

Finding a suitable research question is a non-trivial part of any research project or research degree. The difficulty arises because the research question needs to have a number of characteristics which are sometimes hard to fulfil. It is important that a research question has most if not all of the following attributes. The research question needs to be 1:
1. of direct interest to the researcher and to his/her supervisor;
2. based on a real problem recognised by the academic community or practitioners in the field, and preferably both;
3. associated with one or two established fields of academic study, and preferably not more than two;
4. referred to directly or indirectly in the academic literature;
5. answerable by academic research practice;
6. if empirical, then appropriate evidence is, or will become, available;
7. neither too wide-ranging nor so complex that it is unreasonable to expect it to be answered within the usual time allowed for a research degree;
8. reducible to sub-questions which may be answered directly by the collection of evidence;
9. capable of being expressed in a clear and unambiguous manner.

Because of the difficulty in finding a suitable research question, research degrees are sometimes started without a clear question having been established, in the hope that one may arise as a result of exploratory research during the first few years of the degree. However, it is not unusual for research degree candidates to be still struggling for a suitable research question some years after the research degree has been formally commenced 2.
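Because the nine attributes listed above effectively form a screening checklist, a candidate or supervisor could record them in a simple structure and test a draft question against it. The sketch below is purely illustrative – the shortened labels and the helper function are assumptions of this example, not something proposed by the paper – and it presumes that each attribute is still judged by hand.

# Hypothetical checklist based on the nine attributes of a research question listed above.
RQ_ATTRIBUTES = [
    "of direct interest to the researcher and supervisor",
    "based on a real problem (academic and/or practitioner)",
    "associated with one or two established fields of study",
    "referred to directly or indirectly in the literature",
    "answerable by academic research practice",
    "if empirical, appropriate evidence is or will become available",
    "neither too wide-ranging nor too complex for a research degree",
    "reducible to sub-questions answerable by collecting evidence",
    "expressible in a clear and unambiguous manner",
]

def unmet_attributes(judgements):
    """Given a dict mapping attribute -> bool (judged by hand), list what remains unmet."""
    return [attribute for attribute in RQ_ATTRIBUTES if not judgements.get(attribute, False)]

# Example: a draft question judged to satisfy everything except the literature link.
draft = {attribute: True for attribute in RQ_ATTRIBUTES}
draft["referred to directly or indirectly in the literature"] = False
print(unmet_attributes(draft))   # -> ['referred to directly or indirectly in the literature']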

1 Some of these attributes of a research question are interdependent and the list should not be considered to be in order of importance.
2 Research degree practice in the United Kingdom today normally requires a research degree candidate to have submitted a research proposal which should include a fully developed research question. However, even where this is strictly adhered to, research degree candidates can change their minds and find themselves without a research question well into their degree registration.


2. The question of the question

Advice offered to those commencing a research degree has traditionally been that a research question needs to be found through a thorough investigation and understanding of the academic literature. This meant that a research question would be discovered with the aid of some previous researcher identifying a problem and pointing out its potential for future research, either in a published paper or perhaps in the final chapter of a research dissertation. A strict implementation of this approach meant that real problems experienced by practitioners, unless they were also referred to in the literature, were not eligible to be considered as a suitable research degree subject. It is now thought that this approach is too restrictive. Insisting that the research question originate in the academic literature assisted in establishing and maintaining rigorous research standards, but at the same time it also led to research questions being less relevant than they could or indeed should have been.

It has now been established in the business and management field of study that a research question may be found by reference to current business situations – problems, challenges or opportunities. This does not, of course, release the research degree candidate from knowing the literature, but it does give the individual a different starting point. The problem which is now faced is that current business problems, challenges or opportunities are often not understood or expressed in such a way that they can be easily reduced to usable academic research questions. Sometimes what management suggests as a problem is actually not the case 3. Sometimes managers are not especially articulate about their problems and challenges. Sometimes the expression of the issues is too vague or high level to be used as a research question. In these circumstances the research question has to be teased out so that the issues are clear and the question complies with all or most of the question characteristics described above. Teasing out questions usually means discussing the issues with several people and trying to find some degree of consensus. The Socratic Dialogue is a technique for facilitating this, as an appropriately structured conversation and subsequent reflection can assist in highlighting the real research issues.

3. What is this Socratic Dialogue?

The Socratic Dialogue is so called because it is based on a question and answer process not entirely dissimilar to the dialectic described by Plato, who attributed the development of that technique to Socrates. The traditional dialectic was originally conceived as a conversation between two people. Socrates, when he visited the marketplace in Athens, is reputed to have engaged individuals in controversial conversation and argued with them one at a time. He is said to have been a past master at this and was always able to point out the weaknesses and the fallacies in other people’s arguments 4. Socrates normally achieved this by pointing to experiences which he and his victim could agree supported Socrates’ argument. In the 21st century this one-to-one approach would take too long to have any real impact even among relatively small groups of academics and their informants. In any event, it is now felt that perhaps more depth as well as breadth can be achieved when a group of individuals is involved in these dialogues, as this offers a wider range of different insights.

The original Socratic model relied on one ‘master’ who dialogued with the learners. The new approach allows multiple individuals to benefit from multiple experiences. There is quite intentionally no ‘master’. Everyone’s experience is equally valid and the facilitator plays a background role. The Socratic Dialogue is specifically not a debate – it is an exchange and exploration of experiences. Although insights will come from participants hearing themselves speak as well as engaging in discussion, most of the benefit comes from active listening. What is said by the participants of the Socratic Dialogue is of course key to its success, and here the facilitator needs to be careful that only the experiences of the participants are discussed.

3. What is this Socratic Dialogue? The Socratic Dialogue is so called because it is based on a question and answer process which is not entirely dissimilar to that described by Plato as the dialectic and who attributed the development of that technique to Socrates. The traditional dialectic was originally conceived as a conversation between two people. Socrates, when he visited the marketplace in Athens, is reputed to have engaged individuals in controversial conversation and argued with them one at a time. He is said to have been a past master at this and was always able to point out the weaknesses and the fallacies in other people’s arguments 4 . Socrates normally achieved this by pointing to experiences which he and his victim could agree supported Socrates argument. In the 21st century this one-to-one approach would take too long to have any real impact even among relatively small groups of academics and their informants. In any event, it is now felt that perhaps more depth as well as breadth can be achieved when a group of individuals are involved in these dialogues as this offers a wider range of different insights. The original Socratic model relied on one ‘master’ who dialogued with the learners. The new approach allows multiple individuals to benefit from multiple experiences. There is quite intentionally no ‘master’. Everyone’s experience is equally valid and the facilitator plays a background role. The Socratic Dialogue is specifically not a debate – it an exchange and exploration of experiences. Although insights will come from participants hearing they speak as well as engaging in discussion, most of the benefit comes from active listening. What is said by the participants of the Socratic Dialogue is of course key to its success and here the facilitator needs to be careful that only experiences of the participants are discussed. Thus to be a member of a to, research degree candidates can change their minds and find themselves without a research question well into their degree registration. 3 It has sometimes been said that management is not really the art of solving problems but rather the art of finding out exactly what the problem is. 4 As Socrates did not write an account of this himself we only have Plato’s account of what actually took place.


Thus to be a member of a Socratic Dialogue and to get full benefit from it, an individual needs to be enthusiastic about sharing experiences and hearing the thoughts of others 5. It is also the case that all the individuals who participate in the Socratic Dialogue may benefit from the conversation and reflection. It is not simply a tool which will benefit the researcher alone.

4. The Socratic Dialogue applied to research questions

The first step in the Socratic Dialogue involves bringing a group of appropriate people together, say six to eight, to closely examine an opening question 6. Ideally the question should be made known to the individuals in advance, and they all need to be familiar with the issues involved and be prepared to share their experiences with each other. The questions which are amenable to this are wide-ranging and many. A few examples of such questions are:
1. What is the nature of effective knowledge management?
2. How could the real benefits of intellectual capital be harvested?
3. What is the nature of good leadership?
4. How are we to improve our understanding of our client base?
5. What would help us achieve a greater commitment from our colleagues to our corporate strategy?
6. In what way could we obtain a greater delivery of benefits from our information systems?
7. What are the most critical human resource challenges which we currently face?

These examples of an opening question are, in academic research parlance, possible research topics that can be explored with a view to helping in the establishment of a research question. It is important to focus on the fact that the conversation generated during the Socratic Dialogue is required to be entirely experientially based. This means that participants are not allowed to discuss the question in terms of anything other than their own personal experiences of the issues involved. Mentioning theoretical views on the subject is generally not allowed.

5. Discovery as a way to knowledge

At the heart of the Socratic Dialogue is the notion of learning through discovery. This discovery operates by creating an opportunity for debate and reflection in a non-threatening conversational setting. The objective is to enhance understanding by discussing the actual experiences of the group. The group needs to be open and non-adversarial, and needs to treat all opinions as equally valid and important. Ideally the group should be relatively small. Members of the group may come from diverse backgrounds, but care has to be taken with individuals from different status levels. In addition to examining research topics and research questions, the Socratic Dialogue is generally used to:
1. encourage participants to think independently and critically, and to then reflect on that thinking;
2. build self-confidence in individuals’ own thinking;
3. answer a philosophically oriented question and endeavour to reach consensus, i.e. to reach an outcome;
4. engage in the co-operative activity of seeking answers to questions and to understand each other through the exploration of concrete experiences;
5. deepen individual insights and understandings.

5 See http://www.philodialogue.com/Authenticity.htm viewed 25 March 2006.


Ideally the Socratic Dialogue leads to a consensus among the individuals, but participants may not always reach a definitive outcome in the form of a totally agreed research question or even a set of possible research questions. This should not necessarily be seen as a failure, as the Socratic Dialogue experience itself promotes reflective learning which is useful in the research process.

6. The process of a Socratic Dialogue 7

The process of a Socratic Dialogue requires the use of a set of guidelines or rules which, although not inflexible, need to be kept consciously in the mind of the facilitator. At the outset it needs to be said that no philosophical training 8 is needed to be part of a Socratic Dialogue. Openness and an interest in learning from others are perhaps the only qualifications required to derive benefit from a Socratic Dialogue. A firm but sensitive facilitator is required who can ensure that the group does not stray too far off the subject and that no one person dominates the discourse. Before the group assembles, an appropriate question needs to be established. This question needs to be well formulated and general in nature. As it is effectively a research topic, it should not be a long and complex question. Each member of the group needs to be made aware of the question in advance and asked to reflect on their experience regarding the issue in the question. The only qualification for being a participant in a group is that the individual has some direct experience of the subject matter being discussed.

7. Discourse in five parts

There are five parts to a Socratic Dialogue. These are:
1. The telling of the original stories;
2. The choosing of one story to explore in detail;
3. The retelling of the story and the detailed conversation and reflection;
4. The compiling of what has been understood and learnt from this detail;
5. The confirmation of how these understandings relate to all the original stories.

When the group assembles, each participant is asked to recall a real-life experience which is directly related to the question or topic. Each story should be told in no more than three minutes, and the facilitator should keep the speakers to the time limit. A list of titles or key words for these different stories should be compiled by the facilitator as they are being told. When all the stories have been told, the group itself chooses one of these stories or experiences to examine in detail. Specifically, the facilitator should not influence the choice of the story. It may take a while for consensus to be reached as to which experience is the most relevant or the most effective to examine closely. The member of the group who has provided this story, and who is referred to as the example giver, needs to be prepared to recount the story in as much detail as he or she can recall, and also be prepared to be questioned about this experience by the other members of the group.

The story is now retold by the example giver in five to ten minutes. This provides a broader and more in-depth perspective of the experience being discussed. Members of the group are encouraged to ask questions which come into their mind without too much analytical thought.

7 A Socratic Dialogue is not to be confused with the Socratic Dialogues (Greek Σωκρατικός λόγος or Σωκρατικός διάλογος), which are prose literary works developed in Greece at the turn of the fourth century BCE, preserved today in the dialogues of Plato and the Socratic works of Xenophon – either dramatic or narrative – in which characters discuss moral and philosophical problems, illustrating the Socratic method. Socrates is often the main character. See http://en.wikipedia.org/wiki/Socratic_Dialogue viewed 25 March 2006.
8 See http://www.sfcp.org.uk/socratic_dialogue.htm viewed 25 March 2006.


These questions may seek background information or details of the actions, attitudes and values of the individuals involved in the example. This part of the Socratic Dialogue may take between one and two hours. The retelling of the story of the real-life experience needs to be transcribed, at least in summary, as do all significant comments made by the other members of the group. The discourse between the members of the group needs to unfold slowly and carefully, with as much consensus in understanding as possible being established on a step-by-step basis. Here any conflicts (differences of opinion or differences in usage of language) should be resolved if possible, and a genuine attempt should be made to formulate shared insights and knowledge. This will become the overview of the discourse or dialogue and is used to summarise the key issues raised during the dialogue. This activity will need about half an hour.

Each member of the group is now asked if their individual story relates to the summary created. The extent to which the summary reflects the essence of each story is discussed, and the summary is changed if this discussion reveals new dimensions of the story. This process may require up to half an hour.
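Purely as an illustration of the timings mentioned in this section, a facilitator might lay the five parts out as a simple agenda. The durations below are the indicative figures from the text, except the time for choosing a story, for which the paper gives no figure and which is therefore an assumption; the data structure itself is an editorial sketch, not part of the technique.

# Hypothetical agenda for a Socratic Dialogue, using the indicative timings above (minutes).
AGENDA = [
    ("Telling of the original stories", 3, "per participant"),
    ("Choosing one story to explore in detail", 15, "assumed - no figure given in the text"),
    ("Retelling of the chosen story by the example giver", 10, "five to ten minutes"),
    ("Questioning, conversation and reflection", 120, "one to two hours"),
    ("Compiling the overview of what has been learnt", 30, "about half an hour"),
    ("Relating the overview back to the original stories", 30, "up to half an hour"),
]

def estimated_minutes(participants):
    """Rough running time for the whole dialogue, excluding refreshment breaks."""
    total = 0
    for _, minutes, note in AGENDA:
        total += minutes * participants if note == "per participant" else minutes
    return total

print(estimated_minutes(8))   # -> 229 minutes for an eight-person group

A figure of a little under four hours for an eight-person group is broadly in line with the guidance on timing given later in the paper.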

8. Drawing out research questions

Once the overview has been established, the discussion needs to turn to the issue of research questions. The overview, as a statement based on a summary of individual experiences, will inevitably contain issues which could be further explored. Identifying these issues will produce a list of possible research questions. As mentioned above, the Socratic Dialogue cannot be relied on to produce research questions in any mechanistic way. What it does is produce a collective understanding of a topic which will usually lend itself to further exploration and thus to research questions.

9. Making the Socratic Dialogue work

The Socratic Dialogue is actually an exercise in learning through exchanging experiential knowledge that leads to new insights or discoveries by reflection. The group members need to be honest in the recounting of their experiences and to be honest in describing their personal reactions to the experiences of other members. The Socratic Dialogue should be as judgement-free as possible. Any suggestion that a member of the group is being judged by the others or by the facilitator will reduce the value of the event. Members’ thoughts need to be expressed as clearly and simply as possible. If something is not clear then it should be discussed until everyone understands the point being made. This requires careful attention to the discussion at all times. If members of the dialogue become tired then they should advise the facilitator, who may call for a break for refreshments. There should in fact be a number of refreshment breaks, as in general running a Socratic Dialogue for more than an hour and a half without a break is not advised.

It is important that no one individual or group of individuals dominates the Socratic Dialogue. If this occurs then any member of the dialogue may ask the facilitator for a break for the purposes of having a meta-discussion. A meta-discussion involves talking about how the Socratic Dialogue is proceeding. Normally there is little need to have these meta-discussions. When they do arise they should be concluded as quickly as possible so that the group may return to the main purpose of the event. If any member of the dialogue feels that the discussion is drifting away from its original purpose then he or she may also call for a strategy-discussion. Strategy discussions are used to refocus the dialogue if it has wandered away from the original point.

10. The facilitator and the Socratic Dialogue

A skilled facilitator greatly enhances the Socratic Dialogue. The function of the facilitator is to ensure the smooth running of the meeting and to bring all the members of the group into the discussion. The facilitator will normally keep the record of the different experiences and will also act as scribe for the overview of the detailed study of the chosen experience. It is important that the facilitator does not attempt to influence the direction of the discourse.


The discussion may stray off the subject and, provided these lapses are not too long or too far from the question, they may be tolerated. However, in the end the facilitator should bring the discussion back to the issue for which the Socratic Dialogue was initiated.

11. Timing and the Socratic Dialogue

A Socratic Dialogue may be structured in several different ways. Firstly, a Socratic Dialogue may be run as one continuous event. Of course this does not imply that there are no refreshment breaks. In such a case, with a large group of, say, ten people, a whole day of eight hours would be required to complete an event. Some events may require a little longer, perhaps twelve hours. Smaller groups may require less time, say three or four hours. In some circumstances it may not be possible for individuals to make such a long period of time available. In these cases the Socratic Dialogue may be conducted in a number of parts or a number of sessions. The first session might run until the group has listed its experiences and perhaps chosen the one which they wish to study in detail. Then two or three sessions of one hour each could be put aside for elaborating on the one chosen example. In this way the Socratic Dialogue experience could be obtained over an extended period.

12. Some final points

The Socratic Dialogue facilitates learning through discovery by reflecting on actual experiences. Therefore the conversation needs to be based on real experiences, which means avoiding what Shakespeare referred to in Othello as “Mere prattle without practice” 9. These experiences need to be discussed actively by all the members of the group, who need to say what they really think, and in an ideal situation the discussion needs to proceed until a high degree of understanding and consensus is reached. Thus in a Socratic Dialogue there is a need to start with the concrete experience and remain in contact with this experience throughout the entire event. Proper insight is gained only when the link between any statement made and personal experience is explicit 10. This means that a Socratic Dialogue is a process which concerns the whole person.

The members of the Socratic Dialogue should attempt a full understanding of each other. This involves much more than simple verbal agreement. Participants should try to be clear about the meaning of what has just been said by testing it against their own experiences. In ideal circumstances the limitations of individual personal experience which stand in the way of a clear understanding should be made conscious, and it is hoped that these limitations will thereby be transcended. Beware of being distracted by less important questions. Following a subsidiary question until a satisfactory answer is found may be useful, but again it may not be. Groups often bring great commitment to their work and gain self-confidence in the power of their intellect or reason. This may mean not giving up when the work is difficult. Sometimes the discussion has to move on, but it may return to the problematic issue again. An honest examination of one’s thoughts and the thoughts of others is essential. This honesty may help with the striving for consensus, although consensus itself may not necessarily arise.

13. Conclusions

As a final thought it is worth pointing out that this type of shared learning will not suit all research topics or indeed all researchers. Some researchers find listening to members of a group such as this tedious. There are also situations in which the Socratic Dialogue will be cumbersome – perhaps if there are too many views for the Socratic Dialogue to be effective. However there are many situations in which the Socratic Dialogue will help tease out research questions from a research topic and will thus be of use to a researcher.

References
Saran, R. and Neisser, B. (2004) Enquiring Minds, Trentham Books, Stoke-on-Trent.

9 Othello, Act 1, Scene 1, line 26.
10 See http://www.sfcp.org.uk/socratic_dialogue.htm viewed 25 March 2006.
