Key Drivers of Organizational Excellence

Edited by S. S. BHAKAR, SHILPA SANKPAL, SAURABH MUKHERJEE

PRESTIGE INSTITUTE OF MANAGEMENT, GWALIOR EXCEL BOOKS

ISBN: ___________________ First Edition: New Delhi, 2009 Copyright © 2009, Prestige Institute of Management, Gwalior

EXCEL BOOKS A-45, Naraina, Phase-I New Delhi-110 028 Published by Anurag Jain for Excel Books, A-45, Naraina, Phase I, New Delhi - 110 028 and printed by him at Excel Printers, C-205, Naraina, Phase I, New Delhi - 110 028

Contents

Prologue
Acknowledgments
About the Editors
Contributors

I. FINANCE

1. Timing Skills of Fund Managers: An Empirical Study of Indian Mutual Funds
   Anindita Chakraborty, Nishchaya Vaswani
2. Sectorwise Analysis of Weak Form Efficiency at BSE
   Ashutosh Verma, Nageshwar Rao
3. Bottom of Pyramid & Investment Approach to Indian Financial Market
   Ashutosh Agarwal, Navita Tripathi
4. Future Perspective in Enterprise Risk Management: A Strategic Approach to Derisking an Enterprise
   Falguni H. Pandya
5. Non-Performing Assets in Indian Banks
   Prangali Godbole, Shipra Agrawal, Vandana Jain, Pushpa Negi
6. Environmental Management Accounting: An Overview
   Anindita Chakraborty, Kavita Indapurkar, Garima Mathur
7. Determinants of Capital Structure Decisions: A Study of Indian Cement Industry
   Pushpa Negi, Shweta Sharma, Shilpa Sankpal
8. Microfinance Interventions in India: Challenges and Prospects
   Shagufta Sheikh
9. Cross Sectional Industrial Performance as a Predictor of Investor's Return: A Case Study of NSE
   Simranjeet Sandhar, Navita Nathani, Umesh Holani
10. Microfinance
    Soma Sharma
11. Paradigms of Working Capital Management
    Naila Iqbal

II. MARKETING

12. Selection of Advertising Appeals in Print Media: A Comparative Study of Products & Services
    S. S. Bhakar, Shailja Bhakar, Amrish Dixit
13. Customer Relationship Management in Insurance Sector: A Comparative Study of L.I.C. & ING Vysya Life Insurance
    Alok Mittal, Ruchi Saxena
14. Challenges Faced by Marketing Managers in 21st Century
    Bharti Venkatesh, Vikas Pandey
15. Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches
    Shilpa Bhakar, Shailja Bhakar, Neha Pareek

16. The Role of Culture in Consumer Behavior
    Hitendra Bargal, Nitin Tanted, Ashish Sharma
17. Neuromarketing - Band Wagon Between Brain and Brand Image
    Ekta Kapur
18. Personality and Purchasing Decisions of Bikes
    Hitendra Bargal, Ashish Sharma, Gayatri Gupta
19. Emerging Trends in the Indian Retailing
    B.V.H. Kameswara Sastry, D.V. Chandra Shekar
20. Relationship Marketing: A Key to Customer Retention
    B.V.H. Kameswara Sastry, A.V.N. Sundar Rao, D.V. Chandra Shekar
21. Retail Transformation - Competition or Conflict!
    Kanwal Thakkar, Swati Tomar
22. Creating Customer Value through Ecotourism for Development of Sustainability in the Rural Regions
    Moumita Mitra, Sanjoy Kumar Pal
23. Retail Management in the Growth of Indian Economy
    Maulik C. Prajapati, Vipul B. Patel
24. CRM to e-CRM: Promises and Pitfalls
    Seema Mehta, Tarika Singh
25. Demarketing: Applications in Un-selling
    Shilpa Sankpal, Praveen Sahu
26. Gender Identity: A Marketer's Perspective
    Shilpa Sankpal, Shaily Anand, Nishchaya Vaswani
27. Factors Affecting Customer Satisfaction Towards Mutual Funds in Gwalior
    Shruti Suri, Manu Chaturvedi
28. Customer Satisfaction as Key Driver of Excellence in Banking Organisations
    Shweta Saraswat
29. Reinforcement of Green Marketing as a Sustainable Marketing & Communications Tool & Practice
    Tanu Narang, Snehal Mistry

III. HUMAN RESOURCE MANAGEMENT

30. Knowledge Management - A Strategic Tool for HRM
    Alpana Trehan, Shine David, Saurabh Mukherjee
31. Knowledge Management: A Strategic Approach Towards Organizational Effectiveness
    Babita Agrawal, Vishal Sood, Shikha Upadhyay
32. Trust and Leadership as Correlates of Team Effectiveness: A Study of Manufacturing Units
    Garima Mathur, Shruti Suri, Silky Vigg
33. Emergence of Transformational Leadership Vision - The Bridge Between Strategic Leadership and Transformational Leadership Style
    Mobin Ul Haque, Waqar Ahmed
34. Knowledge Management and Business Intelligence: Importance of Integrating to Build Organisational Excellence
    Jaydip Chaudhari

35. Relationship of Organizational Citizenship Behavior of Employee and Opportunity Provided to Employee by Employer: Mediating Effects of Trust of Employer on Employee
    Khurram Aziz Fani, Tariq Mahmood
36. Value Based Management: A Leadership Approach Towards Organizational Excellence
    Neera Singh
37. Managing Requirements Through Creative Process Model
    Satish Bansal, K K Pandey
38. Knowledge Acquisition for Marketing Expert System Based on Problems of Marketing Domain
    Snehal Mistry
39. Organizational Culture and Climate
    Prachi Singh
40. Economic Strategies for HRM
    P. Paramashivaiah, S. Aravind

IV. IT APPLICATIONS

41. Data Hiding in Identification and Offset IP Fields
    B.K. Chaurasia, Kuldeep Singh Jadon
42. Computer Security: A Security Model to Secure an Organization Just Adequately
    Kanak Saxena, Binod Kumar
43. Mobile Agent Technology in Intrusion Detection
    B.K. Chaurasia, Robin Singh Bhadoria
44. Implementation of Personalization Service Based on Mobile Web Service Platform
    Neeharika Sengar, Anil Singh, Jainendra Jain
45. Finite-state Automata Based Classification of News Segments
    Nitin Shrivastava
46. Unified Communication: Enabling the Knowledge Worker Through Simplicity
    Somil Mishra
47. Self-Monitoring and Self-Adapting Operating Systems
    Krishan Kant Yadav, Ghanshyam Yadav, Anil Singh
48. Selection of Software Process Model
    Satish Bansal, K K Pandey
49. Sensor Network Security
    Rakesh Prasad Sarang
50. Overview of the Embedded Operating System
    Krishan Kant Yadav
51. Some Futuristic Trends in Data Mining
    Virendra Singh Kushwah, Nitin Paharia
52. A Mathematical Study of Effect of Second Degree Burn on Dermal Parts of Human Body
    Surabhi Sengar
53. Data Quality for Business Intelligence
    Virendra Singh Kushwah, A.K. Solanki

54. Bluetooth Technology Versus Wi-Fi Technology
    Archana Naik, Gurveen Vaseer

V. GENERAL MANAGEMENT

55. Data Management Issues in the Supply Chain
    Gazala Yasmin Ashraf
56. Cross-Cultural Communication: A Golden Gate to International Business
    Girraj Verma, Mala Issac
57. Success: A Methodology to Design Effective E-Commerce Web Sites
    Hema Banati, Monika Bajaj
58. Value Based Management: A New Way for Organizational Excellence
    Kulkarni Sharad Raghunath
59. Impact of Foreign Direct Investments on M.P.'s International Trade
    Nitin Tanted, Hitendra Bargal, S. Mahalati
60. Enterprise Systems in Contemporary Educational Institutes for Administration: An Analysis
    Praveen Kumar
61. Reverse Logistics: Trends, Practice, and Implications
    Salma Ahmed
62. Nemmadi: A Peace of Mind Application for Rural People of Karnataka
    Sameer K. Rohadia
63. Impact of GATS (General Agreement on Trade in Services) on Higher Education System (With Specific Reference to Professional Disciplines)
    Ashwini Renavikar, Abhijeet Tarwade, V V Jog
64. Strategies for Training in Educational Institutions
    Raja K. G., Paramashivaiah
65. Impact of E-Commerce in India
    Mili Singh
66. Resurrecting the Morning Meeting
    Bharti Venkatesh, Deepa Chaterjee
67. Evaluation of Distance Education with Special Reference to Management
    Vijay Kumar Pandey, Praveen Sahu, Krishan Kumar Pandey, Gaurav Jaiswal, Vikhyat Singh
68. Role of Information System and Improved Business Decisions
    K K Pandey, Satish Bansal, Manisha Pandey


Prologue

Excellent organizations are able to identify, monitor and improve the key drivers of excellence and to concentrate all their energy on those drivers; they are therefore able to outperform organizations that spread their energies across every issue facing them. Thus, "Key Drivers of Organizational Excellence" is indeed a pertinent theme in the fiercely competitive business environment faced by every firm in every sphere of business. The challenge that confronts the leadership of every firm is "how to build and lead a practice that consistently performs at the upper levels of its potential?"

Every business is an organizational system of key performance drivers, and the way these drivers function together determines the performance and growth of a firm. The bottom line is that all firms are perfectly designed to get the results they are getting. If they are not getting the results they want, they need to improve the way their key drivers perform.

Many firms are preoccupied with performance indicators but fail to focus on and manage performance drivers. The distinction is critical. Key drivers produce performance; key indicators measure performance. Indicators play an important role, to be sure, but they do not produce performance, they measure it. Well-designed indicators give critical information and feedback about how a firm is performing, but drivers are the cause of performance. Further, you cannot manage indicators; you can only manage drivers. Many leadership teams fail to make this distinction and as a result focus too much attention on performance indicators and too little on performance drivers. It is the alignment of the five drivers, namely Organizational Culture, Strategy, Processes, Structure and People, that matters most.
The best results are produced when there is an organizational culture that aligns and motivates people; an effective strategy that delivers value in response to the priorities of clients; processes and systems that produce efficient, high-quality work; an organization structure that empowers people and facilitates workflow; and a people strategy that recruits, develops, and retains the right people.

Culture is the foundation on which all the other performance drivers are built and from which they draw their energy and strength. The power of culture lies in its ability to engage and align people. Culture is a key source of the "E factors": engagement, energy, enthusiasm, effort, excitement, and excellence. A strong and effective culture is like having additional executives the organization does not have to pay for. An ineffective culture discourages people and weakens the organization; an effective culture engages people and strengthens it. Culture supports and feeds everything a firm and its people do.

Strategy brings focus, discipline, and passion to a firm and its people. The focus of strategy is to understand the competitive environment, to deliver value in response to client priorities, to achieve strategic and operational objectives, to build deep relationships with clients, and to build loyalty. The discipline of strategy is the ability to execute and follow through. The passion of strategy is a deep and unwavering commitment to the services the organization provides, to the client, and to producing results. Strategy must also drive change in response to client priorities and competitive realities in the market.


Processes give a firm the capability to perform and produce results. Business performance can be improved only to the extent that processes allow; individual or team heroics cannot take a firm forward over fundamentally flawed processes. Over time, strong people cannot compensate for weak processes. The best firms work relentlessly within and across business units to drive out non-value-added activity.

The design of a firm is fundamental to its success. The purpose of organizational structure is to support people and processes and to make sure the right people are in the right jobs doing the right things. Attention must be paid to roles, responsibilities, and rewards, and to the informal structure of trust, respect, and interpersonal connection. High-performance firms have great teamwork. They are designed to support their people so that they can work effectively with each other and for clients.

People make the critical difference between mediocrity and consistent high performance. High-performance firms therefore pay great attention to the way they select, develop, and retain their people. They understand that the right strategy with the wrong people will not work. Successful firms select and train for life skills and job skills, and they select people who "fit" their organizational culture. To remain competitive, firms need to create an environment that brings out the best in their people. The success of the business system depends on the effectiveness of the human system that supports it: the firm cannot become what its people are not.

The organizations that remain excellent for long periods of time share certain common traits; they devote more resources and energy to certain key areas. The main theme of the conference, and the title of this book, "Key Drivers of Organizational Excellence", encompasses the process of formulating and implementing these key drivers.
Organizational excellence can no longer be evaluated in terms of current indicators such as financial figures like "profits earned", nor can it be reflected in current marketing results like "market share enjoyed". Excellent organizations are the ones that have a high degree of fit between strategy, structure, systems, style, skills, staff, and shared values. There must be complete internal alignment among all seven Ss. Since all the Ss are interrelated, a change in one has a ripple effect on all the others; an excellent organization therefore ensures that equal attention is paid to all seven elements at the same time. Organizational excellence is a sustainable competitive advantage that enables a business to survive against any competition over a long period of time. When consistent organizational excellence is in place, organizations can achieve industry leadership for decades and generations. High-performing organizations produce extraordinary results that extend beyond customer service and shareholder gains. These companies become agents and models of constructive innovation and create places where people can learn, achieve, and grow. What these companies consistently display is the ability to sustain performance over time and over changing market circumstances. Their records of achievement have a positive slope over decades. Even more significantly, they produce benefits for all stakeholders inclusively: not for the benefit of management at the expense of employees and shareholders, or for employees and shareholders at the expense of suppliers and the community. Leading multinational companies, including some of the Indian multinationals, have consistently displayed all of the above.


The book's chapters are classified into five parts: Finance, Marketing, Human Resource Management, Information Technology Applications and General Management. The first section, Financial Management, has eleven chapters covering timing skills of fund managers, sectorwise analysis of weak form efficiency, bottom of pyramid and investment approach to the Indian financial market, enterprise risk management as a strategic approach to de-risking an enterprise, non-performing assets in Indian banks, environmental management accounting, determinants of capital structure decisions, microfinance interventions in India, cross sectional industrial performance as a predictor of investor's return, and working capital management. The second section, Marketing Management, covers advertising appeals in print media, customer relationship management in the insurance sector, challenges faced by marketing managers, the role of culture in consumer behavior, neuromarketing, personality and purchasing decisions, relationship marketing, retail transformation, retail management and the growth of the Indian economy, CRM to e-CRM, demarketing and its applications in un-selling, customer satisfaction as a key driver of excellence, creating customer value through ecotourism, gender identity from a marketer's perspective, customer satisfaction towards mutual funds, and reinforcement of green marketing.
The third section, Human Resource Management, covers knowledge management, trust and leadership as correlates of team effectiveness, emergence of transformational leadership vision, the relationship of organizational citizenship behavior of employees and the opportunity provided to them by employers, value based management as a leadership approach towards organizational excellence, managing requirements through a creative process model, knowledge acquisition for a marketing expert system based on problems of the marketing domain, organizational culture and climate, and economic strategies for HRM. The fourth section, IT Applications, has fourteen chapters covering data hiding in identification and offset IP fields, computer security, mobile agent technology in intrusion detection, implementation of a personalization service based on a mobile web service platform, finite-state automata based classification of news segments, unified communication, self-monitoring and self-adapting operating systems, selection of software process model, sensor network security, embedded operating systems, futuristic trends in data mining, a mathematical study of the effect of second degree burn, data quality for business intelligence, and Bluetooth technology versus Wi-Fi technology.
The fifth section, General Management, comprises chapters covering data management issues in the supply chain, cross-cultural communication, a methodology to design effective e-commerce web sites, value based management as a new way for organizational excellence, the impact of foreign direct investments on M.P.'s international trade, enterprise systems in contemporary educational institutes for administration, reverse logistics trends, practice, and implications, Nemmadi as a peace of mind application for rural people of Karnataka, the impact of GATS (General Agreement on Trade in Services) on the higher education system with specific reference to professional disciplines,


strategies for training in educational institutions, the impact of e-commerce in India, resurrecting the morning meeting, evaluation of distance education with special reference to management, and the role of information systems in improved business decisions.

An excellent organization strikes the right balance between the whole and the parts: the needs of the organization as a whole, the needs of its internal groups, and the individual needs of its members. This alignment must be maintained in spite of the external changes in the environment in which the organization operates. An excellent organization therefore needs to master the art of change management to remain excellent for long periods of time. The book is expected to provide the right tools for mastering change for organizational excellence.


Acknowledgments

The book is the outcome of the concerted efforts of a dedicated team. I am thankful to all the contributors who have made this academic endeavor fructify and take the shape of a book. It is difficult to name every person who has directly or indirectly contributed to giving this book its current shape. I would like to put on record my sincere thanks to all the authors and co-authors of the research articles included in this book. I would also like to record my sincere appreciation for the efforts put in by the organizing secretary, Prof. Saurabh Mukherjee, and the joint organizing secretary, Prof. Shilpa Sankpal, in organizing the conference and soliciting the contributions to this book. Ms Anita Bhadoria deserves a special mention for her untiring efforts in providing secretarial assistance. Last but not least, I would like to thank Shri Anurag Jain and his team at Excel Books, New Delhi, for their co-operation in bringing out this edited volume.

About the Editors

Dr. S. S. Bhakar is currently Director, Prestige Institute of Management, Gwalior. Prior to this assignment he served at Prestige Institute of Management and Research, Indore, and has been in the profession for the last 15 years. Before joining academics he worked in the Indian Navy for a period of nineteen years. He has conducted a large number of management and faculty development programs for the executives of major business organizations and for faculty members of management institutions across the country. He has published more than sixty research papers, book reviews and cases in national and international refereed journals. He has published two edited books, "Organizational Challenges: Insights and Solution" (Excel Books, New Delhi) as a member of the editorial team, and "Key Drivers of Organizational Excellence" (Excel Books, New Delhi) as sole editor.

Shilpa Sankpal has been working as Lecturer in Marketing with Prestige Institute of Management, Gwalior, since January 2007. She holds an MBA degree from DAVV, Indore. Before entering academics, she worked briefly as a copywriter and a language faculty member. She has been teaching undergraduate and postgraduate management classes since 2005. Her research interests include areas of both Marketing and HR. She has ten publications to her credit, including case studies, book reviews and research papers, and has presented several papers at national and international conferences and seminars.

Saurabh Mukherjee did his B.Sc (Hons.) in Statistics from Banaras Hindu University, Varanasi. After completing a Master's degree in Computer Science from Jiwaji University, Gwalior, M.P., he started his teaching and research career. He subsequently completed a Master of Computer Applications from Guru Jambheshwar University of Science and Technology, Hisar, Haryana. He has been actively engaged in research and development for more than 10 years.
He is a recipient of many awards for best research paper presentation in JSRS. His work on virtual reality has been published by the IEEE Computer Society, USA. He is a reviewer for various international journals and conferences, such as IEEE Transactions on Fuzzy Systems, ITNG (USA) and NISS (China), to name a few. Recently, he was invited to conduct a session on soft computing for digital image processing in China. He has written a book on the C language and has been invited to give an oral collage presentation at IIM, Ahmedabad. His active research areas are digital image processing, soft computing, advanced operating systems and advanced computer graphics. He has received an international scholarship from the Chinese government and is a recognized IGNOU counselor. Recently, he was appointed an Honorary Editorial Board Member of an international peer-reviewed journal (IndJST).

Contributors

A.K. Solanki

Institute of Technology & Management, Gwalior

A.V. N. Sundar Rao

TJPS College, Guntur GWN Communications Pvt. Ltd. Hyderabad

Abhijeet Tarwade

Lecturer, Sinhgad Institute of Management, Pune

Alok Mittal

Govindram Seksaria Institute of Management & Research, Indore

Alpana Trehan

Lecturer, I.M.S, Devi Ahilya University, Indore

Anil Singh

Lecturer, Prestige Institute of Management, Gwalior

Anindita Chakraborty

Lecturer, Prestige Institute of Management, Gwalior

Ashish Sharma

University Institute of Management, RDVV, Jabalpur

Ashutosh Agarwal

Lecturer, Jaipuria Institute of Management, Ghaziabad

Ashutosh Verma

Lecturer, Indian Institute of Forest Management, Bhopal

Ashwini Renavikar

Lecturer, Sinhgad Institute of Management, Pune

B.K.Chaurasia

IITM, Gwalior

B.V.H. Kameswara Sastry

TJPS College, Guntur GWN Communications Pvt. Ltd. Hyderabad

Babita Agrawal

Lecturer, Shri Vaishnav Institute of Management, Indore

Bharti Venkatesh

Lecturer, MANIT, Bhopal

Binod Kumar

Samrat Ashok Technological Institute, Vidisha

D.V. Chandra Shekar

TJPS College , Guntur GWN Communications Pvt. Ltd. Hyderabad

Deepa Chaterjee

Lecturer, MANIT, Bhopal

Deepak Raj

ABV-IIITM, Gwalior

Ekta Kapur

Amity Business School, Amity, Noida, U.P

Falguni H. Pandya

Lecturer, AES Post Graduate Institute of Business Management (HL MBA), Ahmadabad

Garima Mathur

Lecturer, Prestige Institute of Management, Gwalior

Gaurav Jaiswal

Lecturer, Prestige Institute of Management, Gwalior

Gayatri Gupta

Lecturer, Vocational Education, MB. Khalsa College, Indore

Gazala Yasmin Ashraf

Lecturer, Disha Institute of Management & Technology, Raipur

Ghanshyam Yadav

Noida Institute of Engineering & Technology, Gr. Noida (U.P)

Girraj Verma

Lecturer, Prestige Institute of Management, Gwalior

Hema Banati

Lecturer, Dayal Singh College, Agra

Jainendra Jain

Lecturer, Prestige Institute of Management, Gwalior

Jaydip Chaudhari

Department of Business and Industrial Management, Veer Narmada South Gujarat University, Surat

Jitendra Bargal

Lecturer, Patel Institute of Management and Research, Indore

Kanak Saxena

Samrat Ashok Technological Institute, Vidisha

Kanwal Thakkar

Dayanand Academy of Management Studies, Kanpur

Kavita Indapurkar

Lecturer, Boston College for Professional Studies, Gwalior

Khurram Aziz Fani

GIFT Business School, Pakistan

Krishan Kant Yadav

Lecturer, Prestige Institute of Management, Gwalior

Krishan Kumar Pandey

Lecturer, Prestige Institute of Management, Gwalior

Kuldeep Singh Jadon

IIITM, Gwalior

Kulkarni Sharad Raghunath

Lecturer, Chintamanrao Institute of Management Development and Research Vishrambagh, Sangli

Mala Issac

Lecturer, Institute of Technology & Management, Gwalior

Manisha Pandey

Lecturer, Prestige Institute of Management, Gwalior

Manu Chaturvedi

Lecturer, Institute of Technology and Management, Gwalior

Maulik C. Prajapati

V.M. Patel College of Management Studies, Gujarat

Mihika Kulkarni

Shrimati Hiraben Nanavati Institute of Management & Research for Women, Pune


Mili Singh

Lecturer, Prestige Institute of Management, Gwalior

Mobin Ul Haque

University of Management and Technology, Pakistan

Monika Bajaj

Lecturer, Dayal Singh College, Agra

Moumita Mitra

NSHM Knowledge Campus, Arrah

Nageshwar Rao

Lecturer, Jawaharlal Nehru Institute of Business Management, Ujjain

Naila Iqbal

Lecturer, Maulana Azad National Institute of Technology, Bhopal

Navita Nathani

Lecturer, Prestige Institute of Management, Gwalior

Navita Tripathi

Lecturer, Jaipuria Institute of Management, Ghaziabad

Neeharika Sengar

Lecturer, Prestige Institute of Management, Gwalior

Neera Singh

GIDC Rajju Shroff Rofel Institute of Management Studies, Gujarat

Neha Pareek

Student, Prestige Institute of Management, Gwalior

Nishchaya Vaswani

Student, Prestige Institute of Management, Gwalior

Nitin Paharia

Lecturer, Prestige Institute of Management, Gwalior

Nitin Shrivastava

Aditya College, Gwalior

Nitin Tanted

Lecturer, Prestige Institute of Management and Research, Indore

Nivedita Pantawane

SHNIR for Women, Pune

P. Paramashivaiah

Lecturer, Govt. R.C. College of Commerce &Management, Bangalore

Prachi Singh

Disha Institute of Management & Technology, Raipur

Prangali Godbole

Student, Prestige Institute of Management, Gwalior

Praveen Kumar

Lecturer, IES College of Management & Information technology, Mumbai

Praveen Sahu

Lecturer, Prestige Institute of Management, Gwalior

Pushpa Negi

Lecturer, Prestige Institute of Management, Gwalior

Raja K.G. Paramashivaiah

Government R.C. College of Commerce and Management, Bangalore

Rakesh Prasad Sarang

Lecturer, Prestige Institute of Management, Gwalior

Robin Singh Bhadoria

IITM, Gwalior

Ruchi Saxena

Govindram Seksaria Institute of Management & Research, Indore

S. Aravind

Lecturer, Arba Minch University, Arba Minch, Ethiopia

S. Mahalati

Lecturer, School of Economics, D.A.V.V., Indore

Salma Ahmed

Professor, Aligarh Muslim University (AMU), Aligarh

Sameer K. Rohadia

Lecturer, GIDC Rajju Shroff Rofel Institute of Management Studies, Gujrat

Sanjoy Kumar Pal

Kopran Laboratories Ltd. Kolkata

Satish Bansal

Lecturer, Prestige Institute of Management, Gwalior

Saurabh Dixit

Lecturer, IITTM, Gwalior

Saurabh Mukherjee

Lecturer, Prestige Institute of Management, Gwalior

Seema Mehta

Lecturer, Prestige Institute of Management, Gwalior

Shagufta Sheikh

Lecturer, Prestige Institute of Management, Dewas

Shailendra Tripathi

ABV-IIITM, Gwalior

Shailja Bhakar

Lecturer, Prestige Institute of Management, Gwalior

Shaily Anand

Student, Prestige Institute of Management, Gwalior

Shikha Upadhyay

Lecturer, Shri Vaishnav Institute of Management, Indore

Shilpa Bhakar

Lecturer, Prestige Institute of Management, Gwalior

Shilpa Sankpal

Lecturer, Prestige Institute of Management, Gwalior


Shine David

Lecturer, Prestige Institute of Management, Dewas

Shipra Agrawal

Student, Prestige Institute of Management, Gwalior

Shruti Suri

Lecturer, Institute of Information Technology & Management, Gwalior

Shweta Saraswat

RJIT, BSF Academy, Tekanpur

Shweta Sharma

Research Scholar, Jiwaji University, Gwalior


Silky Vigg

Jagannath International Management School, New Delhi

Simranjeet Sandhar

Lecturer, Prestige Institute of Management, Gwalior

Snehal Mistry

C.K. Pithawalla Institute of Management, Surat

Soma Sharma

Lecturer, NYSS Institute of Management & Research, Nagpur

Somil Mishra

Dr. Gaur Hari Singhania Institute of Management & Research, Kanpur

Sunil V. Deshpande

Mangalmay Institute of Management & Technology, Greater Noida

Surabhi Sengar

Lecturer, Prestige Institute of Management, Gwalior

Swati Tomar

Dayanand Academy of Management Studies, Kanpur

Tanu Narang

B.M. College of Management, Surat

Tarika Singh

Lecturer, Prestige Institute of Management, Gwalior

Tariq Mahmood

GIFT Business School, Pakistan

Umesh Holani

Dean, Commerce & Management Department, Jiwaji University, Gwalior

V V Jog

Lecturer, Sinhgad Institute of Management, Pune

V. K. Pandey

Lecturer, Prestige Institute of Management, Gwalior

Vandana Jain

Student, Prestige Institute of Management, Gwalior

Vijay Kumar Pandey

Lecturer, Prestige Institute of Management, Gwalior

Vikas Pandey

Maulana Azad National Institute of Technology (MANIT), Bhopal

Vikhyat Singh

Student, Prestige Institute of Management, Gwalior

Vipin Soni

Alumnus, Prestige Institute of Management, Gwalior

Vipul B. Patel

V.M. Patel College of Management Studies, Gujarat University, Surat

Virendra Singh Kushwah

Lecturer, Prestige Institute of Management, Gwalior

Vishal Sood

Lecturer, Shri Vaishnav Institute of Management, Indore

Wajid Ali

Lecturer, IITTM, Gwalior

Waqar Ahmed

University of Management and Technology, Pakistan

I FINANCE


Timing Skills of Fund Managers: An Empirical Study of Indian Mutual Funds

Anindita Chakraborty, Nishchaya Vaswani

The Indian mutual fund industry has grown tremendously since its inception in 1964 and is now facing volatility due to the ups and downs in the capital market. Earlier it was thought to be a safe investment opportunity, but owing to market volatility it is not so safe now. It is therefore very important that fund managers select securities very carefully, which calls for market timing skills. The aim of the study is to evaluate the timing skills of fund managers of public and private AMCs using the Treynor and Mazuy model, and then to compare different schemes such as equity linked saving schemes (ELSS), equity, balanced and gilt. The study used a sample of 43 public-sector and private-sector mutual fund schemes for the period 2004 to 2007. The study indicates that equity schemes showed good performance, at 10.576% according to the timing skills of fund managers, and that according to portfolio performance equity schemes are superior to other schemes by 1.4355%. Keywords: Market timing, risk-free rate
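The Treynor and Mazuy model used in the abstract tests timing skill by adding a quadratic market term to the standard market-model regression: Rp - Rf = alpha + beta(Rm - Rf) + gamma(Rm - Rf)^2, where a significantly positive gamma indicates that the manager raises market exposure ahead of up-markets. The sketch below shows how the coefficients can be estimated by ordinary least squares; the returns are synthetic and purely illustrative, not drawn from the study's data.

```python
import numpy as np

# Treynor-Mazuy market-timing regression:
#   Rp - Rf = alpha + beta*(Rm - Rf) + gamma*(Rm - Rf)^2 + error
# A positive gamma is read as evidence of market-timing ability.

rng = np.random.default_rng(0)
n = 120                                   # 10 years of monthly observations
rm_excess = rng.normal(0.01, 0.05, n)     # synthetic market excess returns

# Simulate a fund whose manager has some timing ability (gamma > 0)
true_alpha, true_beta, true_gamma = 0.002, 0.90, 0.80
rp_excess = (true_alpha
             + true_beta * rm_excess
             + true_gamma * rm_excess ** 2
             + rng.normal(0.0, 0.001, n))  # idiosyncratic noise

# Ordinary least squares on the design matrix [1, x, x^2]
X = np.column_stack([np.ones(n), rm_excess, rm_excess ** 2])
alpha, beta, gamma = np.linalg.lstsq(X, rp_excess, rcond=None)[0]

print(f"alpha={alpha:.4f}  beta={beta:.3f}  gamma={gamma:.3f}")
```

With real data, `rp_excess` and `rm_excess` would be a scheme's returns and a benchmark index's returns net of the risk-free rate, and the significance of gamma would be judged by its t-statistic rather than its point estimate alone.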

INTRODUCTION

The Indian mutual fund industry made its debut with the setting up of the Unit Trust of India (UTI) in 1964. UTI enjoyed a monopoly for 23 years, until public sector banks and financial institutions were permitted to establish mutual funds in 1987. The industry was brought under the control of SEBI and opened to private sector participation in 1993, and private sector and foreign institutions began setting up mutual funds thereafter. The Indian mutual fund business has passed through four phases. The first phase, from 1964 to 1987, had only one player, the Unit Trust of India, which had total assets of Rs. 6,700 crore at the end of 1988. In the second phase, from 1987 to 1993, eight funds were established (six by banks and one each by LIC and GIC). Total assets under management had grown to Rs. 61,028 crore at the end of 1994, and the number of schemes was 167. The third phase began with the entry of private and foreign companies in


the mutual fund industry in 1993. Kothari Pioneer Mutual Fund was the first fund established by the private sector, in collaboration with a foreign company. The fourth phase began in February 2003, when UTI was bifurcated into two separate entities: the Specified Undertaking of the Unit Trust of India, with assets of Rs. 29,835 crore, and UTI Mutual Fund Ltd, sponsored by SBI, PNB, BOB and LIC, which is registered with SEBI and functions under the Mutual Fund Regulations.

Range of mutual funds: Mutual funds can be classified according to their investment objective:

1. Equity funds: The objective is capital appreciation, achieved by investing in growth-oriented securities with the potential to appreciate in the long run.

2. Income funds: These funds aim at regular returns and periodically distribute the income earned among the investors.

3. Balanced funds: These funds combine fixed income securities and equities, offering both capital appreciation and regular returns to investors.

4. Money market funds: These are set up with the objective of investing exclusively in money market instruments such as treasury bills, commercial papers and CDs.

5. Equity Linked Savings Schemes (ELSS): ELSS were created under Section 88 of the Income Tax Act, 1961, whereby the government offers tax incentives to investors who put their money in these funds, subject to a cap on the investment amount.

6. Gilt funds: Mutual funds that invest in several types of medium- and long-term government securities, in addition to top-quality corporate debt. Gilts originated in Britain.

Growth of the Mutual Fund Industry

[Figure: Growth of the mutual fund industry. Source: AMFI (Association of Mutual Funds in India)]


REVIEW OF LITERATURE

Dowd (1999) tested the market timing ability and performance consistency of hedge fund managers. The Sharpe ratio had been used in the earlier literature to measure the consistency of mutual fund performance; because the returns of financial assets are not normally distributed, an adjusted Sharpe ratio, in which true risk is evaluated by value at risk (VaR), was employed instead. During the Asian flu and the Long Term Capital Management (LTCM) event, hedge fund returns were shocked badly, but not significantly so; hedge fund performance was positively correlated across the pre- and post-event periods, but again not significantly.

Jensen (1972), Admati and Ross (1985), Dybvig and Ross (1985), and Grinblatt and Titman (1989, 1995) accepted that Jensen's alpha fails to detect successful market timing and therefore recommended not using alpha in external performance evaluation. Later work argued that this conclusion is misleading. Setting up a theory of delegated portfolio management in a mean-variance framework with asymmetric information, it proved that alpha is an unbiased performance measure even for market timing funds, and showed that the systematic risk borne by a fund investor consists of two parts: benchmark risk, and management risk resulting from uncertainty about the skills of the fund manager. The extent of management risk depends on what fund investors know about the fund manager's track record, so the performance of mutual funds depends not only on the skills of the fund managers but also on whether they publish their track record.

Lockwood and Kadiyala (1988) and Ferson and Schadt (1996) decompose the conditional expected mutual fund return into five parts. Two parts, selectivity and expert market timing, can be attributed to manager skill, and three to variation in market exposure that private investors can achieve as well. The dynamic model used to estimate the relative importance of the components in the decomposition is a generalization of standard performance evaluation. They found that the restrictions imposed in existing models may lead to different inferences about manager selectivity and timing skill. Results for a sample of 78 asset allocation mutual funds indicate that several funds exhibit significant expert market timing, but for most funds variation in market exposure does not yield any economically significant return. Funds with high turnover and expense ratios are associated with managers with better skills.

Lee and Rahman evaluated the performance of 15 U.S.-based international mutual funds for the period 1980-89, taking the selectivity and timing skills of fund managers as the primary criteria and using the Treynor and Mazuy model. They found that many international mutual funds outperformed the U.S. market benchmark, perhaps due to the expanded diversification opportunities they provide. When a world market index is used as the benchmark, fund managers show relatively poor performance in terms of selectivity skills; however, there is strong evidence that some managers rely rather heavily on timing skills in international capital markets.

In keeping with these economic frameworks, several scholars have investigated whether mutual funds outperform the market. Friend, Brown, Herman and Vickers (1962) did the first extensive and systematic study of mutual funds, covering 152 mutual funds with annual data from 1953 to 1958. While the study did not adjust the benchmark portfolio for risk (beta), the authors did adjust the market return to be comparable to the funds.
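The VaR-based adjustment to the Sharpe ratio that Dowd (1999) advocates can be sketched in a few lines. The function and the figures below are illustrative assumptions for exposition, not the paper's implementation: mean excess return is divided by the historical VaR of the return series rather than by its standard deviation.

```python
import numpy as np

def var_adjusted_sharpe(returns, rf=0.0, alpha=0.05):
    """Sharpe-style ratio that divides mean excess return by historical
    value at risk (VaR) instead of standard deviation."""
    r = np.asarray(returns, dtype=float)
    var = -np.quantile(r, alpha)   # loss not exceeded with (1 - alpha) confidence
    return (r.mean() - rf) / var

# Hypothetical monthly fund returns (in per cent); compare the two ratios
rng = np.random.default_rng(1)
monthly = rng.normal(0.8, 3.0, 60)
classic = (monthly.mean() - 0.5) / monthly.std(ddof=1)
adjusted = var_adjusted_sharpe(monthly, rf=0.5)
```

Because VaR reacts to the left tail of the return distribution rather than to overall dispersion, the two ratios can rank non-normally distributed funds differently.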


Jensen's (1968) classic study developed an absolute measure of performance based upon the Capital Asset Pricing Model and reported that mutual funds did not appear to achieve abnormal performance when transaction costs were taken into account. Fama (1972) developed a methodology for evaluating the investment performance of managed portfolios and suggested that overall performance could be broken down into several components.

Gupta (2000) examined the market timing abilities of Indian fund managers using weekly NAV data for 73 mutual fund schemes from 1994 to 1999 and found that the results do not support the hypothesis that managers of close-ended schemes can time the market. Gupta (2002) further examined the growth, regulatory framework and performance evaluation of Indian mutual funds and reported poor performance. Zakri Y. Bello (2005) matched a sample of socially responsible stock mutual funds to randomly selected conventional funds of similar net assets to investigate differences in the characteristics of assets held, the degree of portfolio diversification, and the effects of diversification on investment performance. The study found that socially responsible funds do not differ significantly from conventional funds on any of these attributes, and that the effect of diversification on investment performance does not differ between the two groups. Both groups underperformed the Domini 400 Social Index and the S&P 500 during the study period.

Empirically, a number of studies have analysed the ability of US mutual funds to time the market, and most of these analyses agree that mutual funds do not possess timing ability. Using a quadratic regression, Treynor and Mazuy (1966), for example, found that for only 1 out of 57 mutual funds could the hypothesis of no timing ability be rejected. Veit and Cheney (1982) concluded that in general mutual funds do not change their characteristic lines in bull and bear markets, and that for the majority of those funds that did change their characteristic lines the timing was unsuccessful; only 3 out of 74 mutual funds timed the market successfully. These conclusions are confirmed by Henriksson (1984), who applied parametric as well as non-parametric techniques developed by Merton (1981) and Henriksson and Merton (1981). Using an extended version of the Henriksson and Merton model, Connor and Korajczyk (1991) and Hendricks, Patel and Zeckhauser (1993) confirmed that US mutual funds do not possess timing ability. Grinblatt and Titman (1994) analysed performance using the Jensen measure and the Treynor and Mazuy measure, and concluded that the simple Jensen measure performs as well as the Treynor and Mazuy measure. Goetzmann, Ingersoll and Ivkovic (2000) likewise found no evidence of significant timing ability for US mutual funds using an adjusted Henriksson and Merton method.

Although the vast majority of studies of mutual fund performance find that performance is neutral, there is evidence in favour of performance persistence, i.e. previously top-performing funds are also likely to be top-performing funds in the short-term future, a phenomenon Hendricks et al. (1993) called "hot hands" in mutual funds. Edelen (1999) indicates that mutual funds are less exposed to the stock market because they need cash in their portfolios in order to accommodate the inflow and outflow of investors' money. The expected return attributable to this component is the long-term exposure multiplied by the conditionally expected risk premium; the average fund return related to the long-term exposure is 34 bp per month.


Wermers (2000), using a holdings-based decomposition, also finds that high-turnover funds have higher average returns than low-turnover funds, though funds with average turnover have the lowest selectivity as measured by the Carhart (1997) four-factor alpha. In contrast, Elton, Gruber, Das and Hlavka (1993) find that Jensen's alphas with respect to a three-factor model (market, small-cap, and bonds) are lower for funds with higher turnover or higher expense ratios. In a Bayesian framework, Busse and Irvine (2002) model investors' prior beliefs about management skill as centered around the negative of the expense ratio; findings that manager skill is positively related to expense ratios thus provide evidence against the investor priors in the Busse and Irvine model.

Glosten and Jagannathan (1994), among others, offer an economic explanation for such results. Managers might purchase put options, which reduce market exposure when stock returns are low, implying timing ability. Obviously, this type of timing is artificial and unrelated to manager skill, and the cost of buying put options is reflected in lower manager selectivity. They also examined the combination of returns due to selectivity and expert timing to gauge the potential expected return difference that investors in a fund could obtain due to good management.

Despite the large literature on mutual fund performance based on monthly data, little is known about performance measures with daily data. Bollen and Busse (2001) compared the daily and monthly market timing abilities of managers. Their results indicate that the explanatory power of their model is higher with daily data than with monthly data; furthermore, the proportion of mutual fund managers with timing abilities is higher with daily returns than with monthly returns, which could explain why many investors choose to invest their savings in the mutual fund industry. Busse (2001) showed that the autocorrelation of daily returns introduces a bias in the estimation of monthly volatility. In light of these results, using daily returns not only seems more accurate but also changes the assessment of active management in mutual funds. It should be noted that both Bollen and Busse (2001) and Busse (2001) assume that risk measures estimated from the second moments of portfolio returns are unconditional, i.e. constant over time.

The general conclusion reached by Jensen (1968), Malkiel (1995) and Detzler (1999) was that mutual funds in the US, net of expenses, had not been able to generate excess returns. Using gross returns, superior performance could be identified, but it was just about equal to expenses, implying a cost elasticity of approximately -1 (Blake, Elton and Gruber, 1993; Detzler, 1999). This conclusion is very much in line with the Grossman and Stiglitz (1980) theory of informationally efficient markets, where informed investors are compensated for their information gathering.

Objectives of the Study

1. To evaluate the timing skills of the fund managers.

2. To evaluate the timing skills of the fund managers on the basis of peer group comparison.

3. To evaluate the portfolio performance of mutual funds.


RESEARCH METHODOLOGY

1. Risk-free rate of return: The implicit yields on 91-day treasury bills have been used as a surrogate for the risk-free rate of return. Since 91-day T-bill yields are available on an auction basis, the data were taken from the RBI bulletin website.

2. Model specification: The Treynor and Mazuy (1966) model was used to test the market timing skills of the fund managers. Algebraically, the model is specified as:

   (Rp - Rf) = α + βp(Rm - Rf) + γp(Rm - Rf)² + εp

   where Rp = return on mutual fund P for period T; Rf = return on the risk-free asset for period T; Rm = return on the market index for period T; α = intercept, capturing the return from stock selection; βp = systematic risk of P for period T; γp = gamma parameter, capturing the timing skill of the fund manager; and εp = error term. Evidence of market timing is provided by a significant and positive value of γp, since the quadratic term makes the characteristic line steeper as Rm - Rf grows larger.

3. Market proxy: The CNX Nifty Junior index has been used as the market portfolio return for comparing the timing skills of fund managers.

4. Source of NAVs: The NAVs of the different schemes were collected from the AMFI, Moneycontrol and Indiainfoline databases.

5. The monthly NAVs of the various schemes were collected, and the following formula was applied to calculate the monthly excess return:

   Excess Return = (Today's NAV - Previous NAV) / Previous NAV × 100

   For scheme-wise excess return, the average of all monthly excess return values was calculated. As the study also uses yearly sub-periods, the yearly average of monthly excess returns was calculated as well.
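The γp parameter of the Treynor and Mazuy model described above can be estimated by an ordinary least squares regression of monthly excess fund returns on excess market returns and their square. The sketch below uses synthetic data with a known gamma; all figures are hypothetical and not from the study's sample.

```python
import numpy as np

def treynor_mazuy(rp, rf, rm):
    """OLS estimate of (Rp - Rf) = alpha + beta*(Rm - Rf) + gamma*(Rm - Rf)**2 + e."""
    y = np.asarray(rp, dtype=float) - np.asarray(rf, dtype=float)  # excess fund return
    x = np.asarray(rm, dtype=float) - np.asarray(rf, dtype=float)  # excess market return
    X = np.column_stack([np.ones_like(x), x, x ** 2])              # intercept, linear, quadratic
    alpha, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    return alpha, beta, gamma

# Synthetic 36-month sample for a manager whose true timing parameter is 2.5
rng = np.random.default_rng(0)
rm = rng.normal(1.0, 4.0, 36)          # monthly market returns, in per cent
rf = np.full(36, 0.5)                  # flat 91-day T-bill proxy, in per cent
rp = rf + 0.2 + 1.1 * (rm - rf) + 2.5 * (rm - rf) ** 2   # noiseless fund returns
alpha, beta, gamma = treynor_mazuy(rp, rf, rm)
print(round(gamma, 4))                 # recovers the true gamma, 2.5
```

With real NAV-derived returns the fit is of course noisy, and the significance of the estimated γp would be judged from its t-statistic.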

Data and Their Sources

The data consist of the monthly Net Asset Values (NAVs) of 43 mutual fund schemes. The analysis covers Equity, ELSS, Balanced and Gilt schemes, both open- and close-ended. The sample comprises 8 close-ended and 35 open-ended schemes; based on their investment objectives, 6 dividend schemes, 29 growth schemes, 2 saving schemes and 6 tax plan schemes were selected for the study. The choice of the sample was based on the following criteria:

1. Availability of reasonable information for a given sample scheme.

2. The sample should represent the mutual fund industry of India.


RESULTS AND DISCUSSION

Analysis of Timing Parameters (Annexure 1)

The timing parameter, γp, varied between a high of 12.88079 (Pru ICICI FMCG Fund - (G)) and a low of 1.521354 (BOB ELSS '96), indicating large variation in the timing abilities of the fund managers of the different mutual funds. Pru ICICI FMCG Fund - (G), with the highest γp, shows superior performance in terms of timing ability, and in terms of rate of return (Annexure 1) it also ranks number 1. For several other funds, however, superior timing ability appears to have been offset by far inferior stock selection: LICMF G-Sec Fund - PF Plan (G) (Rank 28), Reliance Vision Fund - (G) (Rank 32) and Birla Sun Life Govt Sec - Long Term (G) (Rank 42) are inferior performers in terms of rate of return, yet the market timing abilities of their fund managers are superior. Conversely, Birla Taxplan '98 (Rank 4), DSP ML Opportunities Fund (G) (Rank 5) and Morgan Stanley Growth Fund (Rank 12) have shown superior performance in terms of rate of return (Annexure 1) but unsatisfactory performance in terms of timing ability. These facts indicate that these fund managers have used their stock selection skills far better than their market timing abilities. None of the sample funds showed a negative timing parameter in the given period. Thus, as regards the market timing abilities of the fund managers of the sample mutual funds, it may be inferred that they have been able to generate superior performance in terms of timing ability.

Analysis of Timing Parameters of Peer Groups (Annexure 2)

Balanced Schemes

Eight balanced schemes were taken in the study. Among them, Magnum Children Benefit Plan stands 5th (-1.640008684) in portfolio performance but 1st (2.430652112) in market timing. BOB Balance (G) is 3rd (-0.406125808) in rate of return but 5th (2.211335133) in market timing, while Pru ICICI Balanced Fund - (G) stands 1st (-0.081945186) in rate of return and 2nd (2.376422859) in market timing.

ELSS (Equity Linked Saving Schemes)

Eleven ELSS schemes were taken in the study. Among them, Birla Taxplan '98 stands 2nd (0.617447144) in portfolio performance and 3rd (3.402448687) in market timing; Franklin India Taxshield 99 is 4th (0.386733108) in rate of return but 2nd (3.438612057) in market timing; and Pru ICICI Tax Plan - (G) stands 1st (1.237399191) in rate of return and also 1st (6.064949643) in market timing.

Equity Schemes

Thirteen equity schemes were taken in the study. Among them, Magnum Index Fund (G) stands 7th (0.003291184) in portfolio performance but 9th (2.413193877) in market timing; Franklin FMCG Fund - (G) is 6th (0.038189116) in rate of return but 5th (2.849780612) in market timing; Pru ICICI FMCG Fund - (G) is 1st (1.43558768) in rate of return and also 1st (12.88079491) in market timing; and Reliance Vision Fund - (G) is 13th (-1.841563357) in rate of return but 3rd (3.113782834) in market timing.

Gilt Schemes

Eleven gilt schemes were taken in the study. Among them, Birla Sun Life Govt Sec - Long Term (G) stands 10th (-2.138699423) in portfolio performance but 6th (2.492607847) in market timing; DSP ML G-Sec Fund - A (G) is 3rd (-2.079242357) in rate of return but 11th (2.403697866) in market timing; and LICMF G-Sec Fund - PF Plan (G) is 1st (-1.253848576) in rate of return and also 1st (3.504542355) in market timing.

Analysis of Portfolio Performance on Stock Selection

Annexure 3 shows the performance of the portfolios across the Balanced, ELSS, Equity and Gilt schemes. Portfolio performance is the difference between the excess return of the portfolio and the market return. On this measure, some mutual fund schemes performed well. Pru ICICI FMCG Fund - (G) (1.43558768) is the best performer among all the mutual funds, and hence also among the equity funds. Pru ICICI Tax Plan - (G) (1.237399191) is the second best performer overall and the best among the ELSS schemes. Franklin India Opportunities Fund - (G) (0.820902746) takes 3rd position overall and 2nd among the equity schemes. Pru ICICI Balanced Fund - (G) (-0.081945186) is 16th overall but 1st among the balanced schemes, and LICMF G-Sec Fund - PF Plan (G) (-1.253848576) is 28th overall but 1st among the gilt schemes.
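The ranking logic behind Annexure 3, sorting schemes on Rp - Rm (scheme return minus market return), can be sketched as follows. The scheme names and figures here are hypothetical, used only to illustrate the computation:

```python
# Each scheme maps to (Rp, Rm): its return and the market return, in per cent.
# All names and numbers are illustrative, not from the study's sample.
returns = {
    "Scheme A": (1.44, 0.90),
    "Scheme B": (0.62, 0.90),
    "Scheme C": (-1.25, 0.90),
}
# Portfolio performance on stock selection = Rp - Rm
performance = {name: rp - rm for name, (rp, rm) in returns.items()}
ranking = sorted(performance, key=performance.get, reverse=True)
print(ranking)   # ['Scheme A', 'Scheme B', 'Scheme C']
```

A positive Rp - Rm indicates that stock selection added value over simply holding the market portfolio; a negative value indicates underperformance, as for most gilt schemes in the study.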

CONCLUSION

In today's scenario investors seek returns from a range of financial instruments such as shares, fixed deposits, bonds and mutual funds. Mutual funds are preferred by retail investors for the relative safety of the investment. Their disadvantage is that the fund managers who allocate funds and design the portfolio are sometimes unable to take decisions in line with the trend of the capital market (e.g. the BSE and NSE index trends), and investors then suffer losses. In this study the researchers analysed the timing skills of fund managers. Out of 43 schemes, 30 mutual fund schemes showed positive results on the basis of the timing skills of their fund managers; hence the first objective of the study, to analyse the timing skills of fund managers, has been met. The findings also indicate that some mutual fund schemes did not perform well in terms of gamma (market timing) but performed well on rate of return and other indicators. The market return and gamma of the portfolios were compared to evaluate the performance of the mutual fund schemes, and the mutual funds that earned higher rates of return were compared against the market return of the portfolio. The study should help investors in mutual funds understand that the timing skill of the fund manager is an important basis for selecting mutual funds for investment.


References

Admati, A. and Ross, S. (1985), Measuring Investment Performance in a Rational Expectations Equilibrium Model, Journal of Business, 58, 1-26.
Blake, C.R., Elton, E.J. and Gruber, M.J. (1993), The Performance of Bond Mutual Funds, Journal of Business, 66, 371-403.
Busse, J.A. and Irvine, P.J. (2002), Bayesian Alphas and Mutual Fund Persistence, Goizueta Business School, SSRN (342720).
Carhart, M. (1997), On Persistence in Mutual Fund Performance, Journal of Finance, 52(1), 57-82.
Connor, G. and Korajczyk, R.A. (1991), The Attributes, Behaviour, and Performance of U.S. Mutual Funds, Review of Quantitative Finance and Accounting, 1, 5-26.
Detzler, M.L. (1999), The Performance of Global Bond Mutual Funds, Journal of Banking and Finance, 23, 1195-1217.
Dowd, K. (1999), A Value at Risk Approach to Risk-Return Analysis, Journal of Portfolio Management, 25(4), 60-67.
Dybvig, P.H. and Ross, S.A. (1985), The Analytics of Performance Measurement Using a Security Market Line, Journal of Finance, 40, 401-416.
Edelen, R.M. (1999), Investor Flows and the Assessed Performance of Open-End Mutual Funds, Journal of Financial Economics, 53(3), 439-466.
Fama, E.F. (1972), Components of Investment Performance, Journal of Finance, 27.
Ferson, W. and Schadt, R. (1996), Measuring Fund Strategy and Performance in Changing Economic Conditions, Journal of Finance, 51, 425-462.
Friend, I., Brown, F.E., Herman, E. and Vickers, D. (1962), A Study of Mutual Funds, U.S. Securities and Exchange Commission.
Glosten, L.R. and Jagannathan, R. (1994), A Contingent Claim Approach to Performance Evaluation, Journal of Empirical Finance, 1(2), 133-160.
Goetzmann, W.N., Ingersoll Jr., J.E. and Ivkovic, Z. (2000), Monthly Measurement of Daily Timers, Journal of Financial and Quantitative Analysis, 35, 257-290.
Grinblatt, M. and Titman, S. (1989), Portfolio Performance Evaluation: Old Issues and New Insights, Review of Financial Studies, 2, 393-421.
Grinblatt, M. and Titman, S. (1994), A Study of Monthly Mutual Fund Returns and Performance Evaluation Techniques, Journal of Financial and Quantitative Analysis, 29, 419-444.
Grossman, S. and Stiglitz, J.E. (1980), On the Impossibility of Informationally Efficient Markets, American Economic Review, 70, 393-408.
Gupta, A. (2000), Market Timing Abilities of Indian Mutual Fund Managers: An Empirical Study, The ICFAI Journal of Applied Finance, 6(2), 1243-50.
Hendricks, D., Patel, J. and Zeckhauser, R. (1993), Hot Hands in Mutual Funds: Short-Run Persistence of Relative Performance, Journal of Finance, 48, 93-130.
Henriksson, R.D. (1984), Market Timing and Mutual Fund Performance: An Empirical Investigation, Journal of Business, 57, 73-96.
Henriksson, R.D. and Merton, R.C. (1981), On Market Timing and Investment Performance II: Statistical Procedures for Evaluating Forecasting Skills, Journal of Business, 54, 513-533.
Jensen, M.C. (1968), The Performance of Mutual Funds in the Period 1945-1964, Journal of Finance, 23.
Malkiel, B.G. (1995), Returns from Investing in Equity Mutual Funds 1971 to 1991, Journal of Finance, 50, 549-572.
Veit, E.T. and Cheney, J.M. (1982), Are Mutual Funds Market Timers?, The Journal of Portfolio Management, Winter, 35-42.
Wermers, R. (2000), Mutual Fund Performance: An Empirical Decomposition into Stock Picking Talent, Style, Transactions Costs, and Expenses, Journal of Finance, 55(4), 1655-1703.


Annexure 1: Analysis of Timing Parameters, Showing Scheme-wise Excess Return

Fund Name | Fund Type | Scheme | Option | Average Return Apr 04 - Mar 07
Pru ICICI FMCG Fund - (G) | Open | Equity | Growth | 3.739935822
Pru ICICI Tax Plan - (G) | Open | ELSS | Growth | 3.541747333
Franklin India Opportunities Fund - (G) | Open | Equity | Growth | 3.125250888
Birla Taxplan '98 | Close | ELSS | Taxplan | 2.921795286
DSP ML Opportunities Fund (G) | Open | Equity | Growth | 2.889692801
BOB ELSS '97 | Close | ELSS | Saving | 2.800195939
DSP ML Top 100 Equity Fund (G) | Open | Equity | Growth | 2.763321596
Franklin India Taxshield 99 | Close | ELSS | Taxplan | 2.69108125
Birla Advantage Fund (G) | Open | Equity | Growth | 2.612132305
Franklin India Taxshield 98 | Close | ELSS | Taxplan | 2.561009885
Franklin India Taxshield - (G) | Open | ELSS | Growth | 2.539162313
Morgan Stanley Growth Fund | Close | ELSS | Taxplan | 2.342757491
Franklin FMCG Fund - (G) | Open | Equity | Growth | 2.342537258
Magnum Index Fund (G) | Open | Equity | Growth | 2.307639326
Sundaram Taxsaver 97 | Close | ELSS | Taxplan | 2.26945812
Pru ICICI Balanced Fund - (G) | Open | Balanced | Growth | 2.222402957
Reliance Banking Fund - (G) | Open | Equity | Growth | 2.212146301
Sundaram Tax Saver 98 | Close | ELSS | Taxplan | 2.141836438
DSP ML Balanced Fund - (G) | Open | Balanced | Growth | 2.081854134
H D F C Equity Fund | Open | Equity | Dividend | 1.975532266
DSP ML Equity Fund | Open | Equity | Dividend | 1.916072952
BOB Balance (G) | Open | Balanced | Growth | 1.898222335
LICMF Index Fund - Nifty Plan (G) | Open | Equity | Growth | 1.71433114
LICMF Balanced Fund - (C) | Open | Balanced | Dividend | 1.662457272
Birla Balance (G) | Open | Balanced | Growth | 1.652678357
LICMF Equity Fund - (G) | Open | Equity | Growth | 1.544663274
LICMF Tax Plan - (G) | Open | ELSS | Growth | 1.513469777
LICMF G-Sec Fund - PF Plan (G) | Open | Gilt Fund | Growth | 1.050499566
Magnum Children Benefit Plan | Open | Balanced | Dividend | 0.664339458
LICMF Children's Fund | Open | Balanced | Dividend | 0.564047365
UTI-CCP Balanced Fund | Open | Balanced | Dividend | 0.5395184
Reliance Vision Fund - (G) | Open | Equity | Growth | 0.462784786
BOB ELSS '96 | Close | ELSS | Saving | 0.413261568
Pru ICICI Gilt Fund (Investment) - (G) | Open | Gilt Fund | Growth | 0.285975726
DSP ML G-Sec Fund - A (G) | Open | Gilt Fund | Growth | 0.225105785
Magnum Gilt Fund - LTP - PF (G) | Open | Gilt Fund | Growth | 0.214766095
LICMF G-Sec Fund - (G) | Open | Gilt Fund | Growth | 0.199444259
Magnum Gilt Fund - Long term (G) | Open | Gilt Fund | Growth | 0.192797238
BOB Gilt Fund (G) | Open | Gilt Fund | Growth | 0.188363674
Birla Gilt Plus - PF Plan (G) | Open | Gilt Fund | Growth | 0.183937985
Magnum Gilt Fund - LTP - PF 1 yr (G) | Open | Gilt Fund | Growth | 0.176637614
Birla Sun Life Govt Sec - Long Term (G) | Open | Gilt Fund | Growth | 0.165648719
UTI-G-Sec Fund - (G) | Open | Gilt Fund | Growth | 0.14571291


Annexure 2: Analysis of Timing Parameters of Peer Groups (Portfolio Performance and Timing Skills within Peer Group)

Scheme | Type | Portfolio Performance | Portfolio Rank | Timing Performance | Timing Rank
BOB Balance (G) | Balanced | -0.406125808 | 3 | 2.211335133 | 5
DSP ML Balanced Fund - (G) | Balanced | -0.222494008 | 2 | n/a | 3
LICMF Balanced Fund - (C) | Balanced | -0.64189087 | 4 | 1.937249567 | 6
LICMF Children's Fund | Balanced | -1.740300777 | 6 | 2.241288165 | 4
Magnum Children Benefit Plan | Balanced | -1.640008684 | 5 | 2.430652112 | 1
Pru ICICI Balanced Fund - (G) | Balanced | -0.081945186 | 1 | 2.376422859 | 2
UTI-CCP Balanced Fund | Balanced | -1.764829742 | 7 | 1.6484576 | 7
Birla Taxplan '98 | ELSS | 0.617447144 | 2 | 3.402448687 | 3
BOB ELSS '96 | ELSS | -1.891086574 | 11 | 1.521353715 | 11
BOB ELSS '97 | ELSS | 0.495847797 | 3 | 3.348863815 | 4
Franklin India Taxshield - (G) | ELSS | 0.234814171 | 6 | 2.508941652 | 6
Franklin India Taxshield 98 | ELSS | 0.256661743 | 5 | 2.637894949 | 5
Franklin India Taxshield 99 | ELSS | 0.386733108 | 4 | 3.438612057 | 2
LICMF Tax Plan - (G) | ELSS | -0.790878365 | 10 | 1.663721456 | 10
Morgan Stanley Growth Fund | ELSS | 0.038409349 | 7 | 2.311464137 | 7
Pru ICICI Tax Plan - (G) | ELSS | 1.237399191 | 1 | 6.064949643 | 1
Sundaram Tax Saver 98 | ELSS | -0.162511704 | 9 | 2.155531073 | 9
Sundaram Taxsaver 97 | ELSS | -0.034890022 | 8 | 2.278036393 | 8
Birla Advantage Fund (G) | Equity | 0.307784163 | 5 | 2.530691861 | 7
Birla Balance (G) | Equity | -0.651669785 | 12 | 1.92185533 | 13
DSP ML Equity Fund | Equity | -0.38827519 | 10 | 2.104719038 | 11
DSP ML Opportunities Fund (G) | Equity | 0.585344658 | 3 | 2.978917465 | 4
DSP ML Top 100 Equity Fund (G) | Equity | 0.458973454 | 4 | 2.804833009 | 6
Franklin FMCG Fund - (G) | Equity | 0.038189116 | 6 | 2.849780612 | 5
Franklin India Opportunities Fund - (G) | Equity | 0.820902746 | 2 | 3.890968818 | 2
H D F C Equity Fund | Equity | -0.328815876 | 9 | 2.386266385 | 10
LICMF Equity Fund - (G) | Equity | -0.759684868 | 13 | 1.604094647 | 14
LICMF Index Fund - Nifty Plan (G) | Equity | -0.590017002 | 11 | 2.042268216 | 12
Magnum Index Fund (G) | Equity | 0.003291184 | 7 | 2.413193877 | 9
Pru ICICI FMCG Fund - (G) | Equity | 1.43558768 | 1 | 12.88079491 | 1
Reliance Banking Fund - (G) | Equity | -0.092201842 | 8 | 2.46805196 | 8
Reliance Vision Fund - (G) | Equity | -1.841563357 | 14 | 3.113782834 | 3
Birla Gilt Plus - PF Plan (G) | Gilt Fund | -2.120410157 | 8 | 2.421150389 | 10
Birla Sun Life Govt Sec - Long Term (G) | Gilt Fund | -2.138699423 | 10 | 2.492607847 | 6
BOB Gilt Fund (G) | Gilt Fund | -2.115984468 | 7 | 2.498783581 | 5
DSP ML G-Sec Fund - A (G) | Gilt Fund | -2.079242357 | 3 | 2.403697866 | 11
LICMF G-Sec Fund - (G) | Gilt Fund | -2.104903883 | 5 | 2.452332582 | 8
LICMF G-Sec Fund - PF Plan (G) | Gilt Fund | -1.253848576 | 1 | 3.504542355 | 1
Magnum Gilt Fund - Long term (G) | Gilt Fund | -2.111550904 | 6 | 2.50021139 | 4
Magnum Gilt Fund - LTP - PF (G) | Gilt Fund | -2.089582048 | 4 | 2.520597426 | 3
Magnum Gilt Fund - LTP - PF 1 yr (G) | Gilt Fund | -2.127710528 | 9 | 2.482452535 | 7
Pru ICICI Gilt Fund (Investment) - (G) | Gilt Fund | -2.018372416 | 2 | 2.57391504 | 2
UTI-G-Sec Fund - (G) | Gilt Fund | -2.158635233 | 11 | 2.425849667 | 9


Annexure 3: Analysis of Portfolio Performance on Stock Selection (Portfolio Performance = Rp - Rm)

Scheme | Type | Performance | Ranking
BOB Balance (G) | Balanced | -0.406125808 | 22
DSP ML Balanced Fund - (G) | Balanced | -0.222494008 | 19
LICMF Balanced Fund - (C) | Balanced | -0.64189087 | 24
LICMF Children's Fund | Balanced | -1.740300777 | 30
Magnum Children Benefit Plan | Balanced | -1.640008684 | 29
Pru ICICI Balanced Fund - (G) | Balanced | -0.081945186 | 16
UTI-CCP Balanced Fund | Balanced | -1.764829742 | 31
Birla Taxplan '98 | ELSS | 0.617447144 | 4
BOB ELSS '96 | ELSS | -1.891086574 | 33
BOB ELSS '97 | ELSS | 0.495847797 | 6
Franklin India Taxshield - (G) | ELSS | 0.234814171 | 11
Franklin India Taxshield 98 | ELSS | 0.256661743 | 10
Franklin India Taxshield 99 | ELSS | 0.386733108 | 8
LICMF Tax Plan - (G) | ELSS | -0.790878365 | 27
Morgan Stanley Growth Fund | ELSS | 0.038409349 | 12
Pru ICICI Tax Plan - (G) | ELSS | 1.237399191 | 2
Sundaram Tax Saver 98 | ELSS | -0.162511704 | 18
Sundaram Taxsaver 97 | ELSS | -0.034890022 | 15
Birla Advantage Fund (G) | Equity | 0.307784163 | 9
Birla Balance (G) | Equity | -0.651669785 | 25
DSP ML Equity Fund | Equity | -0.38827519 | 21
DSP ML Opportunities Fund (G) | Equity | 0.585344658 | 5
DSP ML Top 100 Equity Fund (G) | Equity | 0.458973454 | 7
Franklin FMCG Fund - (G) | Equity | 0.038189116 | 13
Franklin India Opportunities Fund - (G) | Equity | 0.820902746 | 3
H D F C Equity Fund | Equity | -0.328815876 | 20
LICMF Equity Fund - (G) | Equity | -0.759684868 | 26
LICMF Index Fund - Nifty Plan (G) | Equity | -0.590017002 | 23
Magnum Index Fund (G) | Equity | 0.003291184 | 14
Pru ICICI FMCG Fund - (G) | Equity | 1.43558768 | 1
Reliance Banking Fund - (G) | Equity | -0.092201842 | 17
Reliance Vision Fund - (G) | Equity | -1.841563357 | 32
Birla Gilt Plus - PF Plan (G) | Gilt Fund | -2.120410157 | 40
Birla Sun Life Govt Sec - Long Term (G) | Gilt Fund | -2.138699423 | 42
BOB Gilt Fund (G) | Gilt Fund | -2.115984468 | 39
DSP ML G-Sec Fund - A (G) | Gilt Fund | -2.079242357 | 35
LICMF G-Sec Fund - (G) | Gilt Fund | -2.104903883 | 37
LICMF G-Sec Fund - PF Plan (G) | Gilt Fund | -1.253848576 | 28
Magnum Gilt Fund - Long term (G) | Gilt Fund | -2.111550904 | 38
Magnum Gilt Fund - LTP - PF (G) | Gilt Fund | -2.089582048 | 36
Magnum Gilt Fund - LTP - PF 1 yr (G) | Gilt Fund | -2.127710528 | 41

Pru ICICI Gilt Fund (Investment) - (G)

Gilt Fund

-2.018372416

34

UTI-G-Sec Fund - (G)

Gilt Fund

-2.158635233

43

2

Sectorwise Analysis of Weak Form Efficiency at BSE

Ashutosh Verma, Nageshwar Rao

The efficient market hypothesis, with its three forms, deals with the informational efficiency of stock markets. Efficiency of stock markets is necessary to ensure the allocation of scarce capital resources. It is all the more important for emerging economies like India, where efforts are being made to integrate the stock markets with the global markets. This paper examines the weak form efficiency of the companies included in the BSE 100 index on a sectoral basis. The companies have been divided into eleven major sectors, and the period covered is from 1st April 1996 to 31st March 2001. The findings of serial correlation indicate that the percentage of first order coefficients is significant in all the sectors except the cement industry. This means that prices are correlated on a day-to-day basis. The percentage of overall significant coefficients is above 10% in all the sectors, the highest being in the media and telecommunication sector. The Z values obtained from the run test are also significant in all the sectors, the highest being in the automobiles and the fertilizers and chemicals sectors. Even the percentage of significant Z values at 1% is high, indicating that Z attains very high values. Therefore, based on the findings, it can be concluded that the stock market is not weak form efficient in the various sectors.

INTRODUCTION

The efficient market hypothesis is one of the most extensively researched topics in finance. However, the focus of research has now shifted to emerging markets. The relationship between emerging financial markets and the growth of countries has received renewed academic and research interest in recent years (Greenwood and Jovanovic, 1990; Atje and Jovanovic, 1993; Pagano, 1993; Rajan and Zingales, 1998; Levine and Zervos, 1998). Efficiency of capital markets is indispensable for proper allocation of resources. Market efficiency is closely related to allocational efficiency. It is through the pricing of securities that the stock market performs its allocational function. Firms with higher expected earnings are assigned higher prices in the stock market, whereas firms that are going to have low profitability are assigned lower prices. The market thus ensures that efficient firms have cheaper access


to funds and are therefore able to make greater use of them. Consequently, the prices assigned to securities by the stock market are critical to the effectiveness of the market as a resource allocator. In an efficient market, there is optimum allocation of resources. McKinnon (1973), Shaw (1973), Fry (1982) and Cho (1988) showed that an orderly and prudent deregulation of the financial sector is critical for efficient allocation of resources in the economy. According to Fama (1965), "The primary role of the stock market is allocation of ownership of the economy's capital stock. In general terms, the ideal is a market in which prices provide accurate signals for resource allocation: that is, a market in which firms can make production-investment decisions and investors can choose among the securities that represent ownership of firms' activities under the assumption that securities prices at any time fully reflect all available information." In a country like India, where resources are scarce, regulators must strive to make the markets efficient to ensure the optimum allocation of resources. Indian capital markets are in the process of integrating with the global markets due to the economic reforms initiated in 1991 (Agrawal, 2001). Furthermore, an emerging economy that wishes to attain high and sustainable rates of economic growth needs an active stock market to help fuel and finance this growth (Shachmurove et al., 2001). The various structural changes in the economy in general, and in the financial sector in particular, have largely influenced the capital markets. These changes, which have been introduced gradually, have affected the information flow and operational efficiency of the stock markets. During the years 1991 and 1992, the stock markets witnessed an unprecedented boom.
Investors were thrilled with fabulous rates of return, and the Government, flushed with excitement at the fund mobilizing capacity of the market, preferred to brush aside all danger signals temporarily. Curiously, the cost of ignoring the irrational behavior of the market was strongly felt only after the unearthing of the largest-ever financial scandal in the history of India, popularly known as the securities scam. It revealed how different government agencies, along with many other existing structural as well as operational inefficiencies of the market, contributed additional fuel to the violent fluctuation in share prices. It shattered the confidence of general investors in the capital and money markets (Karmakar, 1997). Within two months of the discovery of the scam, stock prices dropped by over 40%, wiping out market value to the tune of Rs. 10,000 crores. The Ketan Parikh incident once again brought forth the need for capital market reforms in a big way. It also showed that SEBI had not been able to regulate the markets effectively. A considerable part of the price movement in both the above cases was due to fads, waves of irrational optimism and pessimism, noise trading, feedback trading or other market inefficiencies. Noise traders thus simply destabilize the market and move security prices away from fundamentals. With the opening of the economy, the horizon of the Indian capital markets has widened. As a result, in terms of investor population, India ranks third in the world, next only to the United States and Japan. However, in spite of such widespread interest of the Indian public in the capital markets, investment knowledge seems to be very much lacking (Verma et al., 2000). This necessitates the formulation of sound investment strategies based on a thorough assessment of share price behaviour. Thus, markets should be efficient in their various forms, namely weak, semi-strong and strong.
There have been a number of changes during the period covered by the study as far as infrastructure and information technology applications to the stock market are concerned. The automation process has put in place an effective monitoring


and surveillance mechanism contributing to the efficiency and integrity of the stock markets (Avadhani, 2000). These include trading on BOLT at BSE, on-line trading in the debt segment at NSE, the arrival of the internet in smaller places of the country, the introduction of online trading, the setting up of cable networks with business news channels all across the country, the display of share prices on various channels during trading sessions, online advice from investment consultants and more stringent disclosure norms prescribed by SEBI. The settlement procedure at the Bombay stock market classified shares into two categories during the period of study, i.e., specified and non-specified (Singh, 2000). It may be noted that from 1st July 2001, SEBI made rolling settlement compulsory in both categories. The changeover to rolling settlement has had its impact on volumes. However, this is a move towards bringing the procedure at par with the exchanges of the developed world. The specified shares have high investor fancy, good market capitalization, high liquidity and good lineage. Typically, shares of the specified group attract higher price-earnings ratios than their counterparts listed in the non-specified category. Prior to rolling settlement, there was a settlement system for specified shares whereby each of these scrips could attract financing in the form of badla. It is generally perceived that shares belonging to the specified category are of blue chip companies and investors are more interested in these shares; as such, the information assimilation for these shares is much greater as compared to the non-specified group. Analysis of share price behaviour has important implications for a number of economic problems, especially relating to security analysis, portfolio management, and the allocative efficiency of the security market.
Consequently, policy makers, corporate managers, financial analysts, investors and other participants are interested in understanding the behaviour of the stock market. The companies trading on the stock markets belong to different sectors of the economy. Certain industries have associations that provide voluntary disclosure norms for member companies. Certain industries have periodical publications that disseminate information about current performance and future prospects on a regular basis. Therefore, these sectors are able to communicate their information to the public more effectively. Furthermore, there is a general tendency on the part of the media, both print and electronic, to concentrate on those sectors of the economy where there is a boom; in the last few years, for instance, the print and electronic media have provided more information on the shares of media, telecommunication and information technology companies. This information concentration was because of the spurt of growth in these sectors. Thus, to find out whether the share prices of companies belonging to different sectors of the economy are assimilating historical information uniformly or not, this paper attempts to study the weak form efficiency of the Bombay stock market in various sectors of the economy.

Methodology

This is a descriptive study that examines the weak form efficiency of various sectors of the economy.

Sample

The sample consists of adjusted daily closing prices of ninety-three companies included in the BSE 100 Index as on 31st March 2001. The prices have been adjusted for bonus issues, splitting of shares and dividends. The natural log of daily prices has been computed and


then the tests have been applied. The reasons for taking logarithmic prices are justified both theoretically and empirically. Theoretically, logarithmic returns are analytically more tractable when linking together period returns to form returns over longer intervals. Empirically, logarithmic returns are more likely to be normally distributed, which is a precondition for standard statistical techniques. The reason for selecting only these companies for the sectoral analysis is that they constituted more than seventy percent of market capitalization as on 31st March 2001; these companies have been divided into eleven sectors. A rational rather than an absolute sector-wise classification has been adopted. Companies having diversified businesses are included in those industries where their main business is concentrated. This is to ensure that the samples are large enough and representative of the sector to draw meaningful conclusions. The reason for not including seven companies is that their number is not large enough to represent their industry; for example, in the hotel industry there are only two companies, and conclusions about the industry based on such a small sample would not be valid. Therefore, the sample of companies in the industry-wise analysis is less than the total sample of 100 companies. The companies not included in the industry-wise analysis but included in the BSE 100 index are: Asian Paints, Castrol India, EIH, Essel Packing, G. E. Shipping, Indian Hotel and ITC.

Tools

For data collection: The share prices have been taken from the Prowess database of CMIE. The Bombay Stock Exchange directory and its website have also been referred to for collecting information.

For data analysis: Two tests have been used to analyze the data:
1) Serial Correlation Test
2) Run Test
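To illustrate the first tool, the sketch below (illustrative code, not the authors' implementation) computes natural-log daily returns from an adjusted closing-price series and flags the lags at which the sample autocorrelation coefficient falls outside the approximate 5% significance band of +/- 1.96/sqrt(n), the usual benchmark in serial correlation tests of this kind.

```python
# Serial correlation test on log returns: a minimal sketch, assuming a plain
# list of adjusted daily closing prices (not the authors' implementation).
import math

def log_returns(prices):
    """Natural-log daily returns: r_t = ln(P_t / P_{t-1})."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def autocorrelation(returns, lag):
    """Sample autocorrelation coefficient of the series at the given lag."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns)
    cov = sum((returns[t] - mean) * (returns[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

def significant_lags(returns, max_lag=16):
    """Lags 1..max_lag whose coefficient lies outside +/- 1.96/sqrt(n) (5% level)."""
    bound = 1.96 / math.sqrt(len(returns))
    return [lag for lag in range(1, max_lag + 1)
            if abs(autocorrelation(returns, lag)) > bound]
```

A series with no significant lags is consistent with weak form efficiency at the 5% level; significant coefficients, as found for most sectors here, point to serially correlated prices.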

FINDINGS AND DISCUSSION

Cement Industry

The percentage of significant first order coefficients is below the benchmark (33%) and the lowest of all the sectors of the economy examined in the study. At lag fifteen there are three companies whose coefficients are significant. The highest coefficient is .150 in the case of Grasim Industries and, out of the total 112 coefficients, 48 are negative. The cement industry saw a lot of activity in the period covered by the study. There was decontrol of cement prices, accompanied by heavy demand for cement due to increased emphasis on infrastructure development. The slowdown in economic growth, which commenced in 1997-98, continued through the year 1998-99. GDP growth, which was 5% in 1997-98, improved marginally to 5.8% in 1998-99, mainly because of agriculture. Industrial production grew by only 3.9%, but the cement industry registered 6.4% growth during the year 1998-99. Mergers and acquisitions continued in the industry, resulting in large volumes on the bourses. The total number of significant coefficients at various lags is also the lowest in this sector. Thus, the results indicate that the null hypothesis that the Bombay stock market is weak form


efficient in the cement industry is accepted. The highest Z value is 7.963 for Larsen & Toubro; six Z values are negative, indicating an excess of expected runs as compared to observed runs, and only one value is positive. The hypothesis is rejected based on the findings of the run test.
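The run test behind these Z values can be sketched as follows (a minimal illustration, not the authors' code): each log return is classified as above or below the mean, the observed runs are counted, and the Z statistic compares them with the expected number of runs under randomness, so a negative Z means fewer observed runs than expected.

```python
# Wald-Wolfowitz run test: a hedged sketch (not the authors' implementation).
# Z < 0 means fewer observed runs than expected under randomness.
import math

def runs_test_z(returns):
    mean = sum(returns) / len(returns)
    signs = [r >= mean for r in returns]          # + / - relative to the mean
    n1 = sum(signs)                               # count of "+" observations
    n2 = len(signs) - n1                          # count of "-" observations
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))  # observed runs
    n = n1 + n2
    expected = 2.0 * n1 * n2 / n + 1.0            # E[runs] under randomness
    variance = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n)) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)
```

|Z| > 1.96 rejects randomness at the 5% level, and |Z| > 2.58 at the 1% level, the cut-offs used in Table 3.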

Information Technology

In this sector, all the twelve companies have significant first order coefficients. The second highest number of significant coefficients is at lag nine, where six out of twelve companies have significant values. Overall, the percentage of significant coefficients is the highest in this sector. The highest coefficient is .254 for Hughes Software and, out of 192 coefficients at various lags, 63 are negative. The share prices in this sector have been through a boom as well as a bust, and a number of dotcom companies came into existence and faded away during the period covered by the study. However, a number of companies with strong fundamentals started actively trading on the bourses. These companies are now included in the BSE sensitive index as well as the BSE 100 index. In fact, the heavy trading activity in these shares necessitated the formulation of the BSE IT index. The findings reveal that prices are serially correlated in all the companies under study in this sector. The highest Z value is 19.084 and, of the twelve Z values, only two are positive. 91.66% of the Z values are significant at 5%, indicating that prices are not following a random pattern in this sector.

Automobile Industry

The percentage of significant first order coefficients is 75%. The second highest number of coefficients is at lag nine, where 37.5% of the coefficients are significant. The highest coefficient is .152 and, out of 128 coefficients, 64 are negative. In this sector, all the Z values are significant at 5%. The highest Z value is 7.662 for Mahindra and Mahindra, and all the Z values are negative.

Banking and Financial Services

The government, in the wake of the liberalization process, permitted the setting up of private sector banks. In the rural and semi-urban areas, competition is being encouraged through Local Area Banks. Modest diversification of ownership of select public sector banks has helped the process of autonomy and contributed to strengthening competitive pressures. Over a period, this has resulted in a gradual reduction of spreads (defined as net interest income to total assets) and a tendency towards their convergence across all bank groups, except foreign banks. Reduced spreads have been supported by improved efficiency, reflected in a decline in intermediation costs as a percentage of total assets, especially for public sector banks and new private sector banks, due largely to a decline in their wage costs. Public sector banks were permitted to go in for their maiden public issues, and subsequently Bank of India and Bank of Baroda went in for public issues. Thus, a number of participants entered the banking and financial sector of the share market, which earlier used to be dominated by a few financial institutions and the State Bank of India. In this sector, five out of eight (62.5%) companies have significant first order coefficients. The percentage of significant coefficients is higher at lag six as compared to lag one; this could be due to the trading cycle or speculative activities, and it is not possible to conclude with certainty without further research. The percentage at this lag is 75%. The highest coefficient is .142 for ICICI Bank and, of the total 128 coefficients, 65 are positive and 63 are negative. The results of the run test show that the highest Z value is 6.422 in the case of HDFC Bank; three Z values are positive and five are negative, and the percentage of significant Z values at 5% is 37.5%, which is the lowest of all the sectors.


Media, Telecommunications and Electronics

The government permitted participation of private sector companies in both basic telecom services and cellular services. The telecom regulatory authority was set up to facilitate and regulate the operation of telecom companies. The government opted for corporatization of its Department of Telecommunications by forming Bharat Sanchar Nigam. Though the government was considering disinvestment in Videsh Sanchar Nigam and Mahanagar Telephone, up to 31st March 2001 no such disinvestment proposal had been approved. All the first order coefficients are significant in this sector, i.e., 100%. The percentage of significant coefficients is the highest in this sector. It has 50% significant coefficients at lag two. The highest coefficient is .222 for Global Telecommunications and, of the total 128 coefficients, 81 are positive and 47 are negative. 75% of the Z values are significant at 5%; the highest Z value is -6.028 for Global Telecommunications, and only one Z value is positive. Therefore, the null hypothesis that the Bombay stock market is weak form efficient in the media, telecommunications and electronics sector is rejected.

Aluminum, Steel and Heavy Metal Industry

This sector has the second lowest percentage of first order as well as total significant coefficients, i.e., 33.33% and 12.5% respectively. Out of six companies, three have significant correlations at lag ten; thus, the percentage is just above the benchmark (33%). The highest coefficient is .149 for Indian Aluminum and, out of the total 96 coefficients, 50 are positive and 46 are negative. 83.33% of the Z values are significant at 5%; the highest Z value is -8.583 for Hindalco and, of the six Z values, four are negative and two are positive.

Petroleum and Refinery

In this sector, there are two categories of companies: those owned by the government, and two companies belonging to the Reliance group. These companies dominate the BSE 30 Index. In this sector, the first order coefficients are significant in four out of six companies. The highest coefficient is .114 for Bharat Petroleum and, of the total coefficients, 45 are positive and 51 are negative. The overall percentage of significant coefficients is the lowest in this sector, indicating that at longer lags the prices are not serially related. The findings of the run test also lead to rejection of the hypothesis, as 66.67% of the Z values are significant at 5%. The highest Z value is -5.541 for IPCL and, out of six Z values, only one is positive.

Consumer Durables and Non-Durables

For a better analysis, the consumer durables and non-durables sectors have been combined. This sector has the second highest number of companies under study, i.e., twelve, and includes Colgate and Hindustan Lever. As compared to other sectors of the economy, the number of significant first order coefficients is lower; the percentage of significant first order coefficients is 41.66%. The highest coefficient is .177 in the case of BPL and, of the total coefficients, 97 are positive and 95 are negative. 58.33% of the Z values are significant at 5%; the highest Z value is -9.312 for Hindustan Lever and, of the twelve Z values, only one is positive.

Pharmaceuticals

This sector has thirteen companies under study, which is the highest number, indicating that the BSE 100 index is dominated by companies from this sector. The significant coefficients at


lag one, nine and ten are 69.23%, 23.08% and 23.08% respectively. The highest coefficient is .151 in the case of Cipla and, of the total 208 coefficients, half are positive and half are negative. The highest Z value is 12.693 for Cipla and, of the thirteen Z values, seven are significant at 5%, with only two values being positive.

Heavy Engineering and Power

There are seven companies in this sector, and the first order coefficients are significant in all cases. The highest coefficient is .107 in the case of ABB and, of the total 112 coefficients, 65 are positive and 47 are negative. The hypothesis that the Bombay stock market is weak form efficient in this sector is rejected. 71.43% of the Z values are significant at 5%; the highest Z value is -9.627 for Siemens, and all the Z values are negative.

Chemicals and Fertilizers

In this sector, six companies are covered in the study, of which four have significant first order coefficients. The highest first order coefficient is .112 for Gujarat Narmada and, of the total 96 coefficients, half are positive and half are negative. As far as the run test is concerned, all the Z values are significant, with Nirma having the highest Z value of -4.995, and all the Z values are negative. The hypothesis that the Bombay stock market is weak form efficient in the chemicals and fertilizers sector is rejected.

Other Companies

Asian Paints, which has not been included in any sector, has a significant first order coefficient; it also has significant coefficients at the sixth and thirteenth lags. In the case of Essel Packing, it is not possible to predict the price, because the first order coefficient is not significant for this company. G. E. Shipping does not have a significant first order coefficient, but the coefficient is significant at the second and fourth lags. Indian Hotel, which has a coefficient of .151 at lag one, is significant at the 5% as well as the 1% level of significance. Among the companies not included in any sector, the Z value is significant in the case of Asian Paints, Essel Packing and Indian Hotel. Most of the studies in which company-wise analysis has been done have revealed that Indian stock markets are not weak form efficient. Gupta and Gupta (1997) carried out one of the most extensive studies in the Indian context, using a sample of the fifty most actively traded shares covering the period from 1988 to 1996. They found that the Bombay stock market was not a weak form efficient market. Karmakar (2003) used an ARIMA process to investigate the market index S&P CNX Nifty as well as fifty underlying shares. The market index was inconsistent with the assumption of a random walk, but individual companies manifested mixed behavior in their return generating process.

CONCLUSION

The markets are not weak form efficient in any sector of the economy. There have been a number of changes in the Indian stock markets subsequent to the period covered by the study. One notable change is the increase in the depth of the market, and the constitution of the mid-cap and small-cap indices goes to show that the concentration of total market capitalization in the scrips covered in the BSE Sensex and BSE 100 index has declined, and


therefore studies covering larger samples in various sectors can be undertaken. Some areas for further research are as follows. The other most active stock exchange in India is the National Stock Exchange, set up in June 1994 with screen-based two-tier trading, one tier for equity and the other for debt; the efficiency of this exchange has not been investigated and offers scope for further research. A number of anomalies exist even in the most developed markets, including the holiday effect, the weekend effect, the January effect, the small firm size effect and the non-trading day effect; all these anomalies can be investigated in the Indian stock markets. The present study, like other studies in India, has applied linear tests; further research can apply non-linear tests like the BDS statistic and the Lyapunov exponent to capture non-linearity in share prices, and the application of chaos theory to the Indian stock market can be investigated. Event studies, such as dividend declarations and share splits and their impact on share prices, can be undertaken. Long memory, which determines whether prices are correlated at long horizons, can be examined; the existence of long memory amounts to a refutation of the efficient market hypothesis. The other forms, namely semi-strong and strong form, can also be studied in the Indian context.

References

Agrawal, S. (2001), Investor's Guide to Stock Markets, New Delhi: Bharat, p. 36.

Atje, R. and Jovanovic, B. (1993), Stock Markets and Development, European Economic Review, 37(2-3), 632-640.

Avadhani, V. A. (2000), Investment Management, Mumbai: Himalaya, p. 238.

Cho, Y. J. (1988), The Effect of Financial Liberalisation on the Efficiency of Credit Allocation: Some Evidence from Korea, Journal of Development Economics, 29(1), 101-110.

Fama, E. F. (1965), The Behavior of Stock Market Prices, Journal of Business, 38(1), 34-105.

Fry, M. (1982), Models of Financially Repressed Developing Economies, World Development, 10(9), 731-750.

Greenwood, J. and Jovanovic, B. (1990), Financial Development, Growth, and the Distribution of Income, Journal of Political Economy, 98(5), 1076-1107.

Gupta, O. P. and Gupta, V. (1997), A Re-examination of Weak-Form Efficiency of Indian Stock Market, Finance India, 11(3), 619-632.

Karmakar, M. (2003), Tests of Random Walk and Predictability of the Indian Stock Market, The ICFAI Journal of Applied Finance, 9(3), 24-37.

Karmakar, M. (1997), Share Price Volatility and Efficient Market Hypothesis, Finance India, 11(3), 685-688.

Levine, R. and Zervos, S. (1998), Stock Markets, Banks, and Economic Growth, American Economic Review, 88, 537-558.

McKinnon, R. I. (1973), Money and Capital in Economic Development, Washington DC: The Brookings Institution, p. 27.

Pagano, M. (1993), Financial Markets and Growth: An Overview, European Economic Review, 37(2-3), 613-622.


Rajan, R. G. and Zingales, L. (1998), Financial Dependence and Growth, American Economic Review, 88(3), 559-586.

Shachmurove, Y., Benzion, U., Klein, P. and Yagil, J. (2001), A Moving Average Comparison of the Tel-Aviv 25 and S&P 500 Stock Indices, Working Paper, Department of Economics, University of Pennsylvania, pp. 1-36.

Shaw, E. S. (1973), Financial Deepening in Economic Development, New York: Oxford University Press.

Singh, P. (2000), Investment Management: Security Analysis and Portfolio Management, Mumbai: Himalaya, p. 332.

Verma, A., Phatak, Y. and Kothari, N. (2000), A Review of Ahmedabad Stock Exchange: Possibilities for Growth, in U. Dhar, S. Dhar, M. Srivastava and S. Rangnekar (eds.), People, Process and Organisations: Emerging Realities, New Delhi: Excel, pp. 1-13.


Annexure

Table 1: No. of Companies with Significant Coefficients at Various Lags

Lag     | CEM | IT | AUT | BFS | MTE | SHM | P&R | CON | PHR | HEP | F&C
1       |  2  | 12 |  6  |  5  |  8  |  2  |  4  |  5  |  9  |  7  |  4
2-16    |  …  |  … |  …  |  …  |  …  |  …  |  …  |  …  |  …  |  …  |  …
Total   | 12  | 36 | 18  | 23  | 26  | 12  | 11  | 25  | 29  | 19  | 12

Table 2: Percentage of Significant First-order and Overall Coefficients

Industry | % Significant First-order Coefficients | % Significant Overall Coefficients
Cement | 28.57 | 10.71
Information Technology | 100 | 18.75
Automobiles | 75 | 14.06
Banking & Financial Services | 62.5 | 17.96
Media, Telecomm. & Electronics | 100 | 20.31
Steel & Heavy Metal | 33.33 | 12.5
Petroleum and Refinery | 66.67 | 11.45
Consumer Durables & Non-Durables | 41.66 | 13.02
Pharmaceuticals | 69.23 | 13.94
Heavy Engineering and Power | 100 | 16.96
Fertilizers and Chemicals | 66.67 | 12.5


Table 3: Results of Run Test (company codes as in Table 4)

Industry | Significant Z values at 5%: No. (%), codes | Significant Z values at 1%: No. (%), codes
Cement | 4 (57.14): 1, 29, 30, 57 | 3 (42.85): 1, 29, 57
Information Technology | 11 (91.66): 3, 32, 33, 43, 52, 66, 69, 79, 82, 97, 98 | 11 (91.66): 3, 32, 33, 43, 52, 66, 69, 79, 82, 97, 98
Automobiles | 8 (100): 4, 6, 24, 35, 59, 72, 89, 94 | 7 (87.5): 6, 24, 35, 59, 72, 89, 94
Banking & Financial Services | 3 (37.5): 34, 42, 45 | 3 (37.5): 34, 42, 45
Media, Telecomm. & Electronics | 6 (75): 9, 28, 36, 60, 85, 100 | 6 (75): 9, 28, 36, 60, 85, 100
Steel & Heavy Metal | 5 (83.33): 10, 39, 48, 63, 91 | 4 (66.66): 10, 39, 48, 91
Petroleum and Refinery | 4 (66.67): 38, 54, 75, 76 | 4 (66.67): 38, 54, 75, 76
Consumer Durables & Non-Durables | 7 (58.33): 13, 14, 37, 64, 71, 80, 95 | 7 (58.33): 13, 14, 37, 64, 71, 80, 95
Pharmaceuticals | 7 (53.84): 18, 22, 27, 68, 70, 73, 87 | 6 (46.15): 18, 22, 27, 68, 70, 73
Heavy Engineering and Power | 5 (71.43): 2, 12, 15, 20, 78 | 5 (71.43): 2, 12, 15, 20, 78
Fertilizers and Chemicals | 6 (100): 31, 40, 51, 62, 67, 88 | 5 (83.33): 31, 40, 62, 67, 88

Table 4: Details of Companies Included in Various Industries (F Code and Name)

Cement (CEM): 1 A.C.C.; 29 Grasim Ind; 30 Guj Amb Cem; 47 India Cement; 57 Larsen & Toubro; 58 Madras Cement; 74 Raymond

Aluminum, Steel and Heavy Metal (SHM): 10 Bharat Forg; 39 Hindalco; 48 Indian Alum; 63 National Alu.; 84 Steel Authority; 85 Sterlite Ind.; 91 Tata Steel

Automobiles (AUT): 4 Ashok Leyland; 6 Bajaj Auto; 24 Escorts; 35 Hero Honda; 59 Mah & Mah; 72 Pun Tractors; 89 Tata Engg; 94 TVS Suzuki

Banking and Financial Services (BFS): 7 Bank of Baroda; 8 Bank of India; 34 HDFC Bank; 42 Hous. Dev. Fin; 44 ICICI Bank; 45 ICICI; 46 IDBI; 83 SBI

Media, Telecommunications and Electronics (MTE): 9 Bharat Elec; 28 Global Tele; 36 Him. Fut. Comm; 60 Mahanagar Tele; 86 Sterlite Opt; 96 Videsh Sanch; 100 Zee Telefilm

Petroleum and Refinery (P&R): 11 Bharat Petro; 38 Hind Petrol; 50 Indian Oil; 54 IPCL; 75 Rel Petro; 76 Reliance

Pharmaceuticals (PHR): 18 Cipla; 21 Dabur India; 22 Dr Reddy; 27 Glaxo India; 41 Hoechst Mari; 56 Knoll Pharma; 65 Nicholas Pir; 68 Novartis (I); 70 Pfizer; 73 Ranbaxy Lab; 81 Smith Pharma; 87 Sun Pharma; 99 Wockhardt

Information Technology (IT): 3 Aptech; 32 HCL Infosys; 33 HCL Techno; 43 Hughes Soft; 52 Infosys Tech; 66 NIIT; 69 Pentamedia Global; 77 Satyam Computers; 79 Silverline T.; 82 SSI; 97 Visual (I); 98 Wipro

Consumer Durables and Non-Durables (CON): 13 BPL; 14 Britannia Ind; 16 Cadbury (I); 19 Colgate; 37 Hind Lever; 61 Mirc Electr; 64 Nestle; 71 Procter & Gamb; 80 Smithklin Co.; 92 Tata Tea; 93 Titan Indus.; 95 Videocon Int.

Heavy Engineering and Power (HEP): 2 ABB; 12 BHEL; 15 BSES; 20 Cummins Indu; 53 Ingersoll; 78 Siemens; 90 Tata Power

Fertilizers and Chemicals (F&C): 31 Guj Narmada; 40 Hindlever Chemicals; 51 Indogulf Cor.; 62 Nagar Fert & Chem; 67 Nirma; 88 Tata Chemicals

3

Bottom of Pyramid & Investment Approach to Indian Financial Market

Ashutosh Agarwal, Navita Tripathi

Population and poverty are two sides of the same coin. Approximately two-thirds of the world's population earns less than Rs. 100 (in Indian terms) per day, and these people constitute the bottom of the pyramid: the group belonging to the lowest income stratum of society. In the Indian context it is striking that, although India is one of the fastest-developing countries, more than two-thirds of its large population still lies at the bottom of the pyramid. This is the segment that invests its money in small avenues and weighs safety in every monetary transaction. To meet their investment objectives, people at this level invest in NBFCs or in mutual funds. Through mutual funds, this segment gets a taste of the share market and enjoys its gains and profits just as higher-income investors do. NBFC and chit fund deposits form the other part of this layer, made up of investments by the very lowest income group, such as rickshaw pullers and betel shopkeepers. This paper focuses on the investment options available to people at the bottom of the pyramid.

INTRODUCTION

The bottom of the pyramid is one of the hot topics of discussion in the modern world and an issue raised everywhere. Prahalad (2005) worked on this issue and tried to solve global poverty by turning the bottom of the socioeconomic pyramid from victims of globalization into its beneficiaries through consumerism. The concept can be correlated with various fields such as marketing, finance, investment and human resources. Here, we correlate the bottom of the pyramid with investment approaches in the Indian financial market. The objective of this research is to examine the impact of investments made by the bottom of the pyramid on the Indian financial market and the various investment avenues available to these people.


Key Drives of Organizational Excellence

The bottom of the pyramid is the segment comprising people in the low or lowest income group. Poverty exists everywhere in the world, in developed and developing countries alike. Approximately two-thirds of the world's population earns less than Rs. 100 (in Indian terms) per day. In India, two-thirds of the population is at the bottom of the pyramid because of the large population and improper allocation of resources; this is a very large crowd. Only a small segment of Indians has an adequate amount of money, and it is this segment that has traditionally enjoyed the investment opportunities of the capital market, since it has little fear of loss. Someone with a limited source of earning, on the other hand, will think many times before putting money in the capital market. A few years ago, investing in the capital market and earning a good return was only a dream for people at the bottom of the pyramid, given their small earnings and many responsibilities and burdens. They could think only of fulfilling basic needs and spending on marriages and other occasions; investment was not their cup of tea, and lay within the reach of the higher income group alone.

THE INDIAN BOP SCENARIO

According to a report of the National Council of Applied Economic Research (NCAER), there are almost 400 million people in India who belong to the BOP group (Table 1). Another research agency, KSA Technopak, states in its report that 26.5% of the income of the rural consumer is spent on purchasing groceries. This means that almost 73.5% of their total disposable income is spent on items not required for subsistence, which is a big opportunity for the marketer.

Table 1: Anatomy of the Rural Market

Class | Annual Income (Rs.) | Number of People (mm.)
The Very Rich | Above 215000 | 4
The Consuming Class | 45000-215000 | 115
The Climbers | 22000-45000 | 331
The Aspirants | 16000-22000 | 170
The Destitute | Less than 16000 | 124

Source: NCAER

There was a time when poor Indian consumers fulfilled most of their daily requirements from nearby towns and only a few select households consumed branded goods. Today the BOP markets are as critical for marketers as the urban markets. The BOP market is emerging as a large market for a number of goods and services, whether a consumer good or a financial or telecom service. According to KSA Technopak, rural India accounts for 55% of total private spending in the country, more than the 45% share of urban India. This potential market was largely ignored until the recent past, but now that the potential of the bottom of the pyramid has been understood and assimilated, every company in the world is eyeing a share of the pie (Prahalad and Hart, 2002). In the present scenario, the investment approach in the financial market has widened, and every individual at the bottom of the pyramid is now able to invest more or less according to his or her earnings. Many alternatives are available, and investors can choose among them according to their priorities. The most suitable and favorable avenues are Mutual Funds and


Non-Banking Financial Companies (NBFCs). Mutual funds and NBFCs are the two basic pillars on which the investment approach of people belonging to the bottom of the pyramid mainly depends.

Pyramid of Financial Institutions

The pyramid above shows the various levels of the poor within the bottom of the pyramid and the institutions in the financial sector that cover each level. This pictorial representation helps in understanding the concept of the bottom of the pyramid in the financial sector.

Best Suited Investment Avenues

Several investment avenues exist, and investors select among them according to their investment objectives, which may include returns, capital appreciation, safety, security, liquidity, tax benefits, children's future and post-retirement benefits. These objectives are common to all income groups. The next thing the investor needs to identify is the avenue best suited to those objectives. Every individual investor selects an avenue based on his or her objectives and the amount of money available for investment. This is the point of separation between investors, who differ in the money they have and are classified into high and low income groups (Prahalad, 2006).


A low-income person at the bottom of the pyramid has two best-suited alternatives for investment: mutual funds and Non-Banking Financial Companies (NBFCs). Post office deposits and insurance policies are further options, though these are concerned with savings rather than investment. Although classified as savings avenues, they have been included among the investment opportunities for the bottom of the pyramid because they are part of the Indian financial market and within the reach of low-income people.

MUTUAL FUNDS

Mutual funds are among the most suitable investment opportunities because they let the investor choose a fund according to his or her interest. They carry relatively low risk as a result of portfolio management: fund companies offer well-designed, well-managed funds intended to maximize return with little or negligible risk. A mutual fund SIP (Systematic Investment Plan) allows investors to invest a fixed sum, usually small, at a frequency chosen by the investor. A SIP provides several benefits, listed below along with a sample illustration:

1. Rupee Cost Averaging
   - When the NAV rises, the SIP lowers the average cost of purchase and increases the value of the overall investment.
   - When the NAV falls, the SIP procures more units.

2. Compounding Benefits
   - Compounding allows money to grow exponentially over time.

3. Convenience
   - Disciplined saving habit
   - Auto-debit facility in almost every mutual fund scheme
   - Switch / systematic transfer
   - Nil / lower entry load
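The rupee-cost-averaging effect described in point 1 can be sketched in a few lines. The NAV series below is an illustrative assumption, not data from the paper:

```python
def rupee_cost_average(navs, monthly=1000):
    """Average cost per unit when a fixed sum is invested at each NAV."""
    units_bought = [monthly / nav for nav in navs]      # more units when NAV is low
    avg_cost = monthly * len(navs) / sum(units_bought)  # actual average price paid
    avg_nav = sum(navs) / len(navs)                     # simple average of the NAVs
    return avg_cost, avg_nav

# With fluctuating NAVs, the average cost stays below the average NAV.
cost, nav = rupee_cost_average([10.0, 8.0, 12.5, 10.0])
print(round(cost, 2), round(nav, 2))  # cost ≈ 9.88 vs an average NAV of ≈ 10.12
```

Because a fixed sum buys more units at low NAVs, the per-unit cost is the harmonic mean of the NAVs, which is always at or below their simple average.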

SIP Illustration: Expected Return @ 20% p.a. (values in Rs. lacs; number of months in parentheses)

Rs./month | 5 Years (60) | 10 Years (120) | 15 Years (180) | 20 Years (240) | 25 Years (300)
500 | 0.52 | 1.91 | 5.67 | 15.81 | 43.13
1000 | 1.03 | 3.82 | 11.34 | 31.61 | 86.27
2000 | 2.07 | 7.65 | 22.69 | 63.23 | 172.53
3000 | 3.10 | 11.47 | 34.03 | 94.84 | 258.80
4000 | 4.14 | 15.29 | 45.37 | 126.46 | 345.07
5000 | 5.17 | 19.12 | 56.71 | 158.07 | 431.34

Bottom of Pyramid & Investment Approach To Indian Financial Market

31

Illustration: An investment of Rs 2000 per month @ 20% p.a. for 300 months becomes Rs. 172.53 lacs at the end of 300 months. The table above shows how much an investor can receive by investing a fixed amount over a period of time.

Products | Lock-in Period for Investment (years) | Returns
ELSS (Equity oriented scheme) | 3 | Market linked return (68.21% compounded annualized for the 3 years ending 28-02-06)
PPF (Fixed Return) | 15 | 8%
NSC (Fixed Return) | 6 | 8%
Tax Saving Infrastructure Bonds | 3 | 5%
Life Insurance | 2 | 5% to 6%
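The SIP figures quoted above follow from the standard future-value formula for an annuity due (a deposit at the start of each month), compounded monthly. This is a reconstruction that matches the printed table, not a formula stated in the paper:

```python
def sip_future_value(monthly, annual_rate, months):
    """Future value of a SIP: deposits at the start of each month, compounded monthly."""
    i = annual_rate / 12
    return monthly * (((1 + i) ** months - 1) / i) * (1 + i)

# Rs 2000 per month at 20% p.a. for 300 months
fv = sip_future_value(2000, 0.20, 300)
print(round(fv / 1e5, 2))  # ≈ 172.53 (Rs. lacs), matching the table
```

The same function reproduces every cell of the SIP table when called with the corresponding monthly amount and number of months.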

This table gives an overview of the various alternatives available and their returns over time. Analysis of the two tables above makes it clear that a mutual fund is an alternative through which a bottom-of-the-pyramid investor can earn good returns and profit. Many advantages of mutual funds influence investors to select this avenue: professional management, portfolio diversification, convenient administration, return potential, low costs, liquidity, transparency, flexibility, reduction/diversification of risk, affordability, choice of schemes and good regulation (Prahalad & Hammond, 2002). When an individual at the bottom of the pyramid invests, he or she considers all the factors that can affect the return: Is the chosen scheme suitable? How much risk is associated with the investment? Such questions arise whenever a low-income individual thinks about investing. Various types of funds are available, and an investor can select one according to his or her priority. These funds can be grouped into certain classifications for better understanding.

Open-end vs. closed-end funds

An open-end fund is one that has units available for sale and repurchase at all times. An investor can buy or redeem units from the fund itself at a price based on the net asset value (NAV) per unit; the key feature of open-end schemes is liquidity. Unlike an open-end fund, the unit capital of a closed-end fund is fixed, as it makes a one-time sale of a fixed number of units. Closed-end funds do not allow investors to buy or redeem units directly from the fund. However, to provide the much-needed liquidity to investors, many closed-end funds get themselves listed on stock exchanges. Trading through a stock exchange enables investors to buy or sell units of a closed-end mutual fund from each other, through a stockbroker, in the same fashion as buying or selling shares of a company. These schemes generally disclose NAV on a weekly basis.

Load and no-load funds

A load fund is one that charges a percentage of NAV for entry or exit; that is, each time one buys or sells units in the fund, a charge is payable. This charge is used by the mutual fund for marketing and distribution expenses. Suppose the NAV per unit is Rs. 10. If the entry as well as the exit load charged is 1%, then investors who buy would be required to pay


Rs. 10.10, and those who offer their units for repurchase to the mutual fund will get only Rs. 9.90 per unit. Investors should take loads into consideration while investing, as they affect yields/returns. However, investors should also consider the performance track record and service standards of the mutual fund, which are more important; efficient funds may give higher returns in spite of loads. A no-load fund is one that does not charge for entry or exit: investors can enter the fund/scheme at NAV, and no additional charges are payable on purchase or sale of units.
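The Rs. 10.10 / Rs. 9.90 example works out as follows, a minimal sketch of the entry/exit load arithmetic:

```python
def load_prices(nav, entry_load=0.01, exit_load=0.01):
    """Sale and repurchase prices for a fund charging entry/exit loads on NAV."""
    purchase_price = nav * (1 + entry_load)   # investor pays NAV plus the entry load
    repurchase_price = nav * (1 - exit_load)  # investor receives NAV minus the exit load
    return round(purchase_price, 2), round(repurchase_price, 2)

print(load_prices(10.0))  # (10.1, 9.9), as in the example above
```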

Tax-exempt vs. non tax-exempt funds

When a fund invests in tax-exempt securities, it is called a tax-exempt fund. In India, after the 1999 Union Government Budget, all dividend income received from any mutual fund is tax-free in the hands of the investor. However, funds other than equity funds have to pay a distribution tax before distributing income to the investors. In other words, equity mutual fund schemes are tax-exempt investment avenues, while other funds are taxable on distributable income.

Money market funds

Often considered the lowest rung on the risk ladder, money market funds invest in securities of a short-term nature, generally of less than one-year maturity. The typical short-term, interest-bearing instruments these funds invest in are Treasury Bills issued by governments, Certificates of Deposit issued by banks and Commercial Paper issued by companies. In India, money market mutual funds also invest in the inter-bank call money market. The major strengths of money market funds are the liquidity and safety of principal that investors can expect from short-term investments.

Gilt funds

Gilts are government securities with medium to long-term maturities, typically of over one year (instruments under one year being money market securities). In India, we have

Growth funds

Growth funds are stock funds that invest in stocks with the potential for long-term capital appreciation. They focus on companies that are experiencing significant earnings or revenue growth rather than companies that pay out dividends. The hope is that these rapidly growing companies will continue to increase in value, allowing the fund to reap the benefit of large capital gains. In general, growth funds are more volatile than other types of funds: in bull markets they tend to rise more than other funds, but in bear markets they can fall much lower.

Value funds

Value funds invest in companies that are thought to be good bargains, that is, companies that have low P/E ratios. These are stocks that have fallen out of favor with mainstream investors for one reason or another: changing investor preferences, a poor quarterly earnings report, or hard times in a particular industry. Value stocks are often the stock of mature companies that have stopped growing and that use their earnings to pay dividends. Thus value funds produce current income (from the dividends)


as well as long-term growth (from capital appreciation once the stocks become popular again). They tend to have more conservative and less volatile returns than growth funds. The mutual fund is only one among several options; the other avenues in which people at the bottom of the pyramid generally invest their money are discussed below.

NON-BANKING FINANCIAL COMPANIES (NBFCs)

Non-banking financial institutions carry out financing activities, but their resources are not obtained directly from savers as debt. Instead, these institutions mobilize public savings for rendering other financial services, including investment. All such institutions are financial intermediaries, and when they lend they are known as Non-Banking Financial Intermediaries (NBFIs) or investment institutions, for example UTI, LIC and GIC. Apart from these NBFIs, another part of the Indian financial system consists of a large number of privately owned, decentralized and relatively small-sized financial intermediaries. Most work in different, minuscule niches and make the market more broad-based and competitive. While some of them restrict themselves to fund-based business, many others provide financial services of various types. Entities of the former type are termed non-bank financial companies (NBFCs); the latter are called non-bank financial services companies. Post 1996, the Reserve Bank of India has put in place additional regulatory and supervisory measures that demand more financial discipline and transparency of decision making on the part of NBFCs, and NBFC regulations are reviewed by the RBI from time to time in view of emerging situations. Further, some areas of cooperation between banks and NBFCs may emerge in the coming era of e-commerce and Internet banking. Several types of NBFCs are government undertakings, in which an individual bears little risk when depositing or investing money. A few more companies are Restricted NBFCs (RNBFCs), whose operation is governed by the RBI and whose working is restricted to some extent. A good example of an RNBFC is Sahara Para Banking, in which small-earning individuals deposit money for a good return; deposits comprise small instalments of daily or monthly payments.

POST OFFICE SCHEMES

Post office schemes offer another good investment option to individuals at the bottom of the pyramid. This option involves negligible or no risk, so the investor receives a risk-free return on the money invested. The various post office schemes are described below.

Post Office Recurring Deposit

A Post Office Recurring Deposit Account (RDA) is a banking service offered by the Department of Posts, Government of India, at all post office counters in the country. The scheme is meant for investors who want to deposit a fixed amount every month in order to get a lump sum after five years. A systematic way of long-term saving, it is one of the best investment options for low-income groups. Post office recurring deposits offer a fixed rate of interest, currently 7.5 per cent per annum, compounded quarterly.

Monthly Investment (Rs.) | Total Investment (60 months) | Money Returned on Maturity (after 60 months)
10 | 600 | 728.90
20 | 1200 | 1457.80
50 | 3000 | 3644.50
100 | 6000 | 7289.00
500 | 30000 | 36445.00
1000 | 60000 | 72890.00
1375 | 82500 | 100224.00
5000 | 300000 | 364450.00
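The maturity figures in the table can be reproduced, to within the post office's rounding, by converting the 7.5% quarterly-compounded rate to an equivalent monthly rate and treating the deposits as an annuity due. This reconstruction is ours, not the official formula:

```python
def po_rd_maturity(monthly, years=5, annual_rate=0.075):
    """Maturity value of a recurring deposit at 7.5% p.a., compounded quarterly."""
    i = (1 + annual_rate / 4) ** (1 / 3) - 1  # equivalent monthly rate
    n = years * 12
    return monthly * (((1 + i) ** n - 1) / i) * (1 + i)

print(round(po_rd_maturity(10), 2))    # within a paisa or two of the table's 728.90
print(round(po_rd_maturity(1375), 2))  # close to the tabulated 100224.00
```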

Advantages

The post office offers a fixed rate of interest, unlike banks, which constantly change their recurring deposit rates depending on their demand-supply position. As the post office is a department of the Government of India, it is a safe investment, and the principal amount in the Recurring Deposit Account is assured. Moreover, interest earned on this account is exempt from tax under Section 80L of the Income Tax Act.

Time Deposit

A Post Office Time Deposit Account is a banking service similar to a bank fixed deposit, offered by the Department of Posts, Government of India, at all post office counters in the country. The scheme is meant for investors who want to deposit a lump sum for a fixed period: a minimum of one year, or two years, three years, or a maximum of five years. The investor gets a lump sum (principal + interest) at the maturity of the deposit. Time deposits return a lower, but safer, growth in investment. This option pays annual interest of between 6.25 and 7.5 per cent, compounded quarterly:

Duration of Account | Quarterly Compound Interest
1 Year | 6.25%
2 Years | 6.5%
3 Years | 7.25%
5 Years | 7.5%
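A time deposit is a plain lump sum compounded quarterly, so its maturity value can be sketched as follows. The figures are illustrative, not from the paper:

```python
def time_deposit_maturity(principal, years, annual_rate):
    """Maturity value of a post office time deposit, compounded quarterly."""
    return principal * (1 + annual_rate / 4) ** (4 * years)

# e.g. Rs 10,000 for 5 years at 7.5% p.a.
print(round(time_deposit_maturity(10000, 5, 0.075), 2))  # ≈ 14,499
```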

Advantages

In this scheme the investment grows at a pre-determined rate with no risk involved. With Government of India backing, the principal as well as the accrued interest is assured. The rate of interest is relatively high compared to the 4.5% annual rate provided by banks. Although the amount invested in this scheme is not exempt under Section 88 of the Income Tax Act, the interest earned is tax-free under Section 80L.

Bottom of Pyramid & Investment Approach To Indian Financial Market

35

National Savings Certificate

National Savings Certificates (NSC) are certificates issued by the Department of Posts, Government of India, and are available at all post office counters in the country. The NSC is a long-term, safe savings option that combines growth in money with reduction in tax liability under the provisions of the Income Tax Act, 1961. The duration of an NSC scheme is 6 years, and it carries a high interest rate of 8% compounded half-yearly. Post-maturity interest is paid for a maximum of 24 months at the rate applicable to an individual savings account. A Rs. 1000 denomination certificate grows to Rs. 1601 on completion of 6 years.

Interest accrued on an NSC certificate of Rs. 1000:

Year | Interest
1 | Rs. 81.60
2 | Rs. 88.30
3 | Rs. 95.50
4 | Rs. 103.30
5 | Rs. 111.70
6 | Rs. 120.80
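The year-wise accruals above are consistent with 8% compounded half-yearly, i.e. an effective 8.16% per year on the running balance. The sketch below is our reconstruction; the official table's rounding differs by a few paise in some years:

```python
def nsc_schedule(principal=1000.0, rate=0.08, years=6):
    """Year-wise interest and maturity value at 8% compounded half-yearly."""
    balance = principal
    schedule = []
    for year in range(1, years + 1):
        interest = balance * ((1 + rate / 2) ** 2 - 1)  # two half-yearly compoundings
        balance += interest
        schedule.append((year, round(interest, 2)))
    return schedule, round(balance, 2)

schedule, maturity = nsc_schedule()
print(schedule[0], maturity)  # year-1 interest 81.6; maturity ≈ 1601, as quoted above
```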

Advantages Tax benefits are available on amounts invested in NSC under section 88, and exemption can be claimed under section 80L for interest accrued on the NSC. Interest accrued for any year can be treated as fresh investment in NSC for that year and tax benefits can be claimed under section 88. NSCs can be transferred from one person to another through the post office on the payment of a prescribed fee. They can also be transferred from one post office to another. The scheme has the backing of the Government of India so there are no risks associated with your investment.

KISAN VIKAS PATRA

Kisan Vikas Patra (KVP) is a saving instrument that provides interest income similar to bonds. The amount invested in a KVP doubles on maturity after 8 years and 7 months. A KVP can be purchased by an adult in his own name or on behalf of a minor, by a minor, by a trust, or by two adults jointly. KVPs are available in denominations of Rs 100, Rs 500, Rs 1000, Rs 5000, Rs 10,000 and Rs 50,000, and there is no maximum limit on purchase. Premature encashment of the certificate is not permissible except at a discount in the case of death of the holder(s), forfeiture by a pledgee, or when ordered by a court of law. No income tax benefit is available under the Kisan Vikas Patra scheme; however, the deposits are exempt from Tax Deduction at Source (TDS) at the time of withdrawal.
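Doubling in 8 years and 7 months implies an effective annual yield of roughly 8.4%, which can be checked in one line. This is our back-of-the-envelope calculation, not an official rate:

```python
def implied_annual_yield(doubling_years):
    """Effective annual rate implied by money doubling over the given period."""
    return 2 ** (1 / doubling_years) - 1

kvp_years = 8 + 7 / 12  # 8 years and 7 months
print(round(implied_annual_yield(kvp_years) * 100, 2))  # ≈ 8.41 (% per year)
```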

POST OFFICE MONTHLY INCOME SCHEME

The post office monthly income scheme (MIS) provides for monthly payment of interest income to investors. It is meant for investors who want to invest a lump sum initially and earn interest on a monthly basis for their livelihood. The MIS is not suitable for an increase


in your investment. It is meant to provide a source of regular income on a long-term basis; the scheme is therefore more beneficial for retired persons. The post office MIS gives a return of 8%, and the minimum investment is Rs 1,000 for both single and joint accounts.

Deposit (Rs) | Monthly Interest (Rs) | Amount Returned on Maturity (Rs)
5000 | 33 | 5000
10000 | 66 | 10000
50000 | 333 | 50000
100000 | 667 | 100000
200000 | 1333 | 200000
300000 | 2000 | 300000
600000 | 4000 | 600000
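The monthly-interest column is simply the 8% annual interest paid out in twelve instalments, rounded to the rupee in the printed table:

```python
def mis_monthly_interest(deposit, annual_rate=0.08):
    """Monthly payout on a post office MIS deposit at 8% p.a. simple interest."""
    return deposit * annual_rate / 12

for deposit in (5000, 50000, 100000):
    print(deposit, round(mis_monthly_interest(deposit)))  # 33, 333, 667
```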

Advantages

Premature closure of the account is permitted any time after the expiry of one year from opening the account; a deduction equal to 5 per cent of the deposit is made when the account is closed prematurely. Investors can thus withdraw money before three years, but at a 5% discount, while closing the account after three years carries no deduction. Monthly interest can be credited automatically to a savings account, provided both accounts stand at the same post office. The interest income accruing from a post office MIS is exempt from tax under Section 80L of the Income Tax Act, 1961; moreover, no TDS is deductible on the interest income, and the balance is exempt from Wealth Tax.

SENIOR CITIZEN SCHEME

A new savings scheme called the 'Senior Citizens Savings Scheme' has been notified with effect from August 2, 2004. The scheme is for the benefit of senior citizens; the maturity period of the deposit is five years, extendable by another three years. Initially the scheme is available through designated post offices throughout the country. The deposit carries interest of 9% per annum (taxable).

Advantages

This scheme is most beneficial to senior citizens and provides a high rate of interest compared to bank interest of 4.5 to 4.75%. Although the interest on the deposit is taxable, the deposits themselves are tax-free. As the post office is a department of the Government of India, it is a safe investment, and the principal amount is assured.

INSURANCE SCHEMES

Insurance is another option for investors: it offers a fixed return with the additional benefit of risk cover for their life. While studying this sector we found that the Life Insurance Corporation of India (LIC) remains the market leader and dominant insurer even after liberalization in India. Whenever a low-income individual thinks of taking an insurance policy, he or she chooses LIC. This shows that a low-income individual thinks many times and chooses the alternative in which no or negligible risk is involved.


LIC, being a public sector institution, strengthens the faith of people, especially those with small earnings. The Life Insurance Corporation of India was created on 1st September, 1956, with the objective of spreading life insurance much more widely, in particular to the rural areas, with a view to reaching all insurable persons in the country and providing them adequate financial cover at a reasonable cost. LIC has a variety of policies suited to the needs and financial ability of individuals interested in insurance. A few policies specially designed by LIC for the low-income group are:

Janaraksha Plan

It is for the lowest income group: the person pays a premium of a few rupees and gets risk cover.

Jeevan Anand

It gives whole-life cover at a very low premium.

Jeevan Mitra

A whole-life endowment plan that gives double and triple cover.

LIC also has a few unit-linked insurance plans, Market Plus and Profit Plus, which give risk cover along with the opportunity to invest money in the capital market. A new Health Plan started by LIC covers health insurance and gives financial assistance to the policyholder six months after taking the policy; it provides health insurance to the family, including husband, wife and children up to the age of 25 years.

CONCLUSION

People at the bottom of the pyramid can avail themselves of investment opportunities in various alternatives present in the Indian financial market, such as mutual funds, NBFCs, insurance and post office schemes, according to their requirements and income. NBFCs provide the basic investment facility at the bottom of the pyramid, where people with very small incomes deposit their money and feel the joy of investing in a banking company. Investment in insurance and post office schemes gives them the benefit of a good risk-free return despite low income. Hence, instruments like mutual funds and NBFCs have paved a way for people at the lower level of the pyramid and are giving a push to the economy by bringing the investment habits of the lower income group into the investment environment. If such investment avenues continue to evolve, the day is not far when India's bottom of the pyramid will be defined on merit rather than by low income, and India will become a strong, developed economy.

References

Prahalad, C.K. (2005), The Fortune at the Bottom of the Pyramid, Wharton School Publishing / Pearson Education.

Prahalad, C.K. (2006), "The Innovation Paradox", Strategy+Business, 44, 62-71, Booz, Allen & Hamilton.


Prahalad, C.K. and Hart, S.L. (2002), "The Fortune at the Bottom of the Pyramid", Strategy+Business, 26, 54-67, First Quarter 2002, Booz, Allen & Hamilton Inc.

Prahalad, C.K. and Hammond, A. (2002), "Serving the Poor Profitably", Harvard Business Review, September 2002.

4

Future Perspective in Enterprise Risk Management: A Strategic Approach to Derisking an Enterprise

Falguni H. Pandya

"A business has to try to minimize risks. But if its behavior is governed by the attempt to escape risk, it will end up by taking the greatest and least rational risk of all: the risk of doing nothing."

Risk management has become a prime concern for business. Bankruptcies and huge losses have re-emphasized the importance of identifying and managing risks effectively (Kenneth, 1994). Companies such as P&G and Barings have burnt their fingers through faulty risk management practices. Clearly, companies need to develop and apply integrated risk management. On September 11, 2001, two airplanes hijacked by terrorists crashed into the World Trade Center (WTC) in New York and another into the Pentagon building in Washington; the incidents shook the USA and indeed the whole world. Hurricane Andrew, which hit South Florida in August 1992, was the biggest liability ($16 billion) the US insurance industry had faced. The same can be said of earthquakes, tsunamis, etc.

INTRODUCTION
Quite clearly, companies could not have done much to prepare for disasters like the WTC crash, tsunamis or earthquakes, except perhaps take insurance cover (Chakravarthy, 2000). Fortunately for companies, not all risks are so unpredictable or unexpected. By closely monitoring the environment, companies can anticipate risks associated with changing technology, changing customer tastes, changing interest and currency rates, changing competitive conditions and so on. The range of risks companies have to manage has widened: intangible, commercial and operational risks have become more important than insurable risks (Mahanta, 2001). The need to take a company-wide view of risk is increasingly important, and Enterprise Risk Management (ERM) is now rapidly emerging as an internal consulting practice (headed by senior management or a Chief Risk Officer) in many companies.


EMERGENCE OF ERM
• Regulators in the financial sector are putting pressure on companies to manage risks more systematically.
• Shareholders, governments and regulators are insisting on better reporting and disclosure practices.
• Corporate governance requirements are giving a boost to ERM (Kenneth, 1994).
• Convergence of the capital and insurance markets is also facilitating an integrated view of ERM.
• ERM has made many more companies feel confident while dealing with risk.

Major Risks Faced by Companies
• Financial risk
• R&D delays
• Foreign macroeconomic issues
• Marketing risk
• Misaligned products
• Merger, acquisition and amalgamation risk
• Human resource risk
• Cross-business risk
• Cost overruns
• Political risk
• Environmental risk
• Lawsuits
• Legal, ethical and regulatory risk
• Customer demand shortfalls
• Technological risk
• High input commodity prices
• Accounting irregularities
• Project failure due to management ineffectiveness
• Diversification risk
• Hazard risk
• Reputation risk
• Vertical integration risk
• Capacity expansion risk
• Strategic risk

Unfortunately, in many firms the focus of risk management has been largely on fluctuating financial parameters: RM has been strongly associated only with treasury, forex and portfolio management. It is now recognized, however, that RM informs a vast range of decision-making processes affecting the entire organization. A few of these risks are discussed below.

FINANCIAL RISK
As financial markets have been deregulated in many more countries and are nowadays interlinked with one another, financial risk management has grown considerably in importance. Financial risk can affect the value of the firm in a number of ways.

Liquidity Risk
J P Morgan raised funds through a variety of instruments such as deposits, commercial paper, bank notes, repurchase agreements, federal funds, long-term debt and capital securities. Morgan performed stress tests (VaR) on its liquidity profile on a weekly basis to evaluate the accuracy of its projections and its ability to raise funds under adverse circumstances. A few years back, one of India's most visible dotcoms, Satyam Infoway (Sify), faced a liquidity crisis. It had started off as an ISP, depending mostly on retail customers, and later diversified into corporate services. At its peak, it mobilized massive funds from the stock market and, for a mind-boggling sum of Rs. 499 crore, acquired the portal IndiaWorld.


From July 2001 onwards, Sify faced a severe liquidity crunch. On NASDAQ, Sify was quoting below $4, down from $110 in January 2000. The same can be understood from the case of Pertech Computers, which accepted orders aggressively and then found it did not have the working capital necessary to execute them. Many small-scale industries in India face a similar crisis (Kenneth, 1994), because of delayed payments by large customers who have tremendous bargaining power.

Credit Risk
Many banks, especially in India, attach too much importance to guarantees. This is a big pitfall: collateral should not be viewed as a substitute for a comprehensive assessment of the counterparty. The borrower's repayment capacity must be determined, and adequate capital must be set aside to cover the risk of default by customers. There was a sharp increase in ICICI's NPLs (Non-Performing Loans), from Rs. 867 crore in 1997 to Rs. 4,225 crore in 2002, largely on account of exposure to unprofitable industries like textiles, fibres, steel and chemicals. In the case of Arvind Mills, ICICI even took possession of the company's retail brand as security.

INTEREST RATE RISK
Currency Risk
It is the uncertainty about the value of foreign currency assets, liabilities and operating income due to fluctuations in exchange rates. For example, the German airline Lufthansa signed a $3 billion contract to purchase aircraft from Boeing. To protect itself from an appreciation of the dollar, the company booked a forward contract. It ended up making a big loss when the dollar, instead of going up, moved down. In this case, Lufthansa pursued a hedge which turned out to be unnecessary.
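The hedging arithmetic in the Lufthansa example can be sketched numerically. The exchange rates below are purely hypothetical (the text gives only the $3 billion order size), so this is an illustration of the mechanics, not the actual transaction:

```python
# Forward-hedge arithmetic behind the Lufthansa/Boeing example.
# All exchange rates are hypothetical, chosen only to show the mechanics.

def purchase_cost_dem(usd_amount: float, forward_rate: float,
                      spot_at_maturity: float, hedge_ratio: float) -> float:
    """Home-currency (DEM) cost of a USD payable that is partly hedged.

    hedge_ratio is the fraction of the exposure locked in at forward_rate;
    the remainder is bought at the spot rate on the payment date.
    """
    hedged = usd_amount * hedge_ratio * forward_rate
    unhedged = usd_amount * (1 - hedge_ratio) * spot_at_maturity
    return hedged + unhedged

usd_payable = 3_000_000_000   # the $3 billion aircraft order
forward = 3.2                 # DEM per USD locked in today (hypothetical)
spot_later = 2.3              # dollar has weakened by payment date (hypothetical)

fully_hedged = purchase_cost_dem(usd_payable, forward, spot_later, 1.0)
unhedged = purchase_cost_dem(usd_payable, forward, spot_later, 0.0)
# Because the dollar fell, the full hedge cost roughly DEM 2.7 billion
# more than staying unhedged: the hedge itself became the loss.
print(fully_hedged - unhedged)
```

Had the dollar appreciated instead (spot above the forward rate), the same arithmetic would show the hedge saving money, which is why a forward lock-in is a bet on the direction of the rate rather than pure protection.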

Commodity Risk is the uncertainty about the value of widely used commodities such as gold, silver etc.

Equity Risk is the uncertainty about the value of an ownership stake. Companies can eliminate risk by transferring the asset or liability to another party (Chndanani, 2000). Alternatively, the asset or liability can be retained by the company while the risk is transferred; or the company may retain the risk, but in the event of loss a third party assumes the liability. At present, companies use derivative tools like forwards, futures, options, swaps, coupon-only swaps, cap options and swaptions. Walt Disney, which operates theme parks, is exposed to weather risk; Disney buys weather derivatives or an insurance policy to hedge the risks arising from inclement weather.

MARKETING RISKS
Effective marketing implies balanced and informed decisions that lead to long-term profitability. Many times, strategies that focus on short-term objectives may look attractive
but turn out to be risky in the long term, as the failure of many dotcoms in recent times illustrates. The key challenges are discussed below.

Branding Risk
Brands are considered to be among the most valuable assets of a company. The Coke brand accounts for 95% of the value of the Coca-Cola Company's total corporate assets; brands are similarly prized assets for HLL, Philip Morris, Tata and others.

Advertising Risk
Advertising without a fundamental understanding of the customer's decision-making process may throw money down the drain. P&G has realized the need to squeeze more out of its advertising expenses. Amul's competitors spend between 7 and 10% of their revenues on advertisements, while Amul keeps advertising expenses down to just 1% of its revenues.

Building Trust
Consumer trust continues to form the core of a brand's value, and the failure of New Coke adequately brought out the importance of customer trust. In 1985, Coke faced a major challenge from Pepsi and changed the formulation of its flagship Coca-Cola brand to give it a sweeter taste. Consumers revolted, and the old formulation had to be brought back almost immediately.

Keeping Pace with the Times
To remain competitive, companies have to revitalize their brands, product components and so on from time to time. Motorola's persistence with its rich technology heritage proved to be a handicap when it faced competition from Nokia's user-friendly, hip, relaxed image (Chndanani, 2000).

Dealing with Commoditization
In India, as all over the world, many brands face competition from cheaper products that customers perceive to be functionally on par. Recently, HLL executives used the term "down-trading" to describe the phenomenon of people moving away from premium brands to cheaper ones. Other, similar risks include building customer loyalty, the pitfalls of listening to customers, corporate social responsibility, stretching the brand, pricing risks and supply chain risks. To manage them, a company should treat marketing expenses like capital investment, and can focus on:
(a) examining how to leverage investments to reduce the cost of attracting new customers;
(b) the return on marketing investment; and
(c) long-term marketing goals.

HR RISK
In today's knowledge-driven era, it is the quality of its people that determines the competitiveness of an organization. As it is believed that India, China and many more
countries of the world are facing a talent crunch, HR management has become more critical than ever before. The key activities of the HR function are as follows.

Attracting and Retaining Employees
Turnover of key employees is a big HR risk that companies face today. Attracting and retaining talent involves shaping the whole organization: its vision, values, strategy, leadership, and reward and recognition. For example, at Vardhaman, on-the-job and off-the-job training programmes from the operating level to top management, along with delegation of responsibility, are the main reasons for its low turnover.

Balancing the Workforce during Slack Time
As talented people are among the least replaceable assets of the firm, proper care must be taken in recruiting and promoting them (Bagchi, 2007). For example, except at the top and middle management levels, Microsoft recruits people on a contract basis, so during slack times the company can reduce costs without hurting morale and can retain talented manpower. Nucor put the same theory into practice under the leadership of Kenneth Iverson: instead of terminating employees during recessionary periods, the number of working days per week for workers and employees was reduced.

Succession Planning
Westinghouse, which was once at par with General Electric, went bankrupt because of a series of unfit CEOs. The severity of the succession planning problem can be explained by India's most employee-friendly corporation, the Pune-based Thermax. The company faced a major crisis after founder Rohinton Agha passed away, and Thermax's market capitalization sharply declined from Rs. 990 crore to Rs. 186 crore. Agha had nurtured the company over a long period but had not paid enough attention to succession planning. In many Tata group companies, employees feel that managers from the Tata Administrative Services invariably occupy all the plum posts.

Managing Ethical, Legal and Social Regulatory Risk
It has become important to address these risks properly. Antitrust proceedings by the government can take a company's attention away from its core business: a significant proportion of senior management's time at Tata Motors had been consumed by an antitrust suit, which is only now reaching the settlement stage (Judy, 2007). Nowadays, companies are expected to maintain high standards of ethics and corporate governance. Unethical practices and low standards of corporate governance can severely erode the reputation of a company and its market capitalization; Lloyd's of London has seen a severe decline in its business owing to unethical and illegal disclosure practices. These risks can be managed by safeguarding intellectual property rights, managing antitrust issues, a balanced scorecard approach, corporate governance and so on (Bagchi, 2007). To avoid antitrust attention, companies do well to make standards as broad-based as possible by involving many players.


POLITICAL RISK
Political risk includes actions of governments and political groups that restrict business transactions, resulting in losses or eroding profit potential (Agrawal, 1996). The experience of McDonald's in China illustrates the extent to which political risks can shrink a business. Likewise, Enron's experience in India shows that even in liberalizing economies political risk is always present, and Suzuki faced considerable hostility from the Government of India in the late 1990s.

Roads to Deal with Political Risk
1. Macro political risk analysis
2. Micro political risk analysis
3. Country risk assessment
4. Tracking BOP, GDP, currency movements, inflation, population, public health and government policy

Political risks can also be hedged through political risk insurance cover from multilateral organizations, governments and private-sector providers such as the Multilateral Investment Guarantee Agency (MIGA), the US Overseas Private Investment Corporation (OPIC) and the US Exim Bank. In addition, companies can adopt specific methods to deal with political risk:
1. Keeping control of strategic elements of the operation
2. A proactive approach to planned divestment
3. Joint ventures
4. Local debt

INTEGRATED RISK MANAGEMENT (IRM)
IRM is concerned with the identification and assessment of the risks of the company as a whole, and with the formulation and implementation of a company-wide strategy to manage them. IRM looks at macroeconomic factors, industry factors, company-wide factors, random factors and their impact on cash flows. Different risks are often dynamically interlinked: for example, M&A risk and HR risk; political and ethical risk; financial risk and reputation risk; environmental and political risk; technological and legal risk (Judy, 2007). Technology-based companies often need strong competitiveness in legal matters such as patents: when IBM asked Microsoft to develop the operating system for its PC, Microsoft retained ownership of DOS and insisted on licensing rather than outright sale. IRM requires a thorough understanding of the company's operations as well as its financial policies, strategic planning and so on.

Integrating the Finance and Information Functions with Business Strategy
Finance managers must integrate their tasks with the financial, operational and organizational matters of the organization. For that, a sound information system, based on a central, integrated database, must be put in place.


Developing a Risk Policy and Implementing ERM
Some uncertain events are of such a nature that it is possible to obtain fairly reliable estimates of the chance of a particular outcome occurring. If the possible outcomes of an event are known, the probability of a particular outcome occurring can be estimated by employing the mathematical laws of probability (Bagchi, 2007). One concept of risk, therefore, is the probability (that is, the chance) of loss. For an individual firm, however, this is a concept of very limited usefulness because, as the law of large numbers shows, the actual outcome of an uncertain event can be guaranteed to approach the expected result only over a large number of trials.
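The law-of-large-numbers point can be illustrated with a small simulation; the 5% loss probability is an arbitrary assumption for illustration:

```python
# Simulation of the law-of-large-numbers point: the observed loss rate
# approaches the true loss probability only over many independent trials.
# The 5% loss probability is an arbitrary assumption for illustration.
import random

def observed_loss_rate(p_loss: float, n_trials: int, seed: int = 42) -> float:
    """Fraction of trials in which a loss event occurs."""
    rng = random.Random(seed)  # seeded for reproducibility
    losses = sum(1 for _ in range(n_trials) if rng.random() < p_loss)
    return losses / n_trials

P_LOSS = 0.05
for n in (10, 100, 10_000, 1_000_000):
    print(n, observed_loss_rate(P_LOSS, n))
# With only a handful of trials the observed rate can be far from 5%;
# it settles near 0.05 only for very large n. A single firm rarely has
# enough independent exposures for the probability alone to be useful.
```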

Process of Risk Management
1. Identification of risk
2. Evaluation/measurement of risk
3. Handling of risk
4. Implementation of the risk management decision

Risk Identification Information
(a) Asset information, such as the list of assets, their original cost, book value, replacement value, etc.
(b) Process information regarding raw materials, processes and the nature of the plant.
(c) Product information: whether consumer or industrial products, chances of liability, etc.
(d) Liability information, such as liability to employees and to the public.

Risk Evaluation/Measurement
It requires a mathematical approach and considerable data on past losses. The data available from the concern may not be adequate to lend itself to analytical exercises; hence it becomes necessary to resort to data on an industry basis, at the national and sometimes the international level. Evaluation includes determination of:
(a) the probability or chance that losses will occur, and the impact the losses would have upon the financial affairs of the firm should they occur; and
(b) the ability to predict the losses that will actually occur during the budget period.
A risk matrix can then be prepared, which essentially classifies risks according to their frequency and severity.
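A frequency-severity risk matrix of the kind described can be sketched as a simple lookup. The two-level scales and the mapping to handling techniques (matching the four methods of avoidance, reduction, retention and transfer discussed in this chapter) are illustrative assumptions, not prescriptions from the text:

```python
# Minimal sketch of a frequency x severity risk matrix. The two-level
# scales and the cell-to-technique mapping are illustrative assumptions.

def suggested_handling(frequency: str, severity: str) -> str:
    """Map a (frequency, severity) cell of the risk matrix to a technique."""
    matrix = {
        ("low", "low"): "retain",      # rare, minor losses: absorb them
        ("low", "high"): "transfer",   # rare but severe: insure
        ("high", "low"): "reduce",     # frequent, minor: loss prevention
        ("high", "high"): "avoid",     # frequent and severe: drop/redesign
    }
    return matrix[(frequency, severity)]

print(suggested_handling("low", "high"))   # transfer
print(suggested_handling("high", "low"))   # reduce
```

In practice the matrix would use finer scales (e.g. expected loss bands), but the principle of classifying each risk by frequency and severity before choosing a handling method is the same.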

Risk Handling
Nowhere are firms entirely free to decide how they shall handle their risks. In every country there are governmental and official regulations regarding health and safety at work, fire precautions, hygiene, the construction and operation of vehicles, environmental pollution, food and drink, the handling and conveyance of dangerous substances, and many other matters relating
to property, personal injury and other risks. If a firm wishes to carry on certain activities, it must comply with the official risk handling regulations relating thereto. There remain, however, broad areas where it can exercise its own discretion regarding either physical or financial loss control (Judy, 2007). Risk can be handled in four ways:
(a) Risk avoidance: Only rarely is this possible, and it is feasible only at the planning stage of an operation.
(b) Risk reduction: In many ways, physical risk reduction is the best way of dealing with any risk, and it is usually possible to take steps to reduce either the probability of loss or its severity, should it occur. The ideal time to think of risk reduction measures is at the planning stage of a new project, when considerable improvement can often be achieved at little or no extra cost. Risk prevention should be evaluated in the same way as investment projects.
(c) Risk retention: This technique is used to retain losses ranging from minor breakdowns of machinery to the devastation of atomic war. There are two types of retention:
    I. Risk retained as part of a deliberate management strategy, after conscious evaluation of possible losses and causes. This is known as the active form of risk retention.
    II. Risk retention occurring through negligence. This is known as the passive form of risk retention.
(d) Risk transfer: This refers to the legal assignment of the costs of certain potential losses to another party (Shashidha, 1999). Insuring risks now occupies an important commercial role, as risks can be transferred, at a price, to organizations that specialize in accepting them. There are three major means of loss transfer:
    I. by tort;
    II. by contract other than insurance; and
    III. by contract of insurance.
The main method of risk transfer is insurance. The value of insurance lies in the financial security that a firm can obtain by transferring to an insurer, in return for a known premium, the risk of losses arising from the occurrence of a specified peril.

Implementation of the Decision
The final act in the risk management process is the implementation of the decision. The risk manager recommends to the organization various methods of tackling the risk and, after getting its approval, implements them.

CASE I: THE COLLAPSE OF UTI
In mid-2001, India's leading mutual fund faced a major crisis. In early July, the UTI Board made the shocking announcement that repurchases of units of the US-64 scheme would be stopped, and UTI Chairman P. S. Subramanyam was asked to resign by the union finance minister.


This happened because, in the early 1990s, UTI pursued riskier strategies, deciding to invest heavily in the stock market. UTI also made a number of ad hoc decisions, and inexperienced persons were positioned in key departments. The fund made highly questionable investments in companies like Welspun, Global E-Commerce and Padmini Polymers. The market got its first hint in 1998, when UTI's reserves went negative. The Government appointed a committee under the chairmanship of Deepak Parekh, which recommended for the first time the adoption of an NAV-linked scheme, and bailed out the fund by pumping in Rs. 3,000 crore. In 1999, the Government created the special US-99 scheme and asked the trust to transfer all its holdings in PSUs to the scheme, paying Rs. 2,727 crore. Again, UTI deployed the money in a risky way: Subramanyam, influenced by India's leading stock broker Ketan Parekh, invested in weak scrips like Jain Studios, Shonkh Technologies and Cyberspace Infosys. By April 2001, the unit price had crashed below par; UTI, however, continued to repurchase units at Rs. 14.25, and each repurchase at an inflated price mounted the losses. UTI missed the opportunity of moving to the NAV-based scheme (suggested by Deepak Parekh) when the Sensex crossed 6000 in February 2000, largely because of poor leadership. Moreover, UTI's reward systems had not been designed to attract and retain good talent: many talented recruits from the IIMs, frustrated by the ad hoc investment policy, quit.

CASE II: ERM AT INFOSYS TECHNOLOGIES
Infosys Technologies is one of India's most admired companies, and it has been a trendsetter in risk management.

On the Mechanisms to Manage Risk at the Strategic Level
1. The Board of Directors needs to take ultimate, bottom-line responsibility for risk management (Bishop, 1996).
2. The business portfolio of the company needs to be diverse, so that vagaries in one segment do not adversely affect the company's overall business performance.
3. A management control system is required that ensures timely aggregation of inputs from the external and internal environment, enabling quick top-management decision-making on risk.

Ideal Business Model
The business model rests on four pillars: predictability, sustainability, profitability and de-risking (Froot, 1994). It helps management evaluate the risk-return trade-off and make effective strategic choices. De-risking provides the company with the strength and stability to handle variations in the business environment effectively.

On Enterprise Risk Management in India
Since the software sector in India has had to compete with global companies, its exposure to global best practices is significant. One area in which global best practices have been implemented is enterprise-wide risk management.


On the Short-Term Focus of Risk Management
Any successful de-risking model should be balanced, keeping in mind long-term as well as short-term, and financial as well as non-financial, aspects.

On the Infosys Model of De-risking
Nandan Nilekani explains how Infosys handles risk: "We ensure that we do not become overly dependent on any single segment of our business. For example, we had put a cap of 25% on our Y2K revenues. We try to diversify our risk by operating in multiple technologies and multiple market segments (Sadgrove, 2007). We make sure that no one customer provides more than 10% of our business. We ensure that we operate in a variety of vertical domains. The whole idea is that one should not become overly dependent on any one segment, and that we broad-base our operations so as to de-risk the company. An expansion into under-penetrated markets is a part of the de-risking strategy at Infosys. Our aim is to have multiple operations across the globe, to respond instantly to our customers' needs and to take advantage of talent pools available in cost-competitive economies (Chndanani, 2000). This strategy also reduces the risk to our operations due to changes in geo-political equations."

References
Chakravarthy, Gautham (1997), "Risk Management 2000", Business Today, November-December, pp. 115-121.
Agrawal, V.K. (1996), "Risk Management: A Conceptual Framework", Chartered Secretary, November, pp. 1170-1174.
Shashidha, K.S. (1999), "Risk Management & Corporate Policy", Chartered Secretary, November, pp. 1287-1288.
Chndanani, L.R. (2000), "Risk Management Process", Chartered Secretary, October, pp. 1276-1282.
Sarin, Rakesh K. and Weber, Martin (1999), "Risk Value Models", European Journal of Operational Research, October, pp. 135-149.
Capoor, Jagdish (2000), "Risk Management & Financial Institutions", Reserve Bank of India Bulletin, December, pp. 1323-1328.
Gupta, N.D. (2004), "Corporate Governance & Risk Management", Chartered Accountant, December, pp. 121-125.
Carson, Robert Barry (1985), Enterprise: An Introduction to Business, Harcourt Brace Jovanovich, San Diego.
Bishop, Mathew (1996), "Corporate Risk Management Survey", The Economist, February 10.
Froot, Kenneth A., Scharfstein, David S. and Stein, Jeremy C. (1994), "A Framework for Risk Management", Harvard Business Review, November-December, pp. 91-102.
"The Next Big Surprise", The Economist, October 13, p. 60.
Mahanta, Vinod (2001), "Net Impasse", Business Today, April 6, pp. 76-78.
Regester, Michael and Larkin, Judy (2007), Risk Issues & Crisis Management: A Casebook of Best Practice, Chartered Institute of Public Finance, August.
Bagchi, S.K. (2007), Operational Risk Management, Jaico Publishing House, December.
Sadgrove, Kit (2007), The Corporate Guide to Business Risk Management, Jaico Publishing House, December.


5

Non-Performing Assets in Indian Banks Prangali Godbole Shipra Agrawal Vandana Jain Pushpa Negi

A strong banking sector is important for a flourishing economy; the failure of the banking sector may have an adverse impact on other sectors. Non-performing assets (NPAs) are one of the major concerns for banks in India, as NPAs reflect the performance of banks. A high level of NPAs suggests a high probability of a large number of credit defaults, which affects the profitability and net worth of banks and erodes the value of their assets. NPA growth necessitates provisions, which reduce overall profits and shareholder value. This paper analyzes the NPAs of public sector, private sector and foreign banks in India. NPAs are considered an important parameter for judging the performance and financial health of banks, and the paper aims to identify the fundamental factors that influence NPAs and the effect of NPAs on banks' performance.

INTRODUCTION
"A man without money is like a bird without wings," says a Romanian proverb, underlining the importance of money. A bank is an establishment which deals with money, the basic functions of commercial banks being the acceptance of all kinds of deposits and the lending of money. Several challenges confront commercial banks in their day-to-day operations. The main challenge is the deployment of funds in quality assets (loans and advances); otherwise, the lending turns into non-performing assets. NPAs are considered an important parameter for judging the performance and financial health of banks, and the level of NPAs is one of the drivers of financial stability and growth of the banking sector.


Non-Performing Assets
A non-performing asset means an asset or account of a borrower which has been classified by a bank or financial institution as a sub-standard, doubtful or loss asset, in accordance with the directions or guidelines relating to asset classification issued by the RBI. An amount due under any credit facility was treated as "past due" when it had not been paid within 30 days from the due date. Owing to improvements in the payment and settlement systems, the recovery climate, upgradation of technology in the banking system, etc., it was decided to dispense with the 'past due' concept with effect from March 31, 2001. Accordingly, from that date, a non-performing asset (NPA) was an advance where:
1. interest and/or an instalment of principal remained overdue for a period of more than 180 days in respect of a term loan;
2. the account remained 'out of order' for a period of more than 180 days, in respect of an overdraft/cash credit (OD/CC);
3. the bill remained overdue for a period of more than 180 days in the case of bills purchased and discounted;
4. interest and/or an instalment of principal remained overdue for two harvest seasons, but for a period not exceeding two half-years, in the case of an advance granted for agricultural purposes; and
5. any amount to be received remained overdue for a period of more than 180 days in respect of other accounts.
With a view to moving towards international best practice and ensuring greater transparency, it was decided to adopt the '90 days overdue' norm for the identification of NPAs from the year ending March 31, 2004. Accordingly, with effect from March 31, 2004, a non-performing asset (NPA) is a loan or advance where:
1. interest and/or an instalment of principal remains overdue for a period of more than 90 days in respect of a term loan;
2. the account remains 'out of order' for a period of more than 90 days, in respect of an overdraft/cash credit (OD/CC);
3. the bill remains overdue for a period of more than 90 days in the case of bills purchased and discounted;
4. interest and/or an instalment of principal remains overdue for two harvest seasons, but for a period not exceeding two half-years, in the case of an advance granted for agricultural purposes; and
5. any amount to be received remains overdue for a period of more than 90 days in respect of other accounts.
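The days-based clauses above reduce to a simple overdue-period check. The sketch below uses illustrative function and parameter names (not an RBI-prescribed schema), and the harvest-season rule for agricultural advances is not covered, since it is counted in seasons rather than days:

```python
# Check for the days-based clauses of the 90-day NPA norm described above.
# Names are illustrative; the harvest-season rule for agricultural
# advances needs separate handling and is not covered here.
from datetime import date

NORM_DAYS = 90   # post-March-2004 norm; use 180 for the earlier regime

def is_npa(due_date: date, as_on: date, norm_days: int = NORM_DAYS) -> bool:
    """True if an amount due has stayed overdue beyond the norm period."""
    return (as_on - due_date).days > norm_days

# A term-loan instalment due on 1 January 2004:
print(is_npa(date(2004, 1, 1), date(2004, 3, 31)))  # False: exactly 90 days
print(is_npa(date(2004, 1, 1), date(2004, 4, 1)))   # True: 91 days overdue
print(is_npa(date(2004, 1, 1), date(2004, 3, 31), norm_days=180))  # False
```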

FACTORS RESPONSIBLE FOR NPAs
There are several reasons for an account becoming an NPA. The factors can be divided into two categories: (1) internal factors and (2) external factors.


Internal factors
1. Funds borrowed for a particular purpose but not used for that purpose.
2. Projects not completed on time.
3. Poor recovery of receivables.
4. Excess capacity created at non-economic cost.
5. Inability of the corporate to raise capital through the issue of equity or other debt instruments from the capital markets.
6. Business failures.
7. Diversion of funds for expansion/modernization/setting up new projects/helping or promoting sister concerns.
8. Wilful defaults, siphoning of funds, fraud, disputes, management disputes, misappropriation, etc.
9. Deficiencies on the part of the banks, viz. in credit appraisal, monitoring and follow-up; delays in the settlement of payments/subsidies by government bodies, etc.

External factors
1. A sluggish legal system: long legal tangles, changes in labour laws, lack of sincere effort.
2. Scarcity of raw material, power and other resources.
3. Industrial recession.
4. Shortage of raw material, raw material/input price escalation, power shortage, excess capacity, and natural calamities like floods and accidents.
5. Failures and non-payment/overdues in other countries, recession in other countries, externalization problems, adverse exchange rates, etc.
6. Government policies, such as changes in excise duty, import duty, etc.

EFFECT OF NON-PERFORMING ASSETS
In general, NPAs have an adverse effect on the efficiency of banks: a bank's efficiency is reflected mainly by the level of return on its assets, and NPAs depress that return in several ways.
1. Owners do not receive a market return on their capital. In the worst case, if the bank fails, owners lose their assets. In modern times, this may affect a broad pool of shareholders.
2. Depositors do not receive a market return on their savings. In the worst case, if the bank fails, depositors lose their assets or uninsured balances. Banks also redistribute losses to other borrowers by charging higher interest rates; lower deposit rates and higher lending rates repress savings and financial markets, which hampers economic growth.
3. Non-performing loans epitomize bad investment. They misallocate credit from good projects, which do not receive funding, to failed projects. Bad investment ends in the misallocation of capital and, by extension, labour and natural resources, so the economy performs below its productive potential.
4. Non-performing loans may spill over to the banking system and contract the money stock, which may lead to economic contraction. This spill-over effect can operate through illiquidity or bank insolvency: (a) when many borrowers fail to pay interest, banks may experience liquidity shortages, which can jam payments across the country; (b) illiquidity constrains banks in paying depositors, for example in cashing their pay cheques, and a banking panic can follow, with a run on banks rendering part of the national money stock inoperative, so that the money stock contracts and economic contraction follows; (c) when losses on non-performing loans exceed a bank's capital base, the bank becomes insolvent.

NPAS OF SCHEDULED COMMERCIAL BANKS OF INDIA

Non-performing assets are an important parameter in the analysis of the financial performance of banks. The total NPAs of public sector, private sector and foreign banks in India are shown in Table 1. The table reveals that the total NPAs of public sector banks (Rs. 38,601.80 crore) are higher than those of private sector banks (Rs. 9,239.48 crore) and foreign banks (Rs. 2,452 crore). With a view to ensuring the flow of credit to neglected sectors like agriculture and SSI, the concept of the priority sector was evolved and commercial banks in India were advised to grant at least 40 per cent of their total advances to borrowers in the priority sector. The priority sector NPAs of commercial banks in India are also depicted in Table 1. It is observed that, within the priority sector, the 'others' category had higher NPAs than the SSI and agriculture sectors at the end of 2007. Further, priority sector NPAs constitute 59.46 per cent of total NPAs in public sector banks, 31.22 per cent in private sector banks and 13.5 per cent in foreign banks. Thus the NPAs of public sector banks are concentrated in the priority sector (59.46%) compared with the public sector (1.27%) and the non-priority sector (39.27%), whereas the NPAs of private sector and foreign banks are concentrated in the non-priority sector. Hence, banks must take further measures to reduce NPAs in both the priority and non-priority sectors.

Table 1: Non-Performing Assets of Scheduled Commercial Banks (Year 2007)
(Amounts in Rs. crore; figures in parentheses are per cent of that bank group's total NPAs)

Bank Group           | Agriculture      | Small Scale Industries | Others            | Priority Sector (total) | Public Sector | Non-Priority Sector | Total
Public Sector Banks  | 6,506.34 (16.86) | 5,843.28 (15.14)       | 10,604.01 (27.47) | 22,953.62 (59.46)       | 490.18 (1.27) | 15,157.99 (39.27)   | 38,601.80
Private Sector Banks | 860.51 (9.31)    | 644.59 (6.98)          | 1,379.09 (14.93)  | 2,884.18 (31.22)        | 2.79 (0.03)   | 6,352.51 (68.75)    | 9,239.48
Foreign Banks        | —                | 54 (2.2)               | 277 (11.3)        | 331 (13.5)              | —             | 2,120 (86.5)        | 2,452

(Source: www.rbi.org)
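As a quick arithmetic check, the priority-sector totals and shares reported in Table 1 can be recomputed from the component amounts. The figures below are taken directly from the table (foreign banks are omitted because their agriculture and public-sector cells are blank); the dictionary layout is purely illustrative:

```python
# Recompute priority-sector NPA totals and their share of each group's total NPAs.
# Amounts are Rs. crore, copied from Table 1.
npa = {
    "Public Sector Banks":  {"agriculture": 6506.34, "ssi": 5843.28, "others": 10604.01,
                             "public": 490.18, "non_priority": 15157.99},
    "Private Sector Banks": {"agriculture": 860.51, "ssi": 644.59, "others": 1379.09,
                             "public": 2.79, "non_priority": 6352.51},
}

for group, c in npa.items():
    priority = c["agriculture"] + c["ssi"] + c["others"]      # priority-sector NPAs
    total = priority + c["public"] + c["non_priority"]        # total NPAs of the group
    share = 100 * priority / total                            # priority share (per cent)
    print(f"{group}: priority = {priority:,.2f}, total = {total:,.2f}, "
          f"share = {share:.2f}%")
```

The computed shares (59.46% and 31.22%) match the percentages printed in the table.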


The gross and net NPAs as a percentage of total assets and advances indicate the effectiveness of banks in the recovery of credit. The gross and net NPAs as a percentage of total assets are depicted in Table 2, which shows that these ratios declined continuously from 2002-03 to 2006-07 across public sector, private sector and foreign banks. This indicates that banks are steadily improving the quality of their assets and that NPAs are decreasing.

Table 2: Non-Performing Assets as Percentage of Total Assets – Scheduled Commercial Banks (In Per cent)

Bank Group           | 2002-03 | 2003-04 | 2004-05 | 2005-06 | 2006-07
Gross NPAs/Total Assets
Public Sector Banks  | 4.21    | 3.50    | 2.73    | 2.05    | 1.60
Private Sector Banks | 3.97    | 2.82    | 2.05    | 1.37    | 1.24
Foreign Banks        | 2.44    | 2.13    | 1.43    | 0.97    | 0.81
Net NPAs/Total Assets
Public Sector Banks  | 1.93    | 1.28    | 0.95    | 0.72    | 0.62
Private Sector Banks | 2.32    | 1.32    | 0.98    | 0.55    | 0.54
Foreign Banks        | 0.79    | 0.66    | 0.42    | 0.41    | 0.33

(Source: www.rbi.org)

Table 3 indicates that gross and net NPAs as a percentage of total advances declined steadily from 2002-03 to 2006-07 in public sector, private sector and foreign banks alike. The decline was sharpest in public and private sector banks, although foreign banks continued to maintain the lowest ratios. NPAs in all bank groups are decreasing very fast, which may be due to the higher provisions that commercial banks have been making, and indicates that banks are controlling NPAs more efficiently.

Table 3: Non-Performing Assets as Percentage of Total Advances – Scheduled Commercial Banks (In Per cent)

Bank Group             | 2002-03 | 2003-04 | 2004-05 | 2005-06 | 2006-07
Gross NPAs/Gross Advances
Public Sector Banks    | 9.36    | 7.79    | 5.53    | 3.64    | 2.66
Private Sector Banks   | 8.07    | 5.84    | 3.77    | 2.45    | 2.20
Foreign Banks in India | 5.25    | 4.62    | 2.85    | 1.95    | 1.77
Net NPAs/Net Advances
Public Sector Banks    | 4.53    | 2.99    | 2.06    | 1.32    | 1.05
Private Sector Banks   | 4.95    | 2.84    | 1.85    | 1.01    | 0.97
Foreign Banks in India | 1.76    | 1.48    | 0.86    | 0.83    | 0.73

(Source: www.rbi.org)
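The pace of improvement can be quantified by computing, for each bank group, the relative fall in the gross NPA ratio between 2002-03 and 2006-07 from the Table 3 figures. A small sketch (the dictionary structure is illustrative; the numbers are from the table):

```python
# Relative decline in gross NPAs / gross advances between 2002-03 and 2006-07 (Table 3).
gross_npa_ratio = {            # per cent of gross advances: (2002-03, 2006-07)
    "Public Sector Banks":  (9.36, 2.66),
    "Private Sector Banks": (8.07, 2.20),
    "Foreign Banks":        (5.25, 1.77),
}

for group, (start, end) in gross_npa_ratio.items():
    fall = 100 * (start - end) / start   # relative fall over the five-year period
    print(f"{group}: {start}% -> {end}%  ({fall:.1f}% relative decline)")
```

The relative declines come out at roughly 71.6% (public), 72.7% (private) and 66.3% (foreign), consistent with the observation that public and private sector banks improved fastest while foreign banks kept the lowest absolute levels.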


Tables 2 and 3 show that in the initial years the percentage of NPAs was higher. This was due to ineffective recovery of bank credit, lacunae in the credit recovery system, inadequate legal provisions, etc. The government has taken various steps to recover and reduce NPAs, including:

• One Time Settlement / Compromise Scheme

• Lok Adalats

• Debt Recovery Tribunals

• The Securitisation and Reconstruction of Financial Assets and Enforcement of Security Interest Act, 2002

• Corporate Reconstruction Companies

• Credit information on defaulters and the role of credit information bureaus

SUGGESTIONS

From the foregoing analysis, the following steps can be taken by Indian commercial banks to reduce non-performing assets:

By improving recovery management
Sound functioning of banks depends on the timely recovery of credit. Hence, banks should develop suitable recovery programmes: assessing and classifying overdues, monitoring accounts, keeping regular contact with borrowers, fixing recovery targets, arranging recovery camps, training personnel, and linking the marketing of produce with recovery.

By improving credit management
Management of credit is essential for the proper functioning of banks. Preparation of credit plans, appraisal of credit proposals, timely sanction and disbursement, post-sanction follow-up and need-based credit are some of the areas of credit management that need improvement in order to reduce NPAs.

By making the legal system effective
The Government of India and the RBI have initiated many legal measures to bring down NPAs in banks. However, there are flaws in each of these legal measures that need to be remedied if NPAs are to come down.

By inculcating ethics in borrowers
Ethics among borrowers is necessary to make the banking sector more effective. Many borrowers default not because of low income but due to a lack of ethics. Hence, banks should use NGOs and other voluntary organizations to educate borrowers about the importance of timely repayment of credit.


CONCLUSION

The Indian banking sector faces a serious problem of NPAs. The extent of NPAs is comparatively higher in public sector banks than in private and foreign banks (Tables 2 and 3). Similarly, it is observed that, within the priority sector, the 'others' category had higher NPAs than the SSI and agriculture sectors at the end of 2007. To improve efficiency and profitability, NPAs have to be reduced, and the government has taken various steps to this end. Zero NPAs are virtually impossible, but Indian banks can at least strive to match foreign banks in maintaining international standards.

References

Anurag (2006), Causes for Non Performing Assets of Banks, available at www.123eng.com/forum/viewtopic.php (page saved in February 2008).

ICRA (2002), Rating of Structured Obligations, Indian Credit Rating Agency, available at http://www.icraindia.com/services/rating (page saved in February 2008).

Lahiri, Ashok K. (2002), Rising NPAs: Where Has All the Money Gone?, available at http://www.rediff.com/money/2002/aug/01spec.htm (page saved in February 2008).

Monteiro, N.J. Mohan and Ananthan, B.R. (2007), NPA in Public Sector Banks: Causes and Cure, The Indian Journal of Commerce, Vol. 60, pp. 1-11.

Ramakrishnaiah, K., Saraswaty, B.C. and Chetty, S. Sudhakar (2003), A Study of NPA of Co-operative & Public Sector Banks, Journal of Banking Finance, Vol. XVI, pp. 14-18.

Reddy, Prashanth K. (2002), A Comparative Study of Non-performing Assets in India in the Global Context: Similarities and Dissimilarities, Remedial Measures, available at www.unpan1.un.org/intradoc/groups/public (page saved in February 2008).

Shiralashetu, A.S. and Akash, S.B. (2006), Management of Non-Performing Assets in Commercial Banks: Some Issues, Journal of Banking Finance, Vol. XIX, pp. 14-16.

Vallabh, Gourav, Bhatia, Anoop and Mishra, Saurabh (2007), Non-Performing Assets of Indian Public, Private and Foreign Sector Banks: An Empirical Assessment, The ICFAI Journal of Bank Management, Vol. VI, pp. 7-28.

Viswanathan, R. (2002), Myth and Reality, available at www.hinduonnet.com/thehindu/biz/2002/02/28/stories (page saved in February 2008).

Yamaguchi, Yutaka (2001), Bank of Japan Remarks at the Edinburgh Finance & Investment Seminar, available at http://www.boj.or.jp/en/press/koen068.htm (page saved in February 2008).


6

Environmental Management Accounting: An Overview

Anindita Chakraborty
Kavita Indapurkar
Garima Mathur

This paper describes the importance of promoting and implementing Environmental Management Accounting (EMA) for businesses, and current practices in EMA. EMA is the generation and analysis of both financial and non-financial information in order to support internal environmental management processes. The aim of EMA is to establish a culture of pollution prevention and waste minimization within industry. With the rise of environmental issues, it was felt that neither financial accounting nor management accounting could account for environmental issues or help businesses in their sustainable development. The need for the promotion and implementation of EMA was therefore felt. EMA has many benefits, such as increased availability of funds, improved environmental performance, a higher contribution to sustainable development, effective decision-making, increased market share, and the development of an environment-friendly industrial sector. This paper explores the benefits that arise from implementing EMA. However, the success of promoting EMA depends on developing EMA systems that are cost-effective for industry.

INTRODUCTION

Environmental issues have risen significantly in prominence during the past two decades, as could be seen from major incidents like the Bhopal gas tragedy (1984), the Chernobyl nuclear explosion (1986), the Exxon Valdez oil spill (1989), the Southern California forest fires (2007) and the Yangtze river dolphin extinction (2007). These events received worldwide media attention and increased concern over major issues such as global warming, depletion of non-renewable resources, land degradation, forest fires, loss of natural habitats, depletion of the ozone layer, and deterioration of ecological systems. This has led to a general questioning of business practices by organisations such as Friends of the Earth, Greenpeace, the United Nations and others, reflecting a recognition that an unrestrained lifestyle poses a threat to mankind and planet earth. This in turn led to global agreements to prevent future environmental damage. Such agreements include the


Montreal Protocol, the Rio Declaration, and the Kyoto Protocol. Businesses have become increasingly aware of the environmental implications of their operations, products and services. Environmental risks cannot be ignored, because poor environmental behaviour may in the long run have an adverse impact on a business and its finances. Punishments include fines, increased liability to environmental taxes, loss in land value, destruction of brand value, loss of sales, inability to secure finance, loss of insurance cover, contingent liabilities, law suits, and damage to corporate image. The precise penalties, however, depend on the laws of individual sovereign nations. All aspects of business, including accounting, are affected by environmental issues. From an accounting perspective, the initial pressures were felt in external reporting, including environmental disclosures in financial reports. However, environmental issues cannot be dealt with solely through external reporting. Environmental issues need to be managed before they can be reported on, and this requires changes to management accounting systems and the way they are applied.

ENVIRONMENTAL MANAGEMENT ACCOUNTING

Environmental Management Accounting (EMA) is defined as the identification, collection, analysis, internal reporting, and use of information on material and energy flows, environmental costs, and other costs for both conventional and environmental decision-making within an organization. Thus, EMA highlights two things:

1. Physical information on the use and flows of energy, water and materials (including wastes); and

2. Monetary information on environment-related costs, earnings and savings.

EMA is based on accounting for environmental costs and provides information on the physical flows of materials and energy. EMA information can be used for any sort of management decision-making, but it is most useful for decisions with significant environmental components or consequences.

Applicability of EMA Data

Assessment of annual environmental costs/expenditure
The unused portion of raw material that is emitted in the form of waste is not usually considered an environment-related cost, yet such costs tend to be much higher than initial estimates if they are not controlled and minimized early through the introduction of effective cleaner production. By identifying and controlling environmental costs, EMA systems support cleaner production, which saves money and improves environmental performance.

Product pricing
EMA results support product pricing through the recalculation of costs, the re-evaluation of profit margins, and the redesign of processes or products in order to reduce environmental costs.
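As an illustration of the pricing point, the following is a minimal sketch of full-cost pricing with environmental costs folded in. All figures (a Rs. 4,00,000 conventional cost, a Rs. 50,000 environmental cost, 10,000 units, a 20% margin) are hypothetical and not drawn from the chapter:

```python
# Illustrative full-cost pricing: a unit price that recovers both conventional
# and environment-related costs, plus a profit margin.
def full_cost_price(conventional_cost, environmental_cost, units, margin=0.20):
    """Return the per-unit price covering total costs plus the given margin."""
    unit_cost = (conventional_cost + environmental_cost) / units
    return round(unit_cost * (1 + margin), 2)

# Ignoring the environmental cost understates the required price:
print(full_cost_price(400000, 0, 10000))       # 48.0  (conventional costing)
print(full_cost_price(400000, 50000, 10000))   # 54.0  (EMA-based full costing)
```

The gap between the two prices is the margin-adjusted environmental cost per unit that conventional costing would silently absorb into overheads.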


Capital budgeting
EMA helps in the better identification, allocation and analysis of environmental costs, which improves the assessment of the profitability of the project undertaken (Fatima Reyes).

Design and implementation of environmental management systems
Once environmental costs and their consequences have been defined, the environmental management system can be designed.

Environmental performance evaluation, indicators and benchmarking
Environmental performance evaluation means quantifying, understanding and tracking the relevant environmental aspects of a system. It helps in identifying environmental, operational and management indicators that can be measured and tracked to facilitate continuous improvement.

Setting quantified performance targets
EMA helps in setting the organization's performance targets and in evaluating performance against them.

Better organization
EMA data is useful in cleaner production, pollution prevention, supply chain management and design-for-environment projects.

External disclosure
EMA data is used in the external disclosure of environmental expenditures, investments and liabilities.

External environmental or sustainability reporting
EMA helps in preparing environmental or sustainability reports that document the cost savings, productivity gains and improved sales resulting from implementing an EMS in an organisation.

The main steps in implementing an environmental management accounting system include securing top management support, establishing standards for the proposed system, determining the organization's significant environmental impacts, and defining environmental costs and revenues. It also comprises determining the review team, reviewing and revising (if necessary) the existing accounting system, and pilot-testing the environmental management accounting system.

Benefits of Environmental Management Accounting

Benefits of EMA to industry:

a. Aids in controlling and managing the use and flows of energy and materials, including pollution/waste.

b. Supports the discovery of new opportunities that might generate revenue through recycling or the use of waste in other activities.

c. Enables better informed pricing of products and improved sales.

d. Provides more comprehensive information for the measurement and reporting of environmental performance, which is helpful to stakeholders.

e. Increases the competitive advantage of an organization, because a visible effort to reduce environmental costs enhances the organisation's reputation.

f. An organization that manages the environmental implications of its operations finds it easier to retain good staff and to improve staff morale.

g. Efforts to reduce environmental costs and their impact on society help create a cleaner environment, which generates societal benefits.

Benefits to the Government of industry implementing EMA:

a. If organizations take greater responsibility for environmental programmes, the financial and political burden on the government lessens.

b. Implementation of EMA by organizations helps strengthen the effectiveness of existing government policies and regulations.

c. EMA data provided by industry supports government voluntary programmes that use financial and environmental metrics.

d. Assists in the framing of government policies and programmes.

Benefits of Government implementation of EMA:

a. Government EMA data can be used for environmental and other decisions within government operations.

b. Government EMA data can be used to estimate and report financial and environmental performance metrics for government organizations.

ENVIRONMENTAL PRACTICES IN INDIA AND CONCERNED REGULATION

For environmental protection, the Government of India has enacted the following laws:

Water (Prevention and Control of Pollution) Act, 1974
This Act was the first attempt to deal comprehensively with environmental issues. It prohibits the discharge of pollutants into water bodies beyond a given standard, and lays down penalties for non-compliance. The Act was amended in 1988. It set up the CPCB (Central Pollution Control Board), which lays down standards for the prevention and control of water pollution. At the state level, the SPCBs (State Pollution Control Boards) function under the direction of the CPCB and the state government.


Water (Prevention and Control of Pollution) Cess Act, 1977
This Act provides for the levy and collection of a cess on water consumed by industries and local authorities.

Air (Prevention and Control of Pollution) Act, 1981
To address the problems associated with air pollution, ambient air quality standards were established under the 1981 Act. The Act provides means for the control and abatement of air pollution, and seeks to combat it by prohibiting the use of polluting fuels and substances, as well as by regulating appliances that give rise to air pollution. Under the Act, establishing or operating any industrial plant in an air pollution control area requires consent from the state boards. The boards are also expected to test the air in air pollution control areas and to inspect pollution control equipment and manufacturing processes.

The Wildlife (Protection) Act, 1972 (amended 1991)
The Act provides protection to listed species of flora and fauna and establishes a network of ecologically important protected areas. It empowers the central and state governments to declare any area a wildlife sanctuary, national park or closed area. There is a ban on carrying out any industrial activity inside these protected areas.

Environment (Protection) Act, 1986 (EPA)
Under this Act, the central government is empowered to take the measures necessary to protect and improve the quality of the environment by setting standards for emissions and discharges; regulating the location of industries; managing hazardous wastes; and protecting public health and welfare. From time to time the central government issues notifications under the EPA for the protection of ecologically sensitive areas, or issues guidelines for matters under the EPA.

Some notifications issued under this Act are:

• Doon Valley Notification (1989), which prohibits the setting up in the Doon Valley of any industry whose daily consumption of coal/fuel exceeds 24 MT (metric tonnes) per day.

• Coastal Regulation Zone Notification (1991), which regulates activities along coastal stretches. As per this notification, dumping ash or any other waste in the CRZ is prohibited, and thermal power plants require clearance from the MoEF.

• Dhanu Taluka Notification (1991), under which the district of Dhanu Taluka has been declared an ecologically fragile region and the setting up of power plants in its vicinity is prohibited.

• The Environmental Impact Assessment of Development Projects Notification (1994, as amended in 1997). As per this notification:

  i. All projects listed under Schedule I require environmental clearance from the MoEF.

  ii. Projects under the de-licensed category of the New Industrial Policy also require clearance from the MoEF.

  iii. All developmental projects, whether or not under Schedule I, must obtain MoEF clearance if located in fragile regions.

  iv. Industrial projects with investments above Rs 500 million must obtain MoEF clearance and are further required to obtain a LOI (Letter of Intent) from the Ministry of Industry, and an NOC (No Objection Certificate) from the SPCB and the State Forest Department if the location involves forestland.

  v. The notification also stipulated procedural requirements for the establishment and operation of new power plants.

• Ash Content Notification (1997), which requires the use of beneficiated coal with ash content not exceeding 34% with effect from June 2001 (later extended to June 2002). This applies to all thermal plants located beyond one thousand kilometres from the pithead, and to any thermal plant located in an urban or sensitive area irrespective of its distance from the pithead, except pithead power plants.

• Taj Trapezium Notification (1998), which provided that no power plant could be set up within the geographical limits of the Taj Trapezium assigned by the Taj Trapezium Zone Pollution (Prevention and Control) Authority.

• Disposal of Fly Ash Notification (1999), whose main objective is to conserve topsoil, protect the environment and prevent the dumping and disposal of fly ash discharged from lignite-based power plants.

Rules for the Manufacture, Use, Import, Export and Storage of Hazardous Micro-organisms/Genetically Engineered Organisms or Cells, 1989
These rules were introduced in 1989, under the Environment (Protection) Act, 1986, to protect the environment, nature and health in connection with gene technology and micro-organisms. In 1991 the government further decided to institute a national-level scheme for environmentally friendly products called the 'ECOMARK'. Besides the above, the Recycled Plastics Manufacture and Usage Rules, 1999 were also notified under the Environment (Protection) Act, 1986.

The Environment (Protection) Rules, 1986
These rules lay down the procedures for setting standards for the emission or discharge of environmental pollutants. They prescribe the parameters under which the central government can issue orders of prohibition and restriction on the location and operation of industries in different areas.

The National Environment Appellate Authority Act, 1997
This Act provided for the establishment of a National Environment Appellate Authority to hear appeals with respect to the restriction of areas in which any industry, operation or process, or class of industries, operations or processes, may not be carried out, or may be carried out only subject to certain safeguards, under the Environment (Protection) Act, 1986.


The Coal Mines (Conservation and Development) Act, 1974
This Act was enacted for the conservation of coal during mining operations. Similar legislation for the conservation and development of oil and natural gas resources had been enacted in 1959.

Factories Act, 1948 (amended 1987)
The primary aim of the 1948 Act is to ensure the welfare of workers, covering not only their working conditions in factories but also their employment benefits. In ensuring the safety and health of workers, the Act contributes to environmental protection. It contains a comprehensive list of 29 categories of industries involving hazardous processes, defined as processes or activities where, unless special care is taken, the raw materials used, or the intermediate or finished products, by-products, wastes or effluents, would cause damage.

Public Liability Insurance Act (PLIA), 1991
The Act covers accidents involving hazardous substances and insurance coverage for them.

National Environment Tribunal Act, 1995
The Act provided for strict liability for damages arising out of any accident occurring while handling any hazardous substance, and for the establishment of a National Environment Tribunal for the effective and expeditious disposal of cases arising from such accidents, with a view to giving relief and compensation for damage to persons, property and the environment, and for matters connected therewith or incidental thereto.

INTERNATIONAL AGREEMENTS ON ENVIRONMENTAL ISSUES

India has signed a number of multilateral environmental agreements (MEAs) and conventions. An overview of some of the major MEAs, and India's obligations under them, is presented below:

Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), 1973
The aim of CITES is to control or prevent international commercial trade in endangered species or products derived from them. CITES does not seek to directly protect endangered species or to curtail development practices that destroy their habitats. Rather, it seeks to reduce the economic incentive to poach endangered species and destroy their habitat by closing off the international market. India became a party to CITES in 1976. International trade in all wild flora and fauna in general, and in species covered under CITES in particular, is regulated jointly through the provisions of The Wildlife (Protection) Act, 1972, the Import/Export Policy of the Government of India, and the Customs Act, 1962.

Montreal Protocol on Substances that Deplete the Ozone Layer (to the Vienna Convention for the Protection of the Ozone Layer), 1987
The Montreal Protocol came into force in 1989. It set targets for reducing the consumption and production of a range of ozone-depleting substances (ODS).


UN Framework Convention on Climate Change (UNFCCC), 1992
The primary goal of the UNFCCC was to stabilize greenhouse gas emissions at levels that would prevent dangerous anthropogenic interference with the global climate. The convention embraced the principle of common but differentiated responsibilities, which has guided the adoption of its regulatory structure. India signed the agreement in June 1992 and ratified it in November 1993.

Convention on Biological Diversity, 1992
The Convention on Biological Diversity (CBD) is a legally binding framework treaty that has so far been ratified by 180 countries. The CBD has three main thrust areas: conservation of biodiversity, sustainable use of biological resources, and equitable sharing of the benefits arising from their sustainable use. The convention came into force in 1993 and addresses many biodiversity issues, including habitat preservation, intellectual property rights, biosafety, and indigenous peoples' rights.

UN Convention on Desertification, 1994
Delegates to the 1992 UN Conference on Environment and Development (UNCED) recommended the establishment of an intergovernmental negotiating committee for the elaboration of an international convention to combat desertification in countries experiencing serious drought and/or desertification.

International Tropical Timber Agreement and the International Tropical Timber Organisation (ITTO), 1983 and 1994
The ITTO, established by the International Tropical Timber Agreement (ITTA), 1983, came into force in 1985 and became operational in 1987. The ITTO facilitates discussion, consultation and international cooperation on issues relating to the international trade and utilization of tropical timber and the sustainable management of its resource base.

CONCLUSION

Environmental accounting is quite a broad term, relating to the communication of environmental-performance-related information to stakeholders. It deserves attention because it gives a company information about how to use resources effectively and cost-effectively. EMA explains the financial position of the organisation insofar as it has an environmental impact and, on the other hand, takes into account the impact of organizational operations on the environmental system. The effective implementation of EMA is beneficial to both industry and the Government.

References

Deegan, Craig (2002), Environmental Management Accounting: An Introduction and Case Studies for Australia, online book available at http://www.icaa.org.au/upload/download/emap_print.pdf.

Gale, Robert (2005), Environmental Management Accounting as a Reflexive Modernization Strategy in Cleaner Production, Journal of Cleaner Production, 20, pp. 1-9.

International Federation of Accountants (2005), International Guidance Document: Environmental Management Accounting.

Kumpulainen, Anna (2005), Environmental Business Accounting in Four Finnish Case Companies: Follow-up Study Between 1996 and 2005, thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Engineering.

Schaltegger, Stefan, Hahn, Tobias and Burritt, Roger (2000), Environmental Management Accounting: Overview and Main Approaches.

United Nations Division for Sustainable Development (2001), Improving the Role of Government in the Promotion of Environmental Management Accounting.

United Nations Division for Sustainable Development, Department of Economic and Social Affairs (UN DSD/DESA) and the Division of Technology, Industry and Economics, United Nations Environment Programme (UNEP-DTIE) (2001), Promoting Environmental Management Accounting Through Government Policies and Programmes and Advancing Information for Decision-Making Through Electronic Networking and Corporate Reporting.


Kumpulainen, Anna, (2005), “Environmental Business Accounting in Four Finnish Case Companies Follow-up Study Between 1996 and 2005”, Thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Schaltegger, Stefen, Tobias Hahn, Roger Burritt, (2000), report on “ Environment Management AccountingOverview and Main Approaches”. United Nations Division for Sustainable Development, (2001), Report on “Improving the Role of Government in the Promotion of Environmental Management Accounting”. United Nations Division for Sustainable Development, Department of Economic and Social Affairs (UN DSD/DESA) and the Division of Technology, Industry and Economics, United Nations Environment Programme (UNEP-DTIE), (2001), “ Promoting Environmental Management Accounting Through Government Policies and Programmes and Advancing Information for Decision-Making Through Electronic Networking and Corporate Reporting”.


7

Determinants of Capital Structure Decisions: A Study of Indian Cement Industry

Pushpa Negi
Shweta Sharma
Shilpa Sankpal

An appropriate capital structure is a critical decision for any business organization. The decision is important not only because of the need to maximize returns to various organizational constituencies, but also because of the impact such a decision has on an organization's ability to deal with its competitive environment. This study was conducted on select listed cement companies operating in India. Based on the available financial data, the determinants of capital structure decisions in this industry have been identified. The results of the study show that only three of the six independent variables, Tax Rate, Tangibility and Liquidity, have significant coefficients, and these three variables therefore emerge as the determinants of the capital structure of cement companies in India. Multiple regression on the capital structure data has revealed that the higher the tangibility, the lower the level of debt in the capital structure, and the higher the tax rate and liquidity, the higher the level of debt in the capital structure of the companies.

INTRODUCTION

The two principal sources of finance for a company are equity and debt. What should be the proportion of equity and debt in the capital structure of the firm? One of the key issues in the capital structure decision is to evaluate the relationship between the capital structure and the value of the firm. An appropriate capital structure is a critical decision for any business organization, important not only because of the need to maximize returns to various organizational constituencies, but also because of the impact such a decision has on an organization's ability to deal with its competitive environment. The prevailing argument, originally developed by Modigliani and Miller (1958), is that an optimal capital structure exists which balances the risk of bankruptcy against the tax savings of debt. Once established, this capital structure should provide greater returns to stockholders than they would receive from an all-equity firm.


There are several views on how this decision affects the value of the firm. David Durand identified the two extreme views: (a) the Net Income (NI) Approach and (b) the Net Operating Income (NOI) Approach. According to the NI Approach, the cost of debt and the cost of equity do not change with a change in the leverage ratio. As a result, the average cost of capital declines as the leverage ratio increases, because when leverage increases, the cost of debt, which is lower than the cost of equity, gets a higher weight in the calculation of the cost of capital. Under the NOI Approach, the overall capitalization rate remains constant for all levels of financial leverage; the cost of debt also remains constant for all levels of financial leverage; and the cost of equity increases linearly with financial leverage.

Besides these two approaches, there are two others: the Traditional Approach and the Modigliani and Miller (MM) Approach. The Traditional Approach is midway between the NI and NOI approaches. Its main propositions are that the cost of debt remains almost constant up to a certain degree of leverage but rises thereafter at an increasing rate; that the cost of equity remains more or less constant, or rises gradually, up to a certain degree of leverage and rises sharply thereafter; and that the cost of capital, owing to the behaviour of the cost of debt and the cost of equity, decreases up to a certain point and remains more or less constant for moderate increases in leverage thereafter. According to the MM approach, the capital structure decision of a firm is irrelevant. This approach supports the NOI approach and provides a behavioural justification for it: capital structure is irrelevant because the arbitrage process will correct any imbalance, i.e. expectations will change and a stage will be reached where further arbitrage is not possible.

VARIABLES

Dependent Variable

Measure of Capital Structure: This study uses one measure of capital structure, the Debt-Equity Ratio (DER). The DER is computed as the ratio of total debt (long-term and short-term borrowings) to the sum of debt and total equity. Book values are used for the measurement of debt and equity, following Mallikarjunappa and Goveas (2007). This ratio is used because it reflects the firm's relative dependence on debt and equity in financing its operations.
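As a minimal sketch, the DER measure described above can be computed as follows (the figures are hypothetical):

```python
# DER = total debt / (total debt + total equity), at book value,
# as defined in the study. Illustrative figures only.

def debt_equity_ratio(long_term_debt: float, short_term_debt: float,
                      total_equity: float) -> float:
    """Debt-Equity Ratio: total borrowings over borrowings plus equity."""
    debt = long_term_debt + short_term_debt
    return debt / (debt + total_equity)

print(debt_equity_ratio(400.0, 100.0, 1500.0))  # -> 0.25
```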

Independent Variables

Measures of Tax Rate: DeAngelo and Masulis (1980) hypothesized a positive relationship between the corporate tax rate and the amount of debt employed by corporations. The tax rate for each company is measured by dividing its tax provision by profit before tax.

Measures of Tangibility: Lenders require assets that can be used as collateral to compensate for the chance of the asset-substitution problem occurring. From firms that cannot provide collateral, lenders may demand stricter lending terms, making debt financing more costly than equity financing.

Determinants of Capital Structure Decisions: A Study of Indian Cement Industry


Moreover, the asset-substitution problem is less likely to occur when firms have more assets already in place (Myers, 1977). A positive relationship between a firm's liquidation value and its level of debt is predicted by both the tax model and the agency model: the higher the value of tangible assets, the more likely that a firm will have a high leverage ratio. The proxy used in this study to measure the value of tangible assets is the ratio of net fixed assets to total assets.

Measures of Profitability: The pecking order theory suggests that firms use internal funds first and only then move to external funds. This implies that high-profit firms should have a smaller debt ratio. A positive relationship, however, is suggested on the supply side: Rajan and Zingales (1995) argue that creditors prefer to give loans to firms with high current cash flow. The proxy used is cash operating profit (profit before interest, depreciation, taxes, and amortization) to total assets.

Measures of Debt Service Capacity: High debt service capacity means that the firm can meet its interest burden even if earnings before interest and taxes suffer a considerable decline. In other words, the higher the debt coverage, the greater the likelihood of a firm having a higher debt component in its financial structure (Mittal and Singla, 1992). The study proxies debt service capacity with the ratio of profit before depreciation, interest and taxes to total interest.

Measures of Liquidity: Firms with greater liquid assets may use those assets to finance their investments. Therefore, the firm's liquidity position should exert a negative impact on its leverage ratio. Ozkan (2001) reports results showing a negative impact of liquidity on firms' borrowing decisions. The study proxies liquidity by scaling current assets by current liabilities.

Measures of Uniqueness: According to Titman and Wessels (1988), firms with relatively unique products are expected to advertise more and, in general, to spend more on promoting and selling their products. Firms can create uniqueness through R&D and marketing efforts. Uniqueness may result in specialized skills of workers and suppliers and in the supply of unique products and services. The study proxies uniqueness with the ratio of selling and distribution expenses to total assets.
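A short sketch, with hypothetical figures, of how the six proxies defined above would be computed from financial-statement items (the field names and values are illustrative, not taken from the study's data source):

```python
# Illustrative firm data, in arbitrary currency units.
firm = {
    "tax_provision": 30.0, "profit_before_tax": 100.0,
    "net_fixed_assets": 600.0, "total_assets": 1000.0,
    "pbdit": 180.0,           # profit before depreciation, interest and taxes
    "pbdita": 200.0,          # as above, also before amortization
    "total_interest": 40.0,
    "current_assets": 300.0, "current_liabilities": 200.0,
    "selling_distribution_exp": 50.0,
}

# Each proxy follows the definition given in the text.
proxies = {
    "tax_rate": firm["tax_provision"] / firm["profit_before_tax"],
    "tangibility": firm["net_fixed_assets"] / firm["total_assets"],
    "profitability": firm["pbdita"] / firm["total_assets"],
    "debt_service_capacity": firm["pbdit"] / firm["total_interest"],
    "liquidity": firm["current_assets"] / firm["current_liabilities"],
    "uniqueness": firm["selling_distribution_exp"] / firm["total_assets"],
}
for name, value in proxies.items():
    print(f"{name}: {value:.3f}")
```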

REVIEW OF LITERATURE

Several studies have been conducted to ascertain the determinants of financial structure. Pandey (2001) examined the determinants of capital structure of Malaysian companies using data from 1984 to 1999. He classified the data into four sub-periods corresponding to different stages of the Malaysian capital market, and concluded that profitability, growth, size, risk and tangibility have a significant influence on all types of debt. Frank and Goyal (2003) studied the relative importance of 39 factors in the leverage decisions of publicly traded U.S. firms. They identified top-tier and second-tier factors and distinguished them from factors that do not have a reliable relationship with leverage.


Rao and Lukose (2000) studied empirical evidence on the determinants of the capital structure of non-financial firms in India based on firm-specific data, comparing the pre-liberalization and post-liberalization periods. The pre-liberalization study period was 1990-1992 with a sample of 498 firms; the post-liberalization period was 1997-1999 with 1,411 firms. They concluded that the tax effect and the signaling effect play a role in financing decisions, whereas agency costs affect the financing decisions of big business houses and foreign firms. They also concluded that firm size and business risk became significant factors influencing capital structure during the post-liberalization period. Bharath, Pasquariello and Wu (2006) tested whether asymmetric information drives capital structure decisions. They find that information asymmetry does affect the capital structure decisions of U.S. firms over the sample period 1973-2002. Their findings are robust to controlling for conventional leverage factors (size, Q ratio, tangibility, profitability) and several firm attributes, such as funding needs, sales growth, real investment, stock return volatility, stock turnover, and intensity of insider trading. Schauten and Spronk (2006) give an overview of the different objectives and considerations in capital structure choice. They find that the capital structure decision can be framed as a multiple-criteria decision problem, for which decision support tools are widely available, and that the decision has to deal with more issues than the maximization of the firm's market value alone. Serrasqueiro and Nunes (2007) investigated which capital structure theories (pecking order, trade-off, agency and signaling) could explain the determinants of debt, using panel data covering 162 Portuguese companies for the period 1999-2003. They found that a negative relationship between profitability and debt confirms the pecking order theory, while a positive relationship between size and debt confirms the trade-off and signaling theories. The opposite relationships of tangibility with short-term and long-term debt suggest that the determinants of debt vary depending on the form of debt analyzed. Nandy (2008) analyzed the impact of macroeconomic factors on the capital structure of selected companies in India, taking four macroeconomic factors: rate of interest, stock market performance, prevailing interest rate and growth rate of GDP. She suggested that factors not previously considered by companies can be taken into account, enabling stronger, more acceptable and more profitable capital structure decisions. Pratomo and Ismail (2007) tried to prove the agency cost hypothesis for Islamic banks in Malaysia, under which high-leverage firms tend to reduce agency costs. They set the profit efficiency of a bank as an indicator of reduced agency cost and the equity ratio of a bank as an indicator of leverage; higher leverage, or a lower equity capital ratio, was associated with higher profit efficiency. Mallikarjunappa and Goveas (2007) tested the important determinants of the capital structure of companies, taking profitability, collateral value of assets, growth, debt service capacity, size, tax rate, non-debt tax shield, liquidity, uniqueness and business risk as the determinants and the debt-equity ratio as the dependent variable. Their results indicated that profitability, collateral value of assets, growth, size, tax rate and uniqueness were not significant determinants, while debt service capacity, non-debt tax shield, liquidity and business risk emerged as the important determinants of the capital structure of pharmaceutical companies in India.


Bhayani (2006) examined the empirical effects of several factors, such as profitability, liquidity, growth in assets, sales and earnings, and cost of capital, on the financial leverage ratio of 22 units of the Indian cement industry. To evaluate how these firm characteristics affect the leverage ratio, the relationship between financial leverage and accounting variables representing the characteristics of the firm was examined through correlation and regression analysis. Mefteh and Oliver (2005) analyzed the significance of manager confidence for capital structure in a sample of French firms. They decomposed a publicly available measure of industry sentiment into two components: a component common with investor confidence and a component unique to manager confidence. They found that investor confidence is negatively related to leverage and that the unique component of manager confidence is positively related to leverage. Ross (1996) showed how active risk management can be optimal for a levered firm ex ante; his model also shows how such behavior can be optimal ex post. Leland and Toft find that, with static volatility and dividend choice, credit spreads and leverage are monotonically increasing in the maturity of debt issued for optimally levered firms. The model shows why short maturities, through their ability to credibly signal intended risk management, can actually better facilitate the optimal issuance of junk debt, thus explaining the frequently observed issuance of junk debt at short maturities. Sogorb-Mira and López-Gracia (2000) explored two of the most relevant theories explaining financial policy in small and medium enterprises (SMEs): pecking order theory and trade-off theory. The results suggest that both theoretical approaches contribute to explaining capital structure in SMEs. They found evidence that SMEs attempt to achieve a target or optimum leverage (trade-off model); there is less support for the view that SMEs adjust their leverage level to their financing requirements (pecking order model). Dimitrov and Jain (2003) provided an alternative hypothesis based on firms' operating performance. They argue that if managers have private information that the firm's future operating performance may deteriorate, they will increase the debt level to prepare for it; a leverage increase is therefore a negative signal for future operating performance. Korajczyk and Levy (2003) found that macroeconomic conditions are important for the issue choice. Firms tend to time their issues to periods of favorable macroeconomic conditions, i.e., periods of higher relative security prices. Most important, firms issue equity when the stock market has experienced large run-ups and when economic prospects are good, as indicated by popular business cycle variables (e.g., interest rates, term spread or credit spread). However, the findings were not uniform across their sample.

Objectives of the Study

1. To analyze the capital structure of the Indian cement industry.

2. To determine the relationship between the Debt-Equity Ratio and Tax Rate, Tangibility, Profitability, Debt Service Capacity, Liquidity and Uniqueness.

3. To find out the effect of Tax Rate, Tangibility, Profitability, Debt Service Capacity, Liquidity and Uniqueness on the capital structure of the Indian cement industry.


RESEARCH METHODOLOGY

The study was descriptive in nature. The population comprised Indian cement companies listed on the NSE and BSE. The sample consisted of fourteen Indian cement companies selected using a purposive sampling technique. The study covers the five-year period from 2002 to 2007. Data was collected from secondary sources, i.e. the websites of NSE, BSE and moneycontrol.com.

Hypotheses of the Study

The study considered the following hypotheses:

H01: There is no relationship between Tax Rate and Debt-Equity Ratio.

H02: There is no relationship between Tangibility and Debt-Equity Ratio.

H03: There is no relationship between Profitability and Debt-Equity Ratio.

H04: There is no relationship between Debt Service Capacity and Debt-Equity Ratio.

H05: There is no relationship between Liquidity and Debt-Equity Ratio.

H06: There is no relationship between Uniqueness and Debt-Equity Ratio.

Tools for Data Analysis

Data was analyzed through multiple regression, which was used to quantify the effect of the various factors that determine the capital structure of companies.

The Empirical Model

The following regression model is used for testing the hypotheses:

Y = a + b1X1 + b2X2 + ... + b6X6

where Y = the dependent variable (debt-equity ratio); X1, ..., X6 = the independent variables; a = the constant term in the equation; b1, ..., b6 = the coefficients of the independent variables.
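A sketch of how such a model can be estimated by ordinary least squares. The data here are random placeholders standing in for the study's firm-level averages (14 companies, 6 regressors), and NumPy's `lstsq` is just one of several ways to obtain the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_vars = 14, 6                      # 14 cement companies, 6 regressors
X = rng.random((n_firms, n_vars))            # placeholder independent variables
y = rng.random(n_firms)                      # placeholder debt-equity ratios

A = np.column_stack([np.ones(n_firms), X])   # prepend a column of 1s for the constant a
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

fitted = A @ coefs
r_squared = 1 - ((y - fitted) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("intercept:", coefs[0])
print("slopes:", coefs[1:])
print("R^2:", r_squared)
```

With the study's actual data in `X` and `y`, `r_squared` would correspond to the coefficient of determination (0.715) reported in the results below; a statistics package such as SPSS or statsmodels would additionally report the t-statistics and significance levels.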

RESULTS AND DISCUSSION

In the multiple regression, the data for the dependent and independent variables of the different cement companies are averaged over the five years. The regression model with the debt-equity ratio as dependent variable has a coefficient of determination of 0.715 with a standard error of the estimate of 0.077. The regression was significant at the 10% level (ANOVA F = 2.934, p = 0.093), indicating that the independent variables taken together explain a substantial part of the variation in the debt-equity ratio. The regression coefficients relating to Tangibility, Profitability, Liquidity and Uniqueness are negative, while those of Tax Rate and Debt Service Capacity are positive. The regression coefficients of Tax Rate, Tangibility and Liquidity are significant at the 5% level, while the remaining coefficients are not.

H01: There is no relationship between Tax Rate and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = 2.522, p = 0.040) the null hypothesis is rejected and the alternate hypothesis Ha1 is accepted. The beta value (1.062) indicates a significant positive relationship between Tax Rate and the debt-equity ratio. The result of the multiple regression clearly shows that Indian cement companies with high tax liabilities utilize a greater amount of debt to take advantage of the deductibility of interest expenses.

H02: There is no relationship between Tangibility and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = -2.920, p = 0.022) the null hypothesis is rejected and the alternate hypothesis Ha2 is accepted. The beta value (-1.156) indicates a significant negative relationship between Tangibility and the debt-equity ratio. The result of the multiple regression clearly shows that Indian cement companies with a higher value of tangible assets raise a smaller amount of debt.

H03: There is no relationship between Profitability and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = -0.847, p = 0.425) the null hypothesis is accepted and the alternate hypothesis Ha3 is not accepted. The beta value (-0.300) indicates an insignificant negative relationship between Profitability and the debt-equity ratio. The results of the multiple regression clearly show that profitability does not have a significant influence on the capital structure of Indian cement companies.

H04: There is no relationship between Debt Service Capacity and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = 1.566, p = 0.161) the null hypothesis is accepted and the alternate hypothesis Ha4 is not accepted. The beta value (0.612) indicates an insignificant positive relationship between Debt Service Capacity and the debt-equity ratio. The results of the multiple regression clearly show that debt service capacity does not have a significant influence on the capital structure of Indian cement companies.

H05: There is no relationship between Liquidity and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = -3.333, p = 0.013) the null hypothesis is rejected and the alternate hypothesis Ha5 is accepted. The beta value (-0.776) indicates a significant negative relationship between Liquidity and the debt-equity ratio. The results of the multiple regression clearly show that the liquidity of Indian cement companies has a negative impact on their level of debt.

H06: There is no relationship between Uniqueness and Debt-Equity Ratio

The findings indicate that at the 5 percent level of significance (t = -1.949, p = 0.092) the null hypothesis is accepted and the alternate hypothesis Ha6 is not accepted. The beta value (-0.591) indicates an insignificant negative relationship between Uniqueness and the debt-equity ratio. The results of the multiple regression clearly show that large expenditure on research and development or on selling and distribution does not have a significant influence on the capital structure of Indian cement companies.
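The six hypothesis decisions follow mechanically from the reported significance levels; this small sketch re-applies the 5% decision rule to the p-values from the coefficients table in the annexure:

```python
# p-values as reported in the study's coefficients table.
reported_p = {
    "H01 Tax Rate": 0.040,
    "H02 Tangibility": 0.022,
    "H03 Profitability": 0.425,
    "H04 Debt Service Capacity": 0.161,
    "H05 Liquidity": 0.013,
    "H06 Uniqueness": 0.092,
}

# Reject the null hypothesis whenever p falls below the 5% threshold.
for h, p in reported_p.items():
    verdict = "reject H0" if p < 0.05 else "fail to reject H0"
    print(f"{h}: p = {p:.3f} -> {verdict}")
```

Only Tax Rate, Tangibility and Liquidity clear the 5% threshold, matching the significance pattern discussed above.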

CONCLUSION

The results of the study show that only three of the six independent variables, Tax Rate, Tangibility and Liquidity, have significant coefficients, and these therefore emerge as the determinants of the capital structure of cement companies in India. Tangibility carries the opposite sign to what was expected, while Tax Rate and Liquidity carry the expected signs. The overall result shows that Tangibility and Liquidity have an inverse relationship with the debt-equity ratio, while Tax Rate has a direct relationship. The study therefore concludes that the higher the tangibility and liquidity, the lower the level of debt in the capital structure, and the higher the tax rate, the higher the level of debt in the capital structure of the companies.

References

Buferna Fakher, Bangassa Kenbata and Lynn Hodgkinson (2005), Determinants of Capital Structure: Evidence from Libya, available at www.liv.ac.uk/managementschool/research (page saved in January 2008).

Bhayani J. Sanjay (2002), Financial Leverage and its Impact on Shareholders' Return: A Study of Indian Cement Industry, Global Journal of Finance and Economics, Vol. 14, pp. 31-42.

Christopher J. Green, Kimuyu Peter, Manos Ronny and Murinde Victor (2002), How do Small Firms in Developing Countries Raise Capital? Evidence from a Large-Scale Survey of Kenyan Micro and Small Scale Enterprises, available at www.lboro.ac.uk/department/ac/research (page saved in January 2008).

DeAngelo, H. and Masulis, R. W. (1980), Optimal Capital Structure under Corporate and Personal Taxation, Journal of Financial Economics, Vol. 8(1), pp. 3-29.

Drobetz Wolfgang, Pensa Pascal and Wanzenried Gabrielle (2007), Firm Characteristics, Economic Conditions and Capital Structure Adjustments, available at www.papers.ssrn.com (page saved in January 2008).

Francisco Sogorb-Mira and José López-Gracia (2004), Pecking Order versus Trade-Off: An Empirical Approach to the Small and Medium Enterprise Capital Structure, available at www.papers.ssrn.com (page saved in January 2008).

Gowda M. Ramachandra, Sharma V. V. S. and Muzher Syeda Hafsa (2007), Regression Analysis on the Capital Structure of Selected Diversified Companies, Journal of Accounting and Finance, Vol. 20, pp. 27-33.

Hall, G., Hutchinson, P. and Michaelas, N. (2004), Determinants of the Capital Structures of European SMEs, Journal of Business Finance & Accounting, Vol. 31, pp. 711-728.

Harris, M. and Raviv, A. (1991), The Theory of Capital Structure, The Journal of Finance, Vol. 46, pp. 297-355.

Homaifar, G., Zietz, J. and Benkato, O. (1994), An Empirical Model of Capital Structure: Some New Evidence, Journal of Business Finance and Accounting, Vol. 21, pp. 1-14.

Ivo Welch (2004), Capital Structure and Stock Returns, Journal of Political Economy, Vol. 112, pp. 106-131.

Jie Cai and Zhe Zhang (2006), Capital Structure Dynamics and Stock Returns, available at www.papers.ssrn.com (page saved in January 2008).

Mallikarjunappa T. and Goveas Carmelita (2007), Factors Determining the Capital Structure of Pharmaceutical Companies in India, Journal of Applied Finance, Vol. 13, pp. 56-70.


Murray Z. Frank and Goyal K. Vidhan (2003), Capital Structure Decisions, available at www.papers.ssrn.com (page saved in January 2008).

Myers, S. C. (1977), Determinants of Corporate Borrowing, Journal of Financial Economics, Vol. 5, pp. 147-175.

Nandy Monomita (2008), The Impact of Macroeconomic Environment Factors on Capital Structure of Indian Companies, Journal of Management Research, Vol. 7, pp. 37-49.

Otavio R. De Medeiros and Cecílio E. Daher (2004), Testing Static Tradeoff against Pecking Order Models of Capital Structure in Brazilian Firms, available at www.papers.ssrn.com (page saved in January 2008).

Pratomo, Wahyu Ario and Ismail, Abdul Ghafar (2007), Islamic Bank Performance and Capital Structure, Global Journal of Finance and Economics, Vol. 4, pp. 139-145.

Rajan, R. G. and Zingales, Luigi (1995), What Do We Know about Capital Structure? Some Evidence from International Data, Journal of Finance, Vol. 50, pp. 1421-1460.

Rao S. Narayan and Jijo Lukose P. J. (2004), An Empirical Study on the Determinants of the Capital Structure of Listed Indian Firms, available at www.papers.ssrn.com (page saved in January 2008).

Serrasqueiro Zelia and Nunes Paulo Macas (2007), The Explanatory Power of Capital Structure Theories: A Panel Data Analysis, Journal of Applied Finance, Vol. 13, pp. 23-36.

Sreedhar T. Bharath, Paolo Pasquariello and Guojun Wu (2006), Does Asymmetric Information Drive Capital Structure Decisions?, available at www.papers.ssrn.com (page saved in January 2008).

Titman, S. and Wessels, R. (1988), The Determinants of Capital Structure Choice, The Journal of Finance, Vol. XLIII, pp. 1-19.


Annexure: Regression Results

Model Summary

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .846(a)   .715       .472                .07709

a. Predictors: (Constant), VAR00006, VAR00001, VAR00005, VAR00003, VAR00004, VAR00002

ANOVA(b)

Model 1      Sum of Squares   df   Mean Square   F       Sig.
Regression   .105             6    .017          2.934   .093(a)
Residual     .042             7    .006
Total        .146             13

a. Predictors: (Constant), VAR00006, VAR00001, VAR00005, VAR00003, VAR00004, VAR00002
b. Dependent Variable: VAR00007

Coefficients(a)

             Unstandardized Coefficients   Standardized Coefficients
Model 1      B        Std. Error           Beta                        t        Sig.
(Constant)   1.161    .098                                             11.805   .000
VAR00001     .025     .010                 1.062                       2.522    .040
VAR00002     -.247    .085                 -1.156                      -2.920   .022
VAR00003     -.448    .529                 -.300                       -.847    .425
VAR00004     .015     .009                 .612                        1.566    .161
VAR00005     -.041    .012                 -.776                       -3.333   .013
VAR00006     -.953    .489                 -.591                       -1.949   .092

a. Dependent Variable: VAR00007


8

Microfinance Interventions in India: Challenges and Prospects

Shagufta Sheikh

The Indian economy has grown rapidly over the past decade, with real GDP growth averaging 6% annually. Social indicators, such as poverty, literacy and infant mortality, have also improved during the last ten years. Despite this somewhat positive outlook, over 300 million Indians are classified as living below the poverty line. The Government of India continues to make huge investments in rural India in an effort to improve the quality of life of the rural masses. Microfinance today reaches only around 20 million people through 7,000 MFIs, while about 240 million people in India are in need of microfinance. From the perspective of developing effective strategies and program management, the government supports focused initiatives in two broad areas: land and water development, and microfinance. This paper attempts to evaluate the government's efforts towards this objective. The major thrust areas include the main challenges for microfinance schemes and the contribution of SIDBI, NABARD and NGOs in this regard. The paper will give the various organizations working in this area an opportunity to become aware of each other, so that an effective strategy for microfinance programs can be planned, facilitating the objective of sustainable development.

INTRODUCTION

Since India resides in her villages, it comes as no surprise that more than three-fourths of the 'below the poverty line' population is based in rural areas. Improving the quality of their life would require meaningful funding, besides appropriate measures to improve the health and socio-economic conditions of the rural people. Against this backdrop, enhancing rural livelihoods remains a key test of the State, besides continuing to be a major theme in the government's grant portfolio. Poor people do not have access to bank loans, and private moneylenders charge very high interest rates. Many people believe that within two to three years of their first loan, borrowers come above the poverty line. Microfinance is thus regarded as the dignified way of crossing the poverty line. Microcredit came to prominence in the 1980s, although early experiments date back 30 years in Bangladesh, Brazil and a few


other countries. Theoretically, microfinance encompasses any financial service used by poor people, including those they access in the informal economy, such as loans from a village moneylender. Technically, microfinance is defined as the provision of thrift, credit and other financial services and products of very small amounts to the poor in rural, semi-urban and urban areas. Anyone availing microfinance is expected to engage in productive activities that generate income. Microfinance programs extend small loans to very poor people for self-employment projects. Key principles of microfinance were developed in 2004 by the Consultative Group to Assist the Poor (CGAP) and endorsed by the Group of Eight leaders at the G8 Summit on June 10, 2004: poor people need a variety of financial services, not just loans; microfinance can pay for itself, and must do so if it is to reach very large numbers of poor people; microfinance is about building permanent local financial institutions; the job of government is to enable financial services, not to provide them; and the key bottleneck is the shortage of strong institutions and managers.

Non-government Organizations (NGOs) and Self Help Groups (SHGs) in Microfinance

In India, there exists a variety of microfinance organizations in the government as well as non-government sectors. Leading national financial institutions like the Small Industries Development Bank of India (SIDBI), the National Bank for Agriculture and Rural Development (NABARD) and the Rashtriya Mahila Kosh (RMK) have played a significant role in making microcredit a real movement. There are a few exceptions, like PRADAN, ICECD, MYRADA and SEWA, who have been successful in replicating their experiences in other parts of the country and act as resource organizations.

Self-Employed Women's Association (SEWA)

SEWA is a trade union registered in 1972. It is an organization of poor, self-employed women workers. SEWA's main goal is to organize women workers for full employment. Full employment means employment whereby workers obtain work security, income security, food security and social security (at least healthcare, childcare and shelter). SEWA organizes women to ensure that every family obtains full employment. Supportive services like savings and credit, health care, child care, insurance, legal aid, capacity building and communication services are important needs of poor women.

BASIX BASIX is a new generation livelihood promotion institution established in 1996, working with over 436,807 households in 70 districts in the states of Andhra Pradesh, Karnataka, Tamil Nadu, Orissa, Jharkhand, Maharashtra, Madhya Pradesh, Rajasthan, Bihar, Chattisgarh, West Bengal, Delhi and Assam. Its mission is to promote a large number of sustainable livelihoods, including for the rural poor and women, through the provision of financial services and technical assistance in an integrated manner. BASIX will strive to yield a competitive rate of return to its investors so as to be able to access mainstream capital and human resources on a continuous basis.


The corporate structure of BASIX comprises a range of companies to address a diverse set of tasks:

1. Bhartiya Samruddhi Investments and Consulting Services Ltd (BASICS Ltd), the holding company, through which equity and debt investments are made in the group companies.

2. Bhartiya Samruddhi Finance Ltd (Samruddhi), an RBI-registered NBFC, owned by major financial institutions and engaged in micro-credit, retailing insurance and providing technical assistance services to some of its borrowers.

3. Krishna Bhima Samruddhi Local Area Bank Ltd, an RBI-licensed bank, providing micro-credit and savings services in three districts.

4. Indian Grameen Services (IGS), a Section 25 not-for-profit company engaged in research and development and training related to livelihoods.

5. Sarvodaya Nano Finance Ltd (Sarvodaya), an RBI-registered NBFC, owned by women's self-help groups and managed by BASICS Ltd.

SKS Microfinance empowers the poor to become economically self-reliant by providing financial services in a sustainable manner. Launched in 1998, SKS has provided over $432 million in loans and maintains loans outstanding of $182 million to over 145,000 women members in poor regions of India. Borrowers take loans for a range of income-generating activities, including livestock, agriculture, trade (such as vegetable vending), production (from basket weaving to pottery) and new-age businesses (beauty parlors to photography). Its NGO wing, the SKS Foundation, runs the Ultra Poor Program. SKS currently has 631 microfinance branches in 15 states across India. In the last year alone, SKS Microfinance has achieved nearly 170% growth, with a 99% on-time repayment rate.

RMK

The most prominent national-level microfinance apex organization providing microfinance services for women in India is the National Credit Fund for Women, or the Rashtriya Mahila Kosh (RMK). It was set up in March 1993 as an independent registered society by the Department of Women & Child Development in the Government of India's Ministry of Human Resource Development. The office of the Kosh is situated in New Delhi, and the Kosh has no branch offices. It acts as a wholesaling apex organization for channelling funds from government and donors to retailing Intermediate Microfinance Organizations (IMOs).

ASA

The Activists for Social Alternatives (ASA) is a not-for-profit Non-Governmental Organisation (NGO) registered as a public charitable trust, working for the development of the poor in the drought-prone, poverty-ridden area of central Tamil Nadu (TN). ASA started its operations in 1986 in Marungapuri block with the objective of addressing the rights of the downtrodden and the exploited, most of whom belong to the Dalit community. ASA formed sanghas and/or societies of such people and built sustainable institutions out of these groups through education, skill-based training, capacity building, lobbying and advocacy. Watershed development was the entry-point activity during the initial years.


BISWA

Bharat Integrated Social Welfare Agency (BISWA) was established as a philanthropic organization in 1994. In pursuance of its objectives, it has over time incorporated various means and methods to achieve the desired results. Promotion of Self Help Groups (SHGs), extending microfinance, encouraging microenterprise, ensuring social justice for the disabled, socio-economic rehabilitation of leprosy-cured persons and creating avenues of alternative livelihood for the poor have long been part of its work and have proven to be effective tools for poverty alleviation.

Bandhan Bandhan, meaning "togetherness", offers microfinance services to poor women in India's state of West Bengal. Founded by Mr. Chandra Shekhar Ghosh in November 2000, Bandhan helps its clients work their way out of poverty, offering micro credit loans to self-employed women living in both rural and urban areas of West Bengal. Its average client earns less than $46 a month and holds less than half an acre of land. Bandhan is India's flagship microfinance institution (MFI) for ASA Bangladesh's lending methodology, commonly referred to as an individual lending methodology.

MFI Microcredit Foundation of India is a not-for-profit Section 25 Company in Tamil Nadu dedicated to promoting entrepreneurship and community level action in rural areas as a means to sustainable economic prosperity. Today MFI works primarily with women. Through its field staff, MFI helps them form Self Help Groups (SHGs), trains them in good financial practice, facilitates access to micro credit loans, equips them with business skills and facilitates access to new markets for their products.

SAADHANA SAADHANA is a non-profit organization established in 2001 to reach out to urban and rural poor women with the specific mandate to catalyze the 'Endeavor of the Poor for Self-Sufficiency'. The founder secretary and CEO, Mr. Ernest Paul, with close to a decade and a half of experience in the domain of microfinance, started operations in the urban slums of Kurnool District, A.P. on 12th December 2001, using a fast-track model drawn upon the positive features of the 'Grameen' methodology. From its humble beginning in 2001, SAADHANA has reached out to more than 20,000 poor women within a short span of four years, scaling up its operations to surrounding towns through innovative partnerships.

GRAM VIKAS Gram Vikas is a rural development organization, working with poor and marginalized communities of Orissa since 1979, towards making sustainable improvements in the quality of life of the rural poor. Founded by a group of student volunteers from Chennai who came to Orissa under the umbrella of the Young Students Movement for Development (YSMD), Gram Vikas was registered as a society on January 22, 1979, under the Societies Registration Act, 1860. The organization currently serves a population of over 200,000 (38,000 households) across 542 villages in 17 districts of Orissa. Gram Vikas’ mission is realized through the program and process of MANTRA - Movement and Action Network for the Transformation of Rural Areas.

Microfinance Interventions in India: Challenges and Prospects


CONTRIBUTION OF SIDBI AND NABARD IN MICRO FINANCING
SIDBI Foundation for Micro Credit (SFMC)
SIDBI Foundation for Micro Credit (SFMC) was launched in January 1999 for channelizing funds to the poor, following the success of the pilot phase of the Micro Credit Scheme. SFMC is the apex wholesaler for microfinance in India, providing a complete range of financial and non-financial services such as loan funds, grant support, equity and institution-building support to retailing Micro Finance Institutions (MFIs), so as to facilitate their development into financially sustainable entities, besides developing a network of service providers for the sector. The supports offered to MFIs are:

Liquidity Management
SFMC has introduced a special short-term loan scheme, Liquidity Management Support (LMS), for its long-term partners.

Equity
Provision of equity capital to NBFC-MFIs is perceived as an emerging requirement of the microfinance sector in India. SIDBI provides equity capital to eligible institutions not only to enable them to meet capital adequacy requirements but also to help them leverage debt funds.


Quasi-equity The Transformation Loan (TL) product is envisaged as a quasi-equity type support to partner MFIs that are in the process of transforming themselves / their existing structure into a more formal and regulated set-up for exclusively handling micro finance operations in a focused manner. Being quasi-equity in nature, TL helps the MFIs not only in enhancing their equity base but also in leveraging loan funds and expanding their micro credit operations on a sustainable basis. The product has the feature of conversion into equity after a specified period of time subject to the MFI attaining certain structural, operational and financial benchmarks.

Direct Credit to Clients/Members of MFIs
SFMC would provide direct credit to SHGs, solidarity groups and individual clients of select MFIs; these borrowers would, however, be supported and supervised by the MFI. The scheme is targeted at larger MFIs which have strong credit and recovery mechanisms, MIS and internal controls. Under the arrangement, SFMC would assess the MFI's ability to manage the projected micro credit portfolio and extend credit to the borrowers of the MFI.

Micro Enterprise Loans
Institutions/MFIs with a minimum fund requirement of Rs.25 lakh p.a. are eligible under this dispensation, provided they have considerable experience in financial intermediation, in facilitating or setting up enterprises, or in providing escort services to SSI/tiny units, networking or an active interface with SSIs, along with the professional expertise and capability to handle on-lending transactions. The institutions would be selected based on their relevant experience, potential to expand, professional management, transparency in operations and well laid-out systems, besides qualified/trained manpower.

On Lending
SIDBI Foundation identifies, nurtures and develops selected potential MFIs as long-term partners and provides credit support for their micro credit initiatives. The eligible partner institutions of SIDBI Foundation, therefore, comprise large and medium-scale MFIs having a minimum fund requirement of Rs.10 lakh per annum. In all, around 100-125 MFIs are planned to be developed as long-term partners over the next 4 years. Large and medium-scale MFIs having considerable experience in managing micro credit programmes, high growth potential, a good track record and professional expertise, and committed to viability, are provided financial assistance for on-lending.

Capacity Building SFMC has decided that need-based capacity building support in the form of grant be provided to the partner MFIs, in the initial years, to enable them to expand their operations, cover their managerial, administrative and operational costs and provide technical support besides helping them achieve self-sufficiency in due course. The grant support is being provided both as technical assistance as well as operational support. The technical assistance component is directed at helping the MFIs to strengthen their microfinance programmes through inputs such as human resource development etc.


NABARD
Strengthening of rural financial institutions, which deliver credit to the sector, has been identified by NABARD as a thrust area. In order to reinforce the credit functions and to make credit more productive, NABARD has been undertaking a number of developmental and promotional activities, such as:
1. Help cooperative banks and Regional Rural Banks to prepare development action plans for themselves
2. Enter into MoUs with state governments and cooperative banks specifying their respective obligations to improve the affairs of the banks in a stipulated timeframe
3. Help Regional Rural Banks and their sponsor banks to enter into MoUs specifying their respective obligations to improve the affairs of the Regional Rural Banks in a stipulated timeframe
4. Monitor implementation of development action plans of banks and fulfilment of obligations under MoUs
5. Provide financial assistance to cooperatives and Regional Rural Banks for the establishment of technical, monitoring and evaluation cells
6. Provide Organisation Development Intervention (ODI) through reputed training institutes like the Bankers Institute of Rural Development (BIRD), Lucknow (www.birdindia.com), the National Bank Staff College, Lucknow (www.nbsc.in) and the College of Agricultural Banking, Pune
7. Provide financial support for the training institutes of cooperative banks
8. Provide training for senior and middle level executives of commercial banks, Regional Rural Banks and cooperative banks
9. Create awareness among borrowers on the ethics of repayment through the Vikas Volunteer Vahini and Farmer's Clubs
10. Provide financial assistance to cooperative banks for building improved management information systems, computerization of operations and development of human resources.

NABARD Microcredit Innovations:
1. Kisan Credit Card
2. R&D Fund
3. Swarojgar Credit Card
4. Farmer's Club Programme
5. Government Sponsored Schemes


SHG-NABARD Linkage Programme
With a total of more than 140,000 retail outlets in the commercial, cooperative and regional rural bank sectors, the rural banking network in India provides at least one outlet for every 4 villages, or about one for every 1,000 households. The programme provides a supportive subsystem to this impressively large bank infrastructure. The very poor have a felt need to save, and they also develop mature credit habits through SHGs. An optimum size of 15 to 20 members provides economies of scale, making banking with them a viable proposition. The SHG-bank linkage programme is one of the largest microfinance initiatives in the world today; by December 2001 it had benefited more than 6 million poor households.

COMMERCIAL BANKS IN MICROFINANCE: ROLE OF BANKS
The role of banks in microfinance includes: evolving an additional delivery mechanism for providing financial services to the rural poor by combining the service ethos, grassroots links and familiarity with the rural milieu possessed by microfinance institutions with the financial resources of the formal banking system; encouraging thrift and credit activity in a segment of the population which could not be reached by the institutional credit delivery system; creating future quality clients for the banking system; and generating healthy competition among institutions in the rural areas for promoting sustainability among them.

Types of Banks in Microfinance
In general, there are four main types of intermediaries:
1. Full-service private commercial banks: Most have a national presence and offer a host of financial products and services through an extensive branch network.
2. State-owned banks: These large banks provide multiple services according to government priorities. They often act as a channel for government transfers, payments or receivables and usually serve a large number of depositors.
3. Finance companies and specialized banks: These smaller financial institutions focus on a particular sector, such as housing or consumer lending, and generally have a regional rather than a national presence.
4. Micro-lending NGOs transformed into regulated banks or specialized financial institutions: These small institutions have limited regional presence and highly specialized programmes.

FINANCIAL PRODUCTS AND METHODOLOGIES
Micro lending
Over the years, NGOs in micro-finance have developed innovative lending methodologies to reach poor clients with micro-loans. Some of the principal characteristics of micro lending are:
1. Short-term, working capital loans.
2. Lending based on character, rather than collateral.
3. Sequential loans, starting small and increasing in size.
4. Group loan mechanisms as a collateral substitute.
5. Quick cash-flow analysis of businesses and households, especially for individual loans.
6. Prompt loan disbursement and simple loan procedures.
7. Frequent repayment schedules to facilitate monitoring of borrowers.
8. Interest rates considerably higher than those for larger bank customers, to cover all costs of the micro-finance programme.
9. Prompt loan collection procedures.
10. Simple lending facilities, close to clients.
11. Staff drawn from local communities, with access to information about potential clients.
12. Computerization with special software to allow loan tracking for larger programmes.

Micro deposits
The new micro-finance bankers knew relatively little about deposit mobilization methodologies that reach the low-income and/or micro-enterprise client. The benefits of micro deposits to micro-clients are:
1. Liquid passbook savings accounts and low minimum balances.
2. Conveniently located depositories.
3. Secure deposits.

Operational features of the programme:
1. Savings accounts with very low minimum balances.
2. Lower levels of interest, compared with commercial banks, because of higher administrative costs.
3. Simple, hospitable buildings and mobile units with low overhead.
4. Simple administrative forms and procedures.
5. Incentives for savings, such as lotteries.

Regulation and Supervision Legal reserve requirements In many developing countries, legal reserves on deposits are extremely high, discouraging deposit mobilization. Banks are less likely to utilize their own, scarcer funds for microenterprise programs in this environment.


Reporting requirements Bank regulatory and supervisory authorities generally require frequent and detailed reports from commercial banks. These reporting requirements were originally designed for institutions with fewer, larger transactions.

GROWTH OF MICROFINANCE IN INDIA
Microfinance is fast emerging as a hot opportunity for global players, with an estimated $20 billion to be invested globally, and around $3 billion in India, by 2010. The volume of total microfinance loans globally rose from $4 billion in 2001 to around $25 billion in 2006, according to research recently conducted by Deutsche Bank. The potential client base for microfinance in India is estimated at around 75 million households. The progress report submitted by the Microcredit Summit Campaign indicates that, as of December 31, 2004, 3,164 microcredit institutions had reached 92.27 million clients with microcredit. In India, between 1961 and 2000 the average population per bank branch fell tenfold, from about 140,000 to 14,000, and the share of institutional agencies in rural credit increased from 7.3% in 1951 to 66% in 1991. The SHG-bank linkage programme has come a long way since 1992, passing through pilot (1992-1995), mainstreaming (1995-1998) and expansion (1998 onwards) phases, and has emerged as the world's biggest microfinance programme in terms of outreach, covering 1.6 million groups as on March 2005. It occupies a pre-eminent position in the sector, accounting for nearly 80% market share in India.

CHALLENGES AHEAD OF MICRO FINANCE ORGANIZATIONS
Efficiency refers to the ability to use scarce resources most effectively to reach thousands of customers, deliver quality services, and close the biggest gaps between the supply and demand of basic financial products for the poor. In microfinance, efficiency means using the least amount of inputs - particularly staff time and capital - to produce the greatest number of loans, reach under-banked clients, and deliver a range of valued services. But there are big challenges before MFIs in India which cause inefficiency in their operations. Bernd Balkenhol (2006) examined these questions in detail and listed the challenges faced by microfinance institutions, basing his findings on a survey of 45 well-established microfinance institutions in 24 countries carried out by the ILO, the Universities of Geneva and Cambridge and the Institute of Development Studies in Geneva. The major challenges before MFIs are:

Existing challenges                                      Demanding scenario
Addicted to subsidies                                    De-addicted from capital & subsidies
Communities not aware of rights and responsibilities     Aware of rights and responsibilities
Inaccessibility and corruption                           Accessibility and fair practices
Inefficiencies                                           Efficiencies
Less productive staff                                    Increased staff productivity through training & development
Grant based (foreign/GOI)                                Regular fund sources (borrowings/deposits)
Not linked with mainstream                               Part of mainstream (banks/FIs)
Mainly focused on credit                                 Add savings and insurance
Dominated by informal suppliers                          Reduce dominance of informal unregulated suppliers


CONCLUSION
The high interest rates and forced loan recovery practices of micro-finance institutions have been held responsible for the suicides of several farmers in Andhra Pradesh; it is evident that poverty makes good business sense to MFIs, writes Sudhirendar Sharma (2006). Data from the Micro Banking Bulletin report that 63 of the world's top MFIs had an average rate of return, after adjusting for inflation and taking out any subsidies the programs might have received, of about 2.5% of total assets. This compares favorably with returns in the commercial banking sector and suggests that microfinance can be sufficiently attractive to be mainstreamed into the retail banking sector. Some also think that MFIs are exploiting the poor. In short, we have every reason to expect that programs that reach out to the very poorest micro clients can be sustainable once they have matured, if they commit to that path, and the evidence supports this position. The lessons that can be drawn from the study are: first, there should be more coordination between the various organizations working in this area; second, more awareness should be created among the target public; third, MFIs must increase their operational efficiency by adopting new technologies and try to become less reliant on subsidies and government funding; fourth, the poor save, and hence microfinance should provide both savings and loan facilities. However, attaining financial viability and sustainability is the major institutional challenge. Careful research on the demand for financing, the savings behavior of potential borrowers and their participation in determining the mix of multi-purpose loans is essential in making the concept work.

References
Balkenhol, Bernd (ed.) (2007), Microfinance and Public Policy: Outreach, Performance and Efficiency, Palgrave Macmillan.
Agrawal, Krishna Kumar and Gupta, Aman (2007), "Glimpses of Microfinance", The Journal of Accounting and Finance, Vol. 21, April-September, pp. 31-39.
Sharma, Sudhirendar (2006), "Are microfinance institutions exploiting the poor?", last updated August 23, 2006, www.worldproutassembly.org.


9

Cross Sectional Industrial Performance As a Predictor of Investor's Return: A Case Study of NSE
Simranjeet Sandhar, Navita Nathani, Umesh Holani

Industry analysis accompanies the examination of competitive behavior and allows managers to gain a better understanding of the playing field on which a group of firms competes. Without such analysis it is impossible to explain performance differences or discover opportunities and advantages for firms competing in the same industry. Performance is measured in terms of growth in sales, profits, market capitalization and the dividends offered by various industries; the industry classification is economy specific. The study analyzes whether cross-sectional differences in exchange risk sensitivity are linked to key firm-specific operational values (i.e., foreign operating profits, sales, and assets). It also examines the importance of industry competition for stock returns and reveals cross-sectional and time-dependent return regularities. Industry analysis studies employ a cross-sectional approach, examining a large sample of heterogeneous firms for indicators of future profitability. The present study is intended to analyze the relationship of investors' total returns with annual sales, market capitalization and volume. Casual observation of the structure of the investment analysis industry, however, reveals that individual analysts tend to specialize in the securities of one or more specific industries or sectors. The study comes to the conclusion that annual sales do not affect total returns, but market capitalization and volume sometimes do. The results of the study are time dependent, as any change in stock prices may change the findings.

INTRODUCTION
Fundamental analysis is the examination of the underlying forces that affect the well-being of the economy, industry groups, and companies. As with most analysis, the goal is to derive a forecast and profit from future price movements. To forecast future stock prices, fundamental analysis combines economic, industry, and company analysis to derive a stock's current fair value and forecast future value. At the industry level, there might be an examination of supply and demand forces for the products offered. Industry analysis is a type of business research that focuses on the status of an industry or an industrial sector (a broad industry classification, like "manufacturing"). The role played by industry analysis in an investment decision situation is highlighted first. A discussion of various tools used in industry analysis, i.e., cross-sectional industry performance and industry performance over time, differences in industry risk, data needs for an industry analysis, prediction about market behavior, and competition over the industry life cycle, is intended to help in the understanding of industry analysis. Analysis of industries helps both maximization of returns and minimization of the risk inherent in investment. There are two very strong reasons to do an industry analysis. First, it provides an awareness of market performance and an ability to anticipate the future of the industry. Second, it is an important part of any company's business plan. Capital providers such as financial markets and financial institutions hence require in-depth industry analysis before agreeing to participate in a company's capital structure.

Cross-sectional Industry Performance
Cross-sectional industry performance analysis usually compares the performance, measured in terms of growth in sales, profits, market capitalisation and dividends, of various industries. Similar performances during specific time periods across different industries would indicate that such industry analysis is unnecessary. As an example, assume the stock market registered a growth of 10% and an analysis of all industries showed a uniform growth of around 5% to 8%; it might then seem futile to search for an individual industry that is the best performer. On the other hand, a wide variation in growth across industries, ranging from 80% to -20% against a stock market growth rate of 50%, would require the examination of those industries that contribute heavily towards a stock market uptrend.

REVIEW OF LITERATURE
Papadogonas (2007) in his paper attempted to specify possible differences in the main factors that determine firms' profitability, using data from Greek manufacturing for 1995-1999. The econometric results indicate that size, managerial efficiency, debt structure, investment in fixed assets and sales growth affect a firm's profitability significantly. By discriminating firms according to size classes, it is possible to identify how the determinants of financial performance differ by firm size. Agiomirgianakis et al. (2006) identified the key financial determinants of firm profitability and employment growth using a panel of 3094 Greek manufacturing firms for 1995 and 1999. The results show that size, age, exports, debt structure, investment in fixed assets and profitability of assets and sales contribute significantly to firm growth. Econometric results also reveal that firm size, age, exports, sales growth, reliance on debt on fixed assets and investment growth, as well as efficient management of assets, influence profitability. Beaver (1968) investigated the relation between financial analyst earnings forecast revisions and two independent variables: (1) a measure of management earnings forecast news issued prior to analyst revisions, and (2) measures derived from the security market price reaction to that news. Results indicated that security price reactions to management forecasts are useful in predicting subsequent analyst forecast revisions. Bernard and Stober (1989) reported empirical evidence on the use of these statements in predicting stock market returns. These performance indicators are also linked to the share price through ratio analysis to evaluate performance for investment purposes; the important measures are earnings per share, dividend per share, yield on share, the price-earnings multiple and so on. Park and Lee (2003) empirically investigated the relevance of relative valuation models in the Japanese stock market. Using various multiples such as the Price Earnings Ratio (PER), Price Book Value Ratio (PBR), Price Sales Ratio (PSR) and Price Cash Flow Ratio (PCR), they studied which valuation model was best at forecasting stock prices and at identifying portfolios which generate higher returns. They found that in terms of prediction accuracy PBR is the best, while in portfolio selection the results vary across industries. Griffin and Stulz (2001) in their article systematically examined the importance of exchange rate movements and industry competition for stock returns. Common shocks to industries across countries are more important than competitive shocks due to changes in exchange rates; weekly exchange rate shocks explain almost nothing of the relative performance of industries. Amir and Lev (1996) described that many fundamental analysis studies employed a cross-sectional approach, examining a large sample of heterogeneous firms for indicators of future profitability. Casual observation of the structure of the investment analysis industry, however, reveals that individual analysts tend to specialize in the securities of one or more specific industries or sectors; attempting to identify systematic predictors of future earnings common to manufacturers, retailers, service firms, etc. may prove to be a difficult task.
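The valuation multiples mentioned in the literature above (PER, PBR, PSR, PCR) are simple ratios of price to per-share fundamentals. A minimal sketch of their computation, using purely hypothetical per-share figures:

```python
# Price multiples discussed in the literature review.
# All input figures below are hypothetical, for illustration only.

def multiples(price, eps, book_value_ps, sales_ps, cash_flow_ps):
    """Return the four price multiples for one share."""
    return {
        "PER": price / eps,            # Price Earnings Ratio
        "PBR": price / book_value_ps,  # Price Book Value Ratio
        "PSR": price / sales_ps,       # Price Sales Ratio
        "PCR": price / cash_flow_ps,   # Price Cash Flow Ratio
    }

m = multiples(price=200.0, eps=10.0, book_value_ps=80.0,
              sales_ps=50.0, cash_flow_ps=16.0)
print(m)  # {'PER': 20.0, 'PBR': 2.5, 'PSR': 4.0, 'PCR': 12.5}
```

Park and Lee's comparison amounts to asking which of these four ratios, computed across firms, best ranks subsequent returns.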

Objectives of the Study
1. To study the relationship between annual sales and total return.
2. To study the relationship between market capitalization and total return.
3. To study the relationship between volume and total return.

RESEARCH METHODOLOGY Rather than using large samples and following a rigid protocol to examine a limited number of variables, case study methods involve an in-depth, longitudinal examination of a single instance or event: a case. They provide a systematic way of looking at events, collecting data, analyzing information, and reporting the results. As a result the researcher may gain a sharpened understanding of why the instance happened as it did, and what might become important to look at more extensively in future research. This research was a case study of NSE where Nifty had been selected for the purpose of analysis.

Scope of the Study
The data considered for this study were the daily closing values of the S&P CNX NIFTY, a 50-stock market-capitalization weighted index of the National Stock Exchange of India, the nation's leading exchange in terms of volume and turnover, together with the financial statements of the companies taken from the exchange's website. The variables used in the study were profit, dividend, market capitalization, volume and sales from 1st Jan. 2002 to 31st Dec. 2006.


Tools for Data Analysis
Regression method was applied to study the effect of the independent variables on the dependent variable.
1. Independent variables: Annual sales, Market capitalization and Volume.
2. Dependent variable: Total return.

RESULTS AND DISCUSSION
Annual Sales and Total Return (2006)
Y = 104.322 + (-.019)x
The ANOVA summary (Table 1) shows an F value of .010 with a significance level of .921 (t = -.100, p = .921), which is not significant at the 5% level. The beta value (-.019) indicates an insignificant negative relationship between total return and annual sales. The regression results clearly show that annual sales do not affect the level of return, and the regression equation explains almost none of the variation in the dependent variable.
Table 1: Showing Regression between Annual Sales and Total Return (2006)

ANOVA
Model 1        Sum of Squares    df    Mean Square    F       Sig.
Regression     86.090            1     86.090         .010    .921
Residual       231973.3          27    8591.602
Total          232059.3          28
Predictors: (Constant), VAR00001 (annual sales); Dependent Variable: VAR00002 (total return)

Coefficients
Model 1        B            Std. Error    Beta     t        Sig.
(Constant)     104.322      21.332                 4.890    .000
VAR00001       -7.3E-005    .001          -.019    -.100    .921
Dependent Variable: VAR00002 (total return)
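The F statistic in Table 1 follows directly from its sums of squares and degrees of freedom (mean square = SS/df, F = MS_regression / MS_residual). A quick arithmetic check using the table's reported values:

```python
# Recompute Table 1's ANOVA statistics from its sums of squares.
ss_regression, df_regression = 86.090, 1
ss_residual, df_residual = 231973.3, 27

ms_regression = ss_regression / df_regression  # 86.090
ms_residual = ss_residual / df_residual        # ~8591.6, as reported
f_value = ms_regression / ms_residual          # ~.010, matching the table

print(round(f_value, 3))
```

The same arithmetic applies to every ANOVA table in this study; only the sums of squares and residual degrees of freedom change.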

Market Capitalization and Total Return (2006)
Y = 79.897 + .178x
The ANOVA summary (Table 2) shows an F value of .589 with a significance level of .453 (t = .768, p = .453), which is not significant at the 5% level. The beta value (.178) indicates a positive but insignificant relationship between total return and market capitalization. The regression results show that market capitalization does not significantly affect the level of return.
Table 2: Showing Regression between Market Capitalization and Total Return (2006)

ANOVA
Model 1        Sum of Squares    df    Mean Square    F       Sig.
Regression     5891.542          1     5891.542       .589    .453
Residual       180013.6          18    10000.756
Total          185905.2          19
Predictors: (Constant), VAR00001 (market capitalization); Dependent Variable: VAR00002 (total return)

Coefficients
Model 1        B        Std. Error    Beta    t        Sig.
(Constant)     79.897   33.482                2.386    .028
VAR00001       .000     .001          .178    .768     .453
Dependent Variable: VAR00002 (total return)

Volume and Total Return (2006)
Y = 71.946 + .402x
The ANOVA summary (Table 3) shows an F value of 4.809 with a significance level of .038 (t = 2.193, p = .038), which is significant at the 5% level. The beta value (.402) indicates a significant positive relationship between total return and volume: volume does affect the level of return, though the equation explains only part of the variation in the dependent variable.
Table 3: Showing Regression between Volume and Total Return (2006)

ANOVA
Model 1        Sum of Squares    df    Mean Square    F        Sig.
Regression     30406.255         1     30406.255      4.809    .038
Residual       158081.4          25    6323.255
Total          188487.6          26
Predictors: (Constant), VAR00001 (volume); Dependent Variable: VAR00002 (total return)

Coefficients
Model 1        B           Std. Error    Beta    t        Sig.
(Constant)     71.946      19.393                3.710    .001
VAR00001       3.39E-005   .000          .402    2.193    .038
Dependent Variable: VAR00002 (total return)
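Table 3's sums of squares also yield R-squared, the share of variation in total return associated with volume (R² = SS_regression / SS_total). A quick check on the reported values:

```python
# R-squared implied by Table 3's ANOVA sums of squares.
ss_regression = 30406.255
ss_total = 188487.6

# About 0.16: the volume effect is statistically significant,
# yet explains only part of the variation in total return.
r_squared = ss_regression / ss_total
print(round(r_squared, 3))
```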


Annual Sales and Total Return (2005)
Y = 111.061 + (-.146)x
The ANOVA summary (Table 4) shows an F value of .766 with a significance level of .387 (t = -.875, p = .387), which is not significant at the 5% level. The beta value (-.146) indicates an insignificant negative relationship between total return and annual sales; annual sales do not affect the level of return.
Table 4: Showing Regression between Annual Sales and Total Return (2005)

ANOVA
Model 1        Sum of Squares    df    Mean Square    F       Sig.
Regression     8982.198          1     8982.198       .766    .387
Residual       410518.0          35    11729.086
Total          419500.2          36
Predictors: (Constant), VAR00001 (annual sales); Dependent Variable: VAR00002 (total return)

Coefficients
Model 1        B         Std. Error    Beta     t        Sig.
(Constant)     111.061   22.066                 5.033    .000
VAR00001       -.001     .001          -.146    -.875    .387
Dependent Variable: VAR00002 (total return)

Market Capitalization and Total Return (2005): Y = 84.599 + 0.001x

The ANOVA summary (Table 5) shows an F value of .688 with a significance level of 41.6% (p = .416), so the relationship is not significant at the 5% level (t = .829, p = .416). The beta value (0.170) indicates an insignificant positive relationship between total return and market capitalization. The regression results show that the level of market capitalization does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 5: Regression between Market Capitalization and Total Return (2005)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     11328.509        1     11328.509     .688   .416(a)
  Residual       378944.0         23    16475.827
  Total          390272.5         24

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B        Std. Error   Beta    t       Sig.
  (Constant)     84.599   35.266               2.399   .025
  VAR00001       .001     .001         .170    .829    .416

a. Dependent Variable: VAR00002

Volume and Total Return (2005): Y = 93.437 + 0.00000403x

The ANOVA summary (Table 6) shows an F value of .107 with a significance level of 74.5% (p = .745), so the relationship is not significant at the 5% level (t = .328, p = .745). The beta value (.058) indicates an insignificant positive relationship between total return and volume. The regression results show that the level of volume does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 6: Regression between Volume and Total Return (2005)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     1328.065         1     1328.065      .107   .745(a)
  Residual       395993.8         32    12374.807
  Total          397321.9         33

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B          Std. Error   Beta    t       Sig.
  (Constant)     93.437     25.603               3.650   .001
  VAR00001       4.03E-06   .000         .058    .328    .745

a. Dependent Variable: VAR00002
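Once the unstandardized coefficients are in hand, the fitted equation can be used to generate predictions, keeping in mind that most of the fits above are statistically weak. The sketch below uses the 2006 volume model of Table 3, whose slope is significant; the traded-volume figure fed into it is purely hypothetical:

```python
# Using the 2006 volume model (Table 3): return = 71.946 + 0.0000339 * volume.
intercept = 71.946   # unstandardized constant (B) from the table
slope = 3.39e-05     # unstandardized slope (B) for volume

def predicted_return(volume):
    """Predicted total return (%) for a given traded volume (hypothetical input)."""
    return intercept + slope * volume

# A t statistic is simply a coefficient divided by its standard error,
# e.g. for the constant: 71.946 / 19.393, matching the reported t of 3.710.
t_constant = 71.946 / 19.393

print(round(predicted_return(1_000_000), 3))  # 105.846 for a hypothetical volume of 1,000,000
print(round(t_constant, 3))                   # ~3.71
```

This is illustrative only: with an R-square of roughly 0.16, predictions from this equation carry a wide margin of error.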

Annual Sales and Total Return (2004): Y = 240.297 - 0.003x

The ANOVA summary (Table 7) shows an F value of .470 with a significance level of 49.8% (p = .498), so the relationship is not significant at the 5% level (t = -.685, p = .498). The beta value (-.113) indicates an insignificant negative relationship between total return and annual sales. The regression results show that the level of annual sales does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 7: Regression between Annual Sales and Total Return (2004)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square    F      Sig.
  Regression     78003.735        1     78003.735      .470   .498(a)
  Residual       5979718          36    166103.273
  Total          6057722          37

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B         Std. Error   Beta    t       Sig.
  (Constant)     240.297   80.446               2.987   .005
  VAR00001       -.003     .004         -.113   -.685   .498

a. Dependent Variable: VAR00002

Market Capitalization and Total Return (2004): Y = 147.390 + 0.001x

The ANOVA summary (Table 8) shows an F value of .228 with a significance level of 63.7% (p = .637), so the relationship is not significant at the 5% level (t = .477, p = .637). The beta value (0.093) indicates an insignificant positive relationship between total return and market capitalization. The regression results show that the level of market capitalization does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 8: Regression between Market Capitalization and Total Return (2004)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     21004.904        1     21004.904     .228   .637(a)
  Residual       2399130          26    92274.248
  Total          2420135          27

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B         Std. Error   Beta    t       Sig.
  (Constant)     147.390   74.413               1.981   .058
  VAR00001       .001      .002         .093    .477    .637

a. Dependent Variable: VAR00002


Key Drives of Organizational Excellence

Volume and Total Return (2004): Y = 241.045 - 0.000017x

The ANOVA summary (Table 9) shows an F value of .578 with a significance level of 45.2% (p = .452), so the relationship is not significant at the 5% level (t = -.760, p = .452). The beta value (-.131) indicates an insignificant negative relationship between total return and volume. The regression results show that the level of volume does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 9: Regression between Volume and Total Return (2004)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square    F      Sig.
  Regression     103410.8         1     103410.825     .578   .452(a)
  Residual       5902817          33    178873.249
  Total          6006228          34

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B          Std. Error   Beta    t       Sig.
  (Constant)     241.045    77.051               3.128   .004
  VAR00001       -1.7E-05   .000         -.131   -.760   .452

a. Dependent Variable: VAR00002

Annual Sales and Total Return (2003): Y = 125.493 - 0.001x

The ANOVA summary (Table 10) shows an F value of .226 with a significance level of 63.7% (p = .637), so the relationship is not significant at the 5% level (t = -.475, p = .637). The beta value (-.077) indicates an insignificant negative relationship between total return and annual sales. The regression results show that the level of annual sales does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 10: Regression between Annual Sales and Total Return (2003)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     2260.332         1     2260.332      .226   .637(a)
  Residual       380005.8         38    10000.152
  Total          382266.1         39

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B         Std. Error   Beta    t       Sig.
  (Constant)     125.493   19.420               6.462   .000
  VAR00001       -.001     .001         -.077   -.475   .637

a. Dependent Variable: VAR00002

Market Capitalization and Total Return (2003): Y = 108.283 + 0.002x

The ANOVA summary (Table 11) shows an F value of 1.192 with a significance level of 28.8% (p = .288), so the relationship is not significant at the 5% level (t = 1.092, p = .288). The beta value (0.237) indicates an insignificant positive relationship between total return and market capitalization. The regression results show that the level of market capitalization does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 11: Regression between Market Capitalization and Total Return (2003)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F       Sig.
  Regression     15445.066        1     15445.066     1.192   .288(a)
  Residual       259047.0         20    12952.350
  Total          274492.1         21

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B         Std. Error   Beta    t       Sig.
  (Constant)     108.283   33.882               3.196   .005
  VAR00001       .002      .002         .237    1.092   .288

a. Dependent Variable: VAR00002

Volume and Total Return (2003): Y = 138.213 - 0.0000043x

The ANOVA summary (Table 12) shows an F value of 2.402 with a significance level of 13% (p = .130), so the relationship is not significant at the 5% level (t = -1.550, p = .130). The beta value (-.253) indicates an insignificant negative relationship between total return and volume. The regression results show that the level of volume does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 12: Regression between Volume and Total Return (2003)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F       Sig.
  Regression     22642.232        1     22642.232     2.402   .130(a)
  Residual       329895.3         35    9425.581
  Total          352537.6         36

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B          Std. Error   Beta    t        Sig.
  (Constant)     138.213    17.569               7.867    .000
  VAR00001       -4.3E-06   .000         -.253   -1.550   .130

a. Dependent Variable: VAR00002

Annual Sales and Total Return (2002): Y = 180.342 - 0.002x

The ANOVA summary (Table 13) shows an F value of .572 with a significance level of 45.4% (p = .454), so the relationship is not significant at the 5% level (t = -.756, p = .454). The beta value (-.122) indicates an insignificant negative relationship between total return and annual sales. The regression results show that the level of annual sales does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 13: Regression between Annual Sales and Total Return (2002)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     24533.737        1     24533.737     .572   .454(a)
  Residual       1630328          38    42903.365
  Total          1654862          39

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B         Std. Error   Beta    t       Sig.
  (Constant)     180.342   41.316               4.365   .000
  VAR00001       -.002     .003         -.122   -.756   .454

a. Dependent Variable: VAR00002


Market Capitalization and Total Return (2002): Y = 63.593 + 0.006x

The ANOVA summary (Table 14) shows an F value of 6.311, significant at the 2.1% level (p = .021) and therefore significant at the 5% level (t = 2.512, p = .021). The beta value (0.490) indicates a significant positive relationship between total return and market capitalization. The regression results show that the level of market capitalization significantly affects the level of return, although the equation explains only about 24% of the variation in the dependent variable (R-square = 136001.6/567024.7, approximately 0.24).

Table 14: Regression between Market Capitalization and Total Return (2002)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square    F       Sig.
  Regression     136001.6         1     136001.587     6.311   .021(a)
  Residual       431023.1         20    21551.153
  Total          567024.7         21

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B        Std. Error   Beta    t       Sig.
  (Constant)     63.593   44.986               1.414   .173
  VAR00001       .006     .003         .490    2.512   .021

a. Dependent Variable: VAR00002

Volume and Total Return (2002): Y = 184.455 - 0.000012x

The ANOVA summary (Table 15) shows an F value of .463 with a significance level of 50.1% (p = .501), so the relationship is not significant at the 5% level (t = -.680, p = .501). The beta value (-.116) indicates an insignificant negative relationship between total return and volume. The regression results show that the level of volume does not significantly affect the level of return, and the equation explains very little of the variation in the dependent variable.

Table 15: Regression between Volume and Total Return (2002)

ANOVA(b)
Model 1          Sum of Squares   df    Mean Square   F      Sig.
  Regression     21568.778        1     21568.778     .463   .501(a)
  Residual       1585541          34    46633.563
  Total          1607110          35

a. Predictors: (Constant), VAR00001
b. Dependent Variable: VAR00002

Coefficients(a)
Model 1          B          Std. Error   Beta    t       Sig.
  (Constant)     184.455    39.915               4.621   .000
  VAR00001       -1.2E-05   .000         -.116   -.680   .501

a. Dependent Variable: VAR00002

Multiple Regression (2006)

The coefficients summary (Table 16) shows that for annual sales t = -1.159 (p = .266), for market capitalization t = .738 (p = .473), and for volume t = 1.708 (p = .110). The beta values for sales, market capitalization and volume (-0.364, 0.234 and 0.419) indicate a negative relationship between total return and annual sales and positive relationships between total return and market capitalization and between total return and volume, but none of the three relationships is significant at the 5% level.

Table 16: Multiple Regression between All the Variables (2006)

Coefficients(a)
Model 1          B          Std. Error   Beta    t        Sig.
  (Constant)     75.263     37.702               1.996    .066
  VAR00001       -.002      .001         -.364   -1.159   .266
  VAR00002       .001       .001         .234    .738     .473
  VAR00003       3.56E-05   .000         .419    1.708    .110

a. Dependent Variable: VAR00004

Multiple Regression (2005)

The coefficients summary (Table 17) shows that for annual sales t = -1.509 (p = .149), for market capitalization t = 1.036 (p = .314), and for volume t = -.018 (p = .986). The beta values (-0.357, 0.249 and -0.004) indicate negative relationships between total return and annual sales and between total return and volume, and a positive relationship between total return and market capitalization, but none of the three relationships is significant at the 5% level.

Table 17: Multiple Regression between All the Variables (2005)

Coefficients(a)
Model 1          B          Std. Error   Beta    t        Sig.
  (Constant)     126.791    50.448               2.513    .022
  VAR00001       -.002      .001         -.357   -1.509   .149
  VAR00002       .001       .001         .249    1.036    .314
  VAR00003       -3.0E-07   .000         -.004   -.018    .986

a. Dependent Variable: VAR00004


Multiple Regression (2004)

The coefficients summary (Table 18) shows that for annual sales t = -0.589 (p = .562), for market capitalization t = 0.543 (p = .593), and for volume t = -0.576 (p = .571). The beta values (-0.150, 0.134 and -0.130) indicate negative relationships between total return and annual sales and between total return and volume, and a positive relationship between total return and market capitalization, but none of the three relationships is significant at the 5% level.

Table 18: Multiple Regression between All the Variables (2004)

Coefficients(a)
Model 1          B          Std. Error   Beta    t       Sig.
  (Constant)     207.950    99.086               2.099   .049
  VAR00001       -.003      .005         -.150   -.589   .562
  VAR00002       .002       .003         .134    .543    .593
  VAR00003       -1.1E-05   .000         -.130   -.576   .571

a. Dependent Variable: VAR00004

Multiple Regression (2003)

The coefficients summary (Table 19) shows that for annual sales t = -0.910 (p = .377), for market capitalization t = 1.206 (p = .246), and for volume t = -0.798 (p = .438). The beta values (-0.239, 0.310 and -0.200) indicate negative relationships between total return and annual sales and between total return and volume, and a positive relationship between total return and market capitalization, but none of the three relationships is significant at the 5% level.

Table 19: Multiple Regression between All the Variables (2003)

Coefficients(a)
Model 1          B          Std. Error   Beta    t       Sig.
  (Constant)     146.510    44.797               3.271   .005
  VAR00001       -.002      .002         -.239   -.910   .377
  VAR00002       .003       .002         .310    1.206   .246
  VAR00003       -7.7E-06   .000         -.200   -.798   .438

a. Dependent Variable: VAR00004

Multiple Regression (2002)

The coefficients summary (Table 20) shows that for annual sales t = -1.351 (p = .197), for market capitalization t = 2.361 (p = .032), and for volume t = -0.712 (p = .487). The beta values (-0.287, 0.493 and -0.151) indicate insignificant negative relationships between total return and annual sales and between total return and volume, but a significant positive relationship between total return and market capitalization.

Table 20: Multiple Regression between All the Variables (2002)

Coefficients(a)
Model 1          B          Std. Error   Beta    t        Sig.
  (Constant)     120.559    59.273               2.034    .060
  VAR00001       -.004      .003         -.287   -1.351   .197
  VAR00002       .006       .003         .493    2.361    .032
  VAR00003       -9.7E-06   .000         -.151   -.712    .487

a. Dependent Variable: VAR00004
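The multiple regressions in Tables 16-20 were produced with SPSS, but the underlying ordinary-least-squares fit can be sketched in plain Python by solving the normal equations (X'X)b = X'y with Gaussian elimination. The tiny dataset below is hypothetical, constructed so the true coefficients (intercept 2, slopes 3 and -1) are known exactly; it is not the study's data:

```python
# OLS for y = b0 + b1*x1 + b2*x2 via the normal equations (hypothetical data).

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination with pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [v - factor * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(xs, y):
    """Return OLS coefficients; each row of xs is prefixed with 1 for the intercept."""
    rows = [[1.0] + list(x) for x in xs]
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(xtx, xty)

# Hypothetical observations generated from y = 2 + 3*x1 - x2 (no noise).
xs = [(1, 2), (2, 1), (3, 4), (4, 3), (5, 5)]
y = [2 + 3 * x1 - x2 for x1, x2 in xs]

b0, b1, b2 = ols(xs, y)
print(round(b0, 6), round(b1, 6), round(b2, 6))  # recovers 2.0, 3.0, -1.0
```

With noise-free data the fit recovers the generating coefficients exactly, which is a convenient way to check the solver before applying it to real observations.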

ANALYSIS OF GROWTH RATE

The NSE 50 companies formed 20 industries over the period 2002-2006. The study examined whether each of these industries grew, fluctuated, or declined during this period.

Growth Industries

The aluminium industry, which includes Hindalco Industries Ltd and National Aluminium Co. Ltd, grew from 3.006% in 2003 to 92.52% in 2006. Similarly, the banking sector showed a tremendous increase in its growth rate, from 5.03% in 2003 to 41.43% in 2006, although it experienced a number of rises and falls in between. The media and entertainment industry registered a growth rate of more than 200% in 2006 from its base of 43.6% in 2003. The refineries, steel and steel services, and telecommunication industries reached growth rates of about 200%, 100% and 100% respectively by 2006, from base values of 54.47%, 33.35% and 6.40% in 2003.

Fluctuating Industries

Growth rate affects investment decisions, and the automobile industry is a sensitive sector: technological changes arrive regularly, so investors lose confidence because the risk involved in investing is higher. The growth rate of the automobile industry has not been steady but has fluctuated over 2003-2006. The oil exploration industry also has a fluctuating growth rate because it is heavily dependent on exports: its growth rate was 1.44% in 2003, increased to about 26% the next year, declined by about 8% in 2005, and finally stood at a total growth of 17.09% in 2006. The cement industry has shown dramatic changes in its growth rate: in the odd years (2003 and 2005) it registered high growth rates of 36.02% and 31.78% respectively, while in 2004 and 2006 the rates were only 5.70% and 11.50%. Initially this industry grew at a rapid rate, and then the growth rate suddenly declined to a lower value.


In the initial years the growth rate of the personal care industry was high, but in 2006 it fell abruptly to a very low value. In those early years the rate was volatile, though the changes were small. The remaining industries listed on the NSE saw their growth rates fall; some showed strong growth in 2003-2004 but had declined to very low values by 2006.

Declining Industries: The industries whose growth rates declined are listed below, with their growth rates in 2003 and 2006 respectively:

Cigarette industry: 18.83% to 9.33%
Computer application: 46.30% to 3.17%
Diversified: 18.75% to 15.46%
Electrical equipments: 28.80% to 0.50%
Gas industry: 70.97% to 24.41%
Petrochemical: 24.81% to 12.46%
Pharmaceuticals: 13.11% to 22.94%
Power: 33.85% to 19.45%
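The three-way grouping above (growth, fluctuating, declining) reflects the authors' judgment rather than a stated formal rule, but one possible heuristic can be sketched: call an industry "growth" when its 2006 rate exceeds its 2003 base, "fluctuating" when the year-to-year direction reverses repeatedly, and "declining" otherwise. The function and its reversal threshold are illustrative assumptions, not the chapter's method:

```python
# Illustrative heuristic (an assumption, not the authors' rule) for classifying
# a series of yearly growth rates, ordered oldest to newest.

def classify(rates):
    """Classify a growth-rate series as 'growth', 'fluctuating' or 'declining'."""
    diffs = [b - a for a, b in zip(rates, rates[1:])]
    # Count direction reversals: adjacent year-over-year changes with opposite signs.
    reversals = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    if rates[-1] > rates[0]:
        return "growth"
    return "fluctuating" if reversals >= 2 else "declining"

# Series taken from Annexure Table 1, ordered 2003 -> 2006.
print(classify([3.006, 33.64, 20.35, 92.52]))  # Aluminium -> growth
print(classify([36.02, 5.70, 31.78, 11.50]))   # Cement -> fluctuating
print(classify([18.83, 34.43, 27.13, 9.33]))   # Cigarette -> declining
```

The heuristic does not reproduce every judgment in the text: the automobile industry, for instance, shows a net rise over the period yet is treated as fluctuating by the authors, which is why this is offered only as a sketch.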

Implications of the Study

Cross-sectional industry performance analysis aims to find out whether rates of return varied across industries during a given time period. Such analysis reveals the extent of risk inherent in an investment environment, which is useful in choosing industries that are likely to prove successful investments. It provides an awareness of market performance and an ability to anticipate the future of an industry, and it is an important part of any company's business plan. Capital providers such as financial markets and financial institutions therefore require an industry analysis before agreeing to participate in a company's capital.

CONCLUSION

Industry analysis usually includes a review of an industry's recent performance, its current status, and its outlook for the future. There is a need to analyze the current status of the industry and forecast the conditions in which business operates now or will operate in the future. There are two strong reasons to do an industry analysis. First, it provides an awareness of market performance and an ability to anticipate the future of the industry. Second, it is an important part of any company's business plan. Hence an investor requires an industry analysis before agreeing to participate in a company's capital. This study has produced a standardized and reliable measure to evaluate the effect of the independent variables on the dependent variable. In this study, annual sales, market capitalization and volume were the independent variables and total return was the dependent variable. The results show that, in most of the years studied, the levels of annual sales, market capitalization and volume did not significantly affect the level of returns; the exceptions were volume in 2006 and market capitalization in 2002, where the effect was significant.




Annexure

Table 1: Calculation of Growth Rate

Name of the Industry            2006      2005      2004      2003
Aluminium                       92.52%    20.35%    33.64%    3.006%
Automobile                      42.63%    37.63%    101.92%   26.24%
Banks                           41.43%    3.759%    11.66%    5.03%
Cement                          11.50%    31.78%    5.70%     36.02%
Cigarette                       9.33%     27.13%    34.43%    18.83%
Computer                        3.17%     72.08%    158.77%   46.30%
Diversified                     15.46%    47.93%    19.42%    18.75%
Electrical equipments           .507%     20.11%    22.85%    28.80%
Engineering equipments          -         -         -         100%
Gas                             24.41%    20.11%    23.31%    70.97%
Media and entertainment         230.7%    -         290.66%   43.6%
Oil exploration/production      17.09%    16.20%    26.09%    1.44%
Personal care                   8.44%     46.83%    30.20%    49.58%
Petrochemicals                  12.46%    20.80%    27.54%    24.81%
Pharmaceuticals                 22.94%    82.14%    303.96%   13.11%
Power                           19.45%    19.58%    90.40%    33.85%
Refineries                      196.9%    74.21%    40.70%    54.47%
Steel & steel services          100%      40.30%    33.16%    33.35%
Telecom services                100%      18.72%    19.73%    6.40%
Travel and transport            -         -         -         100%

10

Microfinance

Soma Sharma

In the development paradigm, microfinance has evolved as a need-based policy and programme to cater to the so far neglected target groups (women, poor, rural, deprived, etc.). Its evolution is based on the concern of all developing countries for empowerment of the poor and the alleviation of poverty. Development organisations and policy makers have included access to credit for poor people as a major aspect of many poverty alleviation programmes. Microfinance programs have, in the recent past, become one of the most promising ways to use scarce development funds to achieve the objectives of poverty alleviation. Furthermore, certain microfinance programmes have gained prominence in the development field and beyond. The basic idea of microfinance is simple: if poor people are provided access to financial services, including credit, they may very well be able to start or expand a microenterprise that will allow them to break out of poverty. There are many features to this seemingly simple proposition which are quite attractive to the potential target group members, government policy makers, and development practitioners. For the target group members, the most obvious benefit is that microfinance programmes may actually succeed in enabling them to increase their income levels. Furthermore, the poor are able to access financial services which previously were exclusively available to the upper and middle income population. Finally, the access to credit and the opportunity to begin or to expand a micro-enterprise may be empowering to the poor, especially in comparison to other development initiatives which often treat these specific target group members as recipients.

INTRODUCTION Microfinance is expected to play a significant role in poverty alleviation and development. The need, therefore, is to share experiences and materials which will help not only in understanding successes and failures but also provide knowledge and guidelines to strengthen and expand microfinance programmes. In India, a variety of microfinance schemes exist and various approaches have been practiced by both GOs and NGOs. In the development sector, credit has been viewed as one of the missing inputs and therefore, a growing emphasis on re-formulating and re-strengthening micro credit programmes is observed. There are examples of spectacular successes and there are also examples of not-so-successful programmes which experienced high default rates and were unable to provide financial


services in the long run. Ultimately the aim is to empower the poor and mainstream them into development. Across the different approaches to microfinance, the process and stages remain more or less the same. The development process of a typical microfinance intervention can be understood with the help of Chart 1; the ultimate aim is to attain social and economic empowerment. Successful intervention therefore depends on how carefully each of these stages is handled and on the capabilities of the implementing organizations in achieving the final goal. For example, if credit delivery takes place without consolidation of SHGs, there may be problems of self-sustainability and recovery. A number of schemes run by banks and by central and state governments offer direct credit to potential individuals without requiring them to join SHGs. The compilation and classification of the communication materials in the directory is based on this development process.

Chart 1: Development through Microfinance


CLASSIFYING MICROFINANCE INTERVENTIONS

Several microfinance implementing organizations provide small loans in India, and some have successfully expanded their services to thousands of borrowers. Given that most of these borrowers would not otherwise have had access to formal financial institutions, that many use the loans to enter or expand informal-sector micro enterprises, and that the informal sector continues to be an important source of livelihood for many poor people, these Microfinance Organizations (MFOs) may well have had a major impact on improving the living standards of millions of poor persons as well as on promoting economic growth. The term MFO is used here for all types of implementing organizations facilitating savings, credit and related financial activities at the individual and/or group level, without going into the legal and technical distinctions among them. Some of these organizations have evolved from small NGOs into important providers of financial services. Given the potentially important role that MFOs play in deepening the benefits of economic growth, it is necessary to strengthen them by providing experience-sharing opportunities, materials and training. Furthermore, the relative success of many MFOs soundly refutes the claims that "the poor are non-bankable" or that MFOs are a waste of scarce development funds. In fact, it would be difficult to find another type of development initiative that has been as effective on such a large scale in recent years. In India, a variety of microfinance organizations exist in both the government and non-government sectors. Leading national financial institutions like the Small Industries Development Bank of India (SIDBI), the National Bank for Agriculture and Rural Development (NABARD) and the Rashtriya Mahila Kosh (RMK) have played a significant role in making micro credit a real movement.
In India, implementing organizations range from very small to moderately big organizations involved in savings and/or credit activities for individuals and groups, and they adopt a variety of approaches. Most of these organizations operate within a limited geographical range. There are a few exceptions, such as PRADAN, ICECD, MYRADA and SEWA, which have successfully replicated their experiences in other parts of the country and act as Resource Organizations. Many organizations also work with SHGs not only for credit but for other purposes such as watershed management and agriculture. Microfinance interventions can be classified by their span of activity, their source of funds, the route through which credit reaches the poor, or their coverage. The most common approach, however, is to provide credit through Self-Help Groups, making the SHG the focal point through which all credit is routed to members. Almost all national funding organizations (NABARD, RMK) as well as other government schemes advocate forming Self-Help Groups and providing or linking credit through them. However, many organizations that provide individual finance directly also exist.

THE PARTICIPATING ORGANIZATIONS The preparation of this resource directory covered about 450 organizations involved in microfinance activities in 11 states of India. These organizations are classified in the following categories to indicate the functional aspects covered by them within the microfinance framework. The aim, however, is not to "typecast" an organization, as these have many other activities within their scope:


Organizations Implementing Microfinance Activities

Organizations implementing microfinance activities can be categorized into three basic groups:

a) Organizations which directly lend to specific target groups and carry out all related activities such as recovery, monitoring and follow-up; some of these organizations are graduating to become exclusive MFOs, but such cases are few.

b) Organizations which only promote and provide linkages to SHGs and are not directly involved in micro-lending operations.

c) Organizations which are dealing with SHGs and plan to start microfinance-related activities.

Resource Organizations or Support Agencies

These are the organizations that provide support to implementing organizations. The support may be in terms of resources or training for capacity building, counseling, networking, etc. They operate at the state/regional or national level, and may or may not be directly involved in microfinance activities themselves. A few associations to bring such MFOs onto one platform have also been initiated in India. Experience sharing through newsletters and/or meetings, seminars and training are the methods adopted by these associations/collectives to support implementing organizations.

Formal Financial Institutions - Banks

Commercial banks, Gramin banks and rural banks provide funds to SHGs and also operate their accounts. Funding agencies and development institutions channelise credit through these FIs. Building gender sensitivity and developmental dimensions amongst these agencies is a major need. Banks prefer to route credit through SHGs, though they also lend directly to individuals.

Development Agencies/Nodal Agencies

In India, development agencies like NABARD, SIDBI and RMK provide funds for credit. They support MFOs and have separate allocations for SHGs and micro-credit. These organizations have developed guidelines and training materials to help MFOs implement the micro-credit activities covered under their purview.

BOUNDARIES OF MICROFINANCE

Theoretically, microfinance encompasses any financial service used by poor people, including those they access in the informal economy, such as loans from a village moneylender. In practice, however, the term is usually used only to refer to institutions and enterprises whose goals include both profitability and reducing the poverty of their clients. Microfinance services are needed everywhere, including the economically developed world. However, in developed economies intense competition within the financial sector, combined with a diverse mix of different types of financial institutions with different missions, ensures that most people have access to some financial services. Efforts to transfer microfinance innovations such as solidarity lending from developing countries to developed ones have met with little success. Microfinance can also be distinguished from charity: it is better to provide grants to families who are destitute, or so poor that they are unlikely to generate the cash flow required to repay a loan. This situation can occur, for example, in a war zone or after a natural disaster.

Key Drivers of Organizational Excellence


THE MICROFINANCE CHALLENGE

Traditionally, banks have not served poor clients. Banks must incur substantial costs in managing a client account, regardless of how small the sums of money involved. For example, the total revenue from delivering one hundred loans worth $1,000 each will not differ greatly from the revenue that results from delivering one loan of $100,000, but it takes nearly a hundred times as much work and cost to manage a hundred loans as it does to manage one. Similar cost structures hamper efforts to deliver other financial services to poor people. There is a break-even point in loan and deposit sizes below which banks lose money on each transaction they make; poor people usually fall below it. In addition, most poor people have few assets that can be secured by a bank as collateral. As documented extensively by Hernando de Soto and others, even if they happen to own land in the developing world, they may not have effective title to it. This means that the bank will have little recourse against defaulting borrowers.

Seen from a broader perspective, it has long been accepted that the development of a healthy national financial system is an important goal of, and an important catalyst for, the broader goal of national economic development (see, for example, Alexander Gerschenkron, 1962; Paul Rosenstein-Rodan, 1969; Joseph Schumpeter, 1949; Anne Krueger, 1974). But national planners and experts focus their attention mainly on developing a commercial banking sector dealing in high-value transactions, and often neglect the delivery of services to households of limited means, even though these households comprise the large majority of their populations.

Because of these difficulties, when poor people borrow they often turn to their relatives or to the ubiquitous local moneylender. Moneylenders often charge over 10% a month, or even a few percentage points a day, for their money. While they are often demonized and accused of usury, their services are convenient and fast, and they can be very flexible when borrowers run into problems. Hopes of quickly putting them out of business have proven unrealistic, even in places where microfinance institutions are very active. Over the past centuries, practical visionaries, from the Franciscan monks who founded the community-oriented pawnshops of the fifteenth century, to Friedrich Wilhelm Raiffeisen, founder of the nineteenth-century credit union movement, and the founders of the microcredit movement in the 1970s (such as Muhammad Yunus), have tested practices and built institutions designed to bring the kinds of opportunities and risk management tools that financial services offer to the doorsteps of poor people. Much progress has been made, but the problem has not been solved, and the overwhelming majority of people who earn less than $1 a day, especially in rural areas, continue to have no practical access to formal sector finance.
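The gap between a quoted monthly rate and the true annual cost is easy to understate. As a hedged illustration (the 10%-a-month figure comes from the text above, but the monthly-compounding assumption is ours; real informal-sector terms vary widely), a short calculation shows how quickly such rates compound:

```python
def effective_annual_rate(monthly_rate: float) -> float:
    """Effective annual rate implied by a flat monthly rate, assuming
    monthly compounding (an assumption; informal loan terms vary widely)."""
    return (1 + monthly_rate) ** 12 - 1

# 10% a month compounds to far more than the naive 10% x 12 = 120% a year:
print(f"{effective_annual_rate(0.10):.1%}")  # prints 213.8%
```

Even the simple, non-compounded figure of 120% a year dwarfs typical formal-sector lending rates, which is one reason microfinance rates of a few percent a month can still look cheap by comparison.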

FINANCIAL NEEDS OF POOR PEOPLE

In developing economies, and particularly in rural areas, many activities that would be classified in the developed world as financial are not monetized: that is, money is not used to carry them out. Almost by definition, poor people have very little money. But circumstances often arise in their lives in which they need money or the things money can buy. In his book 'The Poor and Their Money', Stuart Rutherford cites several types of needs:

1. Lifecycle Needs: such as weddings, funerals, childbirth, education, home-building, widowhood and old age.

2. Personal Emergencies: such as sickness, injury, unemployment, theft, harassment or death.

3. Disasters: such as fires, floods, cyclones and man-made events like war or the bulldozing of dwellings.

4. Investment Opportunities: expanding a business, buying land or equipment, improving housing, securing a job (which often requires paying a large bribe), etc.

Poor people find creative and often collaborative ways to meet these needs, primarily through creating and exchanging different forms of non-cash value. Common substitutes for cash vary from country to country but typically include livestock, grains, jewellery and precious metals.

WAYS POOR PEOPLE MANAGE THEIR MONEY

Rutherford (2000) argues that the basic problem poor people face as money managers is to gather a 'usefully large' amount of money. Building a new home may involve saving and protecting diverse building materials for years until enough are available to proceed with construction. Children's schooling may be funded by buying chickens and raising them for sale as needed for expenses, uniforms, bribes, etc. Because all the value is accumulated before it is needed, this money management strategy is referred to as 'saving up'. Often people don't have enough money when they face a need, so they borrow: a poor family might borrow from relatives to buy land, from a moneylender to buy rice, or from a microfinance institution to buy a sewing machine. Since these loans must be repaid by saving after the cost is incurred, Rutherford calls this 'saving down'. Most needs, however, are met through a mix of both strategies. A benchmark impact assessment of Grameen Bank and two other large microfinance institutions in Bangladesh found that for every $1 they were lending to clients to finance rural non-farm micro-enterprise, about $2.50 came from other sources, mostly the clients' savings. Recent studies have also shown that informal methods of saving are very unsafe. For example, a study by Wright and Mutesasira in Uganda concluded that "those with no option but to save in the informal sector are almost bound to lose some money - probably around one quarter of what they save there."

The work of Rutherford, Wright and others has caused practitioners to reconsider a key aspect of the microcredit paradigm: that poor people get out of poverty by borrowing, building microenterprises and increasing their income. The new paradigm places more attention on the efforts of poor people to reduce their vulnerability by keeping more of what they earn and building up their assets. While they need loans, poor people may find it as useful to borrow for consumption as for microenterprise. A safe, flexible place to save money and withdraw it when needed is also essential for managing household and family risk.

CURRENT SCALE OF MICROFINANCE OPERATIONS

No systematic effort to map the distribution of microfinance has yet been undertaken. A useful recent benchmark was established by an analysis of 'alternative financial institutions' in the developing world in 2004. The authors counted approximately 665 million client accounts at over 3,000 institutions that serve people poorer than those served by the commercial banks. Of these accounts, 120 million were with institutions normally understood to practice microfinance. Reflecting the diverse historical roots of the movement, however, they also included postal savings banks (318 million accounts), state agricultural and development banks (172 million accounts), financial cooperatives and credit unions (35 million accounts) and specialized rural banks (19 million accounts). Regionally, the highest concentration of these accounts was in India (188 million accounts, representing 18% of the total national population). The lowest concentrations were in Latin America and the Caribbean (14 million accounts, representing 3% of the total population) and Africa (27 million accounts, representing 4% of the total population). Considering that most bank clients in the developed world need several active accounts to keep their affairs in order, these figures indicate that the task the microfinance movement has set for itself is still very far from finished. By type of service, "savings accounts in alternative finance institutions outnumber loans by about four to one. This is a worldwide pattern that does not vary much by region."

An important source of detailed data on selected microfinance institutions is the Microbanking Bulletin. At the end of 2006 it was tracking 704 MFIs serving 52 million borrowers ($23.3 billion in outstanding loans) and 56 million savers ($15.4 billion in deposits). Of these clients, 70% were in Asia, 20% in Latin America and the balance in the rest of the world. As yet there are no studies that indicate the scale or distribution of 'informal' microfinance organizations like ROSCAs and informal associations that help people manage costs like weddings, funerals and sickness. Numerous case studies have been published, however, indicating that these organizations, which are generally designed and managed by poor people themselves with little outside help, operate in most countries of the developing world.

INCLUSIVE FINANCIAL SYSTEMS

The microcredit era that began in the 1970s has lost its momentum, to be replaced by a 'financial systems' approach. While microcredit achieved a great deal, especially in urban and near-urban areas and with entrepreneurial families, its progress in delivering financial services in less densely populated rural areas has been slow. Another major goal of the microcredit movement was to put the traditional moneylender, who typically charges at least 10% a month and often much more, out of business; there is little evidence of progress towards this goal. The new financial systems approach pragmatically acknowledges the richness of centuries of microfinance history and the immense diversity of institutions serving poor people in the developing world today. It is also rooted in an increasing awareness of the diversity of the financial service needs of the world's poorest people, and of the diverse settings in which they live and work. In her book 'Access for All: Building Inclusive Financial Systems', Brigit Helms distinguishes between four general categories of microfinance providers, and argues for a pro-active strategy of engagement with all of them to help them achieve the goals of the microfinance movement.

Informal financial systems include moneylenders, pawnbrokers, savings collectors, moneyguards, ROSCAs, ASCAs and input supply shops. Because these providers and their clients know each other well and live in the same community, they understand each other's financial circumstances and can offer very flexible, convenient and fast services. These services can also be costly, however, and the choice of financial products limited and very short-term. Informal services that involve savings are also risky; many people lose their money.

Member-owned organizations include self-help groups, financial cooperatives, and a variety of hybrid organizations like 'financial service associations' and CVECAs. Like their informal cousins, they are generally small and local, which means they have access to good knowledge about each other's financial circumstances and can offer convenience and flexibility. Since they are managed by poor people, their costs of operation are low. However, these providers may have little financial skill and can run into trouble when the economy turns down or their operations become too complex. Unless they are effectively regulated and supervised, they can be 'captured' by one or two influential leaders, and the members can lose their money.

Among NGOs, the Microcredit Summit Campaign counted 3,133 microcredit NGOs lending to about 113 million clients by the end of 2005. Led by Grameen Bank and BRAC in Bangladesh, Prodem in Bolivia, and FINCA International, headquartered in Washington, DC, these NGOs have spread around the developing world in the past three decades. They have proven very innovative, pioneering banking techniques like solidarity lending and mobile banking that have overcome barriers to serving poor populations. However, with boards that don't necessarily represent either their capital or their customers, their governance structures can be fragile, and they can become overly dependent on external donors.

Formal financial institutions include, in addition to commercial banks, state banks, agricultural development banks, savings banks, rural banks and non-bank financial institutions. They are regulated and supervised, offer a wider range of financial services, and control a branch network that can extend across the country and internationally. However, they have proved reluctant to adopt social missions and, due to their high costs of operation, often cannot deliver services to poor or remote populations.

With appropriate regulation and supervision, each of these institutional types can bring leverage to solving the microfinance problem. For example, efforts are being made to link self-help groups to commercial banks, to network member-owned organizations together to achieve economies of scale and scope, and to support efforts by commercial banks to 'down-scale' by integrating mobile banking and e-payment technologies into their extensive branch networks.

KEY DEBATES

Some of microfinance's more ardent proponents have occasionally claimed that it has the power to single-handedly defeat poverty. This has naturally been the source of considerable criticism: poverty is a complex, multi-dimensional phenomenon that can only be addressed through a complex mix of multi-dimensional interventions. Research on the actual effectiveness of microfinance as a tool for economic development remains slim, in part owing to the difficulty of monitoring and measuring this impact.

The fact that interest rates charged to borrowers by formal microfinance institutions frequently range from 2.5% to 4% a month (about 31% to 50% a year) has also been a source of debate. Muhammad Yunus has recently made much of this point, and in his latest book argues that microfinance institutions that charge more than 15% above their long-term operating costs should face penalties. Nevertheless, much empirical evidence supports the claim that these interest rates are low compared with those charged by local moneylenders (often over 10% a month), and that without access to microfinance, borrowers would often have no access to credit at all. But microfinance institutions often depend on donors to provide them with much of their loan capital, especially in the early stages of their growth. This leaves them open to charges of practicing usury with charitable dollars.

Another key debate centers on the appropriate target group for microfinance services. One view is that the most important form of microfinance is credit targeted to poor people who are also talented entrepreneurs: if these people gain access to credit, they will expand their businesses, stimulate local economic growth and hire their less entrepreneurial neighbours, resulting in fast economic development. While this approach has had significant results in the cities of the developing world, it has failed to reach the majority of poor people, who are rural subsistence farmers with little, if any, non-farm income. As urban-rural income inequities continue to rise in the developing world, this result is increasingly viewed with dissatisfaction.

CONCLUSION

In developing countries, microfinance is expected to play a significant role in poverty alleviation and the upliftment of the economically backward. The imperative, therefore, is to share information and societal innovations, which will help not only in understanding the successes and failures of micro-credit programs but also in providing knowledge and guidelines to strengthen and expand them.

References

Gerschenkron, Alexander (1962), Economic Backwardness in Historical Perspective: A Book of Essays, Cambridge, Massachusetts: Belknap Press of Harvard University Press.

Krueger, Anne O. (1974), "Macroeconomic Effects of the Trade Regime", in Anne O. Krueger (Ed.), Foreign Trade Regimes and Economic Development: Turkey, pp. 245-265, UMI.

Rosenstein-Rodan, Paul (1969), "Criteria for Evaluation of National Development Effort", Journal of Development Planning, 1.

Rutherford, M. (2000), "Institutionalism between the Wars", Journal of Economic Issues, 34(2), 291-303.

Rutherford, Stuart (2002), The Poor and Their Money, Delhi.

Schumpeter, Joseph (1947), "The Creative Response in Economic History", Journal of Economic History.

Yunus, Muhammad and Alan Jolis, Banker to the Poor: Micro-Lending and the Battle against World Poverty.

11

Paradigms of Working Capital Management

Naila Iqbal

This paper examines the working capital management of an organization. It discusses current assets, current liabilities, the financing and investment of current assets, and the business activities of an organization, beginning with the management of current assets and current liabilities.

INTRODUCTION

To increase shareholders' wealth, a firm has to analyze the effect of both fixed and current assets on its returns and risk. Working capital management is concerned with the management of current assets, which differs from the management of fixed assets in several respects (Anand, 2001). Current assets are held for a short period, while fixed assets are held for more than one year. Large holdings of current assets, especially cash, strengthen the firm's liquidity position but reduce overall profitability; maintaining an optimum level of liquidity and profitability therefore involves a risk-return trade-off in holding current assets. Only current assets can be adjusted to sales fluctuations in the short run; thus, the firm has a greater degree of flexibility in managing current assets (Bhalla, 2005). Sound management of current assets also helps a firm build a good market reputation regarding its business and economic conditions. The concept of working capital includes both current assets and current liabilities, and there are two concepts of working capital: gross and net working capital.

Gross Working Capital

Gross working capital refers to the firm's investment in current assets. Current assets are assets which can be converted into cash within an accounting year or operating cycle; they include cash, short-term securities, debtors (accounts receivable or book debts), bills receivable and stock (inventory).

Net Working Capital

Net working capital is the difference between current assets and current liabilities. Current liabilities are those claims of outsiders which are expected to mature for payment within an accounting year (Burns, 1991); they include creditors (accounts payable), bills payable and outstanding expenses. Net working capital can be positive or negative: positive when current assets exceed current liabilities, and negative when they fall short.

CONCEPT OF GROSS WORKING CAPITAL

The concept of gross working capital focuses attention on two aspects of current assets management:

Optimizing investment in Current Assets

Investment in current assets should be just adequate, neither in excess nor in deficit: excess investment increases liquidity but reduces profitability, as idle investment earns nothing, while an inadequate amount of working capital can threaten the solvency of the firm through its inability to meet its obligations (Bhattacharya, 2004). The working capital needs of the firm fluctuate with changing business activity, which may frequently cause an excess or shortage of working capital; prompt management can control these imbalances.

Way of financing Current Assets

This aspect points to the need to arrange funds to finance current assets: whenever a need for working capital arises, financing arrangements should be made quickly. The financial manager should have knowledge of the sources of working capital funds as well as the investment avenues in which idle funds can be temporarily invested (Bhalla, 2005).

Concept of Net Working Capital

This is a qualitative concept. It indicates the liquidity position of the organization and suggests the extent to which working capital needs may be financed by permanent sources of funds. Current assets should be optimally more than current liabilities. The concept also covers the question of the right combination of long-term and short-term funds for financing current assets. For every firm, a particular amount of net working capital is permanent; it can therefore be financed with long-term funds (Anand, 2001).

Thus both concepts, gross and net working capital, are equally important for the efficient management of working capital. There are no specific rules to determine a firm's gross and net working capital; they depend on the business activity of the firm. Working capital management is concerned with the problems that arise in managing current assets, current liabilities and the interrelationship that exists between them (Burns, 1991). It thus refers to all aspects of the administration of both current assets and current liabilities. A business concern should neither have redundant or excess working capital nor be short of it (Burns, 1991); both conditions are harmful and unprofitable, but of the two, a shortage of working capital is more dangerous for the well-being of the firm.
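The two concepts can be reduced to a minimal numerical sketch. All balances below are hypothetical illustrative figures (in arbitrary currency units), chosen only to show the arithmetic: gross working capital sums the current assets, while net working capital subtracts current liabilities:

```python
def gross_working_capital(current_assets: dict) -> float:
    """Gross working capital: the firm's total investment in current assets."""
    return sum(current_assets.values())

def net_working_capital(current_assets: dict, current_liabilities: dict) -> float:
    """Net working capital: current assets minus current liabilities."""
    return gross_working_capital(current_assets) - sum(current_liabilities.values())

# Hypothetical balances (currency units):
assets = {"cash": 50, "short_term_securities": 30, "debtors": 120, "inventory": 180}
liabilities = {"creditors": 90, "bills_payable": 40, "outstanding_expenses": 20}

print(gross_working_capital(assets))             # 380
print(net_working_capital(assets, liabilities))  # 230 (positive: assets exceed liabilities)
```

A positive result, as here, indicates the liquidity cushion that can prudently be financed from long-term funds; a negative result signals that current liabilities exceed current assets.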

IMPACT/HARM OF REDUNDANT OR EXCESSIVE WORKING CAPITAL

Excessive working capital means idle funds which earn no profits for the business and cannot earn a proper rate of return on investment. Redundant working capital may lead to unnecessary purchasing and accumulation of inventories, increasing the chances of theft, waste and losses.


Excessive working capital implies excessive debtors and a defective credit policy, which may cause a higher incidence of bad debts (Burns, 1991), and it may result in overall inefficiency in the organization. With excessive working capital, good relations with banks and other financial institutions may not be maintained. Redundant working capital gives rise to speculative transactions, and due to the low rate of return on investments, the value of shares may also fall. With redundant working capital there is always a chance of financing long-term assets from short-term funds, which is very harmful in the long run for any organization.

DANGERS OF SHORT OR INADEQUATE WORKING CAPITAL

A concern that does not have adequate working capital (Anand, 2001) will not be able to pay its short-term liabilities in time; it will thus lose its reputation and will not be able to get good credit facilities. It cannot buy its requirements in bulk and cannot avail of discounts, resulting in stagnated growth. It becomes difficult for the firm to exploit favorable market conditions and undertake profitable projects due to the non-availability of working capital funds. The firm cannot pay the day-to-day expenses of its operations, which creates inefficiencies, increases costs and reduces the profits of the business (Burns, 1991). It becomes impossible to utilize the fixed assets efficiently due to the non-availability of liquid funds, so the firm's profitability deteriorates, and the rate of return on investments also falls with the shortage of working capital. Operating inefficiency creeps in, and it becomes difficult to implement operating plans and achieve the firm's profit targets (Bhattacharya, 2004).

NEED FOR WORKING CAPITAL

To earn profit and continue production activity, the firm has to invest enough funds in current assets to generate sales. Current assets are needed because sales do not convert into cash instantaneously; there is always an operating cycle involved.

Operating Cycle

The operating cycle is the time duration required to convert sales, after the conversion of resources into inventories, into cash. Investment in current assets such as inventories and debtors is realized during the firm's operating cycle, which is usually less than a year (Burns, 1991). The operating cycle of a manufacturing company involves three phases:

1. Acquisition of resources such as raw material, labor, power and fuel.

2. Manufacture of the product, which includes conversion of raw material into work-in-progress and then into finished goods.

3. Sale of the product, either for cash or on credit.

These phases affect cash flows because sales are sometimes made on credit and take time to realize. The length of the operating cycle of a manufacturing firm is the sum of the inventory conversion period and the debtors conversion period; their total is referred to as the Gross Operating Cycle.

1. Inventory Conversion Period: the total time needed for producing and selling the product. It includes (a) the raw material conversion period, (b) the work-in-progress conversion period, and (c) the finished goods conversion period.

2. Debtors Conversion Period: the time required to collect the outstanding amount from the customers.

Net Operating Cycle

Generally, a firm may acquire resources (raw materials) on credit and temporarily postpone payment of certain expenses. Payables which the firm can defer are spontaneous sources of capital to finance investment in current assets (Bhalla, 2005). The length of time for which the firm is able to defer payments on various resource purchases is the Payables Deferral Period. The difference between the Gross Operating Cycle and the Payables Deferral Period is called the Net Operating Cycle. If depreciation is excluded from the Net Operating Cycle, the resulting figure represents the Cash Conversion Cycle, the net time interval between cash outflows and cash inflows (Anand, 2001). The operating cycle also represents the time interval over which additional funds, called working capital, should be obtained in order to carry out the firm's operations. The firm has to negotiate working capital from sources such as banks; these negotiated sources of working capital financing are called non-spontaneous sources. If the Net Operating Cycle of a firm increases, its need for negotiated working capital increases as well.

Calculation of Operating Cycle

Calculating the operating cycle helps to establish the exact period of working capital turnover, i.e., how long it takes for cash to be converted back into cash. Through this calculation one can ascertain the working capital period (Bhattacharya, 2004).

FORMULA

Raw Material Holding Period = Avg. stock of raw material / Avg. cost of consumption per day

Work-in-progress Conversion Period = Avg. work-in-progress / Avg. cost of production per day

Finished Goods Holding Period = Avg. stock of finished goods / Avg. cost of goods sold per day

Receivables and Debtors Collection Period = Avg. book debts / Avg. credit sales per day

Credit Period Allowed by Creditors = Avg. creditors / Avg. credit purchases per day

Duration of Operating Cycle

GOC = RM + WIP + FG + D&R

NOC = GOC - C

Where:

GOC = Gross operating cycle
NOC = Net operating cycle
RM = Raw material conversion period
WIP = Work-in-progress conversion period
FG = Finished goods holding period
D&R = Debtors and receivables collection period
C = Credit period allowed by creditors

Notes:

- 360 working days in a year are taken to calculate per-day averages.
- Avg. = (opening + closing) / 2.
- Depreciation is excluded while calculating cost of production and sales, as it is a non-fund expense and does not require working capital.
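The operating-cycle arithmetic can be sketched in a few lines of Python. All balances and annual flows below are hypothetical figures chosen only to illustrate the calculation, using a 360-day year and Avg. = (opening + closing) / 2 as in the notes above:

```python
DAYS = 360  # working days per year

def avg(opening: float, closing: float) -> float:
    """Average balance = (opening + closing) / 2."""
    return (opening + closing) / 2

def holding_days(avg_balance: float, annual_flow: float) -> float:
    """Conversion/holding period in days: average balance / per-day flow."""
    return avg_balance / (annual_flow / DAYS)

# Hypothetical balances and annual flows (currency units):
rm  = holding_days(avg(100, 140), annual_flow=1440)  # raw material vs. consumption
wip = holding_days(avg(60, 80),   annual_flow=1680)  # WIP vs. cost of production
fg  = holding_days(avg(90, 110),  annual_flow=1800)  # finished goods vs. cost of goods sold
dr  = holding_days(avg(150, 170), annual_flow=1920)  # book debts vs. credit sales
c   = holding_days(avg(70, 90),   annual_flow=960)   # creditors vs. credit purchases

goc = rm + wip + fg + dr  # gross operating cycle: 30 + 15 + 20 + 30 = 95 days
noc = goc - c             # net operating cycle: 95 - 30 = 65 days
```

A lengthening net operating cycle, computed this way period after period, signals a growing need for negotiated (non-spontaneous) working capital finance.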

PERMANENT AND VARIABLE WORKING CAPITAL

There is always a minimum level of current assets continuously required by the firm to carry on its business operations; this minimum level is referred to as permanent or fixed working capital (Burns, 1991). It is permanent in the same way as the firm's fixed assets are. The extra working capital needed to support changing production and sales activities is called fluctuating, variable or temporary working capital. Both kinds of working capital, permanent and temporary, are necessary to facilitate production and sales through the operating cycle.


ESTIMATING WORKING CAPITAL NEEDS

Working capital needs can be estimated by three different methods, which have been successfully applied in practice. They are as follows:

Current Assets Holding Period

Working capital requirements are estimated on the basis of the average holding period of current assets, relating them to costs based on the company's experience in previous years (Anand, 2001). This method is based on the operating cycle concept.

Ratio of Sales

Working capital requirements are estimated as a ratio of sales, on the assumption that current assets change with sales.

Ratio of Fixed Investment

Working capital requirements are estimated as a percentage of fixed investment. The most appropriate method of calculating the working capital needs of a firm is the operating cycle concept. Since all three approaches have limitations, several factors govern the choice of method: seasonal variations in operations, the accuracy of sales forecasts, investment cost and variability in sales price would generally be considered. The production cycle and the credit and collection policy of the firm would also have an impact on working capital requirements (Bhalla, 2005).
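Of the three methods, the ratio-of-sales approach is the simplest to sketch. The calibration below is a hypothetical illustration (the historical figures and the resulting ratio of roughly 25% are invented for the example), not a prescribed procedure:

```python
# Historical annual sales and working capital (currency units), hypothetical:
past_sales = [800, 900, 1000]
past_wc    = [196, 228, 248]

# Calibrate the average working-capital-to-sales ratio from past years
ratio = sum(wc / s for wc, s in zip(past_wc, past_sales)) / len(past_sales)

# Apply it to projected sales, assuming current assets move with sales
projected_sales = 1200
estimated_wc = ratio * projected_sales  # roughly 298.5
```

The same one-line structure applies to the ratio-of-fixed-investment method, with fixed investment replacing sales as the base; the holding-period method instead works through the operating cycle.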

CURRENT ASSETS FINANCING
A firm can adopt different financing policies for Current Assets. Three types of financing can be used: long-term financing such as shares, debentures, etc.; short-term financing such as public deposits, commercial papers, etc.; and spontaneous financing, which refers to the automatic sources of short-term funds arising in the normal course of business, such as trade credit (suppliers) and outstanding expenses (Burns, 1991). The real choice in financing Current Assets is between long-term and short-term sources of finance. The three approaches based on the mix of long-term and short-term financing are:

Matching Approach
When the firm follows the matching approach (also known as the hedging approach), long-term financing is used to finance Fixed Assets and permanent Current Assets, and short-term financing is used to finance temporary or variable Current Assets (Anand, 2001). The justification for exact matching is that, since the purpose of financing is to pay for assets, the source of financing and the asset should be relinquished simultaneously; otherwise financing becomes expensive and inconvenient (Bhalla, 2005). However, exact matching is not possible because of uncertainty about the expected lives of assets.

Paradigms of Working Capital Management


Conservative Approach
The financing policy of the firm is said to be conservative when it depends more on long-term funds for its financing needs. Under a conservative plan, the firm finances its permanent assets, and also a part of its temporary Current Assets, with long-term financing. In periods when the firm has no need for temporary Current Assets, the idle long-term funds can be invested in tradable securities to conserve liquidity (Tara, 2006). Thus, the firm has less risk of a shortage of funds.

Aggressive Approach
An aggressive approach is said to be followed by the firm when it uses more short-term financing than warranted by the matching approach. Under an aggressive approach, the firm finances a part of its permanent Current Assets with short-term financing. Some firms even finance a part of their fixed assets with short-term financing, which makes the firm more risky (Anand, 2001).


Managing Current Assets
Management of Current Assets is done in three parts:

1) Management of cash and cash equivalents.
2) Management of inventory.
3) Management of accounts receivable and factoring.

Thus, the basic goal of WC management is to manage the current assets and current liabilities of the firm in such a way that a satisfactory level of WC is maintained, i.e. it is neither inadequate nor excessive (Bhattacharya, 2004). The WC management policies of a firm have a great effect on its profitability, liquidity and the structural health of the organization. WC management is an integral part of overall corporate management. For proper WC management the financial manager has to perform the following basic functions (Burns, 1991):

1. Estimating the WC requirement.
2. Determining the optimum level of current assets.
3. Financing of WC needs.
4. Analysis and control of WC.

WC management decisions are three-dimensional in nature, i.e. they usually relate to three spheres: profitability, risk and liquidity; composition and level of current assets; and composition and level of current liabilities.

PRINCIPLES OF WORKING CAPITAL
There are four principles of working capital management:

1. Principle of Risk Variation
2. Principle of Cost of Capital
3. Principle of Equity Position
4. Principle of Maturity Payment

Principle of Risk Variation
The goal of WC management is to establish a suitable trade-off between profitability and risk. Risk here refers to a firm's inability to honour its obligations as and when they become due for payment. Larger investment in current assets with less dependence on short-term borrowings increases liquidity, reduces risk, and thereby decreases the opportunity for gain or loss; the reverse situation will increase risk and profitability and reduce liquidity (Tara, 2006). Thus there is a direct relationship between risk and profitability and an inverse relationship between liquidity and risk.

Principle of Cost of Capital
The various sources of raising WC finance have different costs of capital and involve different degrees of risk. Generally, the higher the risk, the lower the cost, and the lower the risk, the higher the cost (Anand, 2001). A sound WC management should always try to achieve a proper balance between the two.

Principle of Equity Position
This principle is concerned with planning the total investment in current assets. As per this principle, the amount of WC invested in each component should be adequately justified by a firm's equity position (Padachi, 2006). Every rupee invested in current assets should contribute to the net worth of the firm. The level of current assets may be measured with the help of two ratios:

1. Current assets as a percentage of total assets.
2. Current assets as a percentage of total sales.

Principle of Maturity Payment
This principle is concerned with planning the sources of finance for WC. As per this principle, a firm should make every effort to relate the maturities of its payments to its flow of internally generated funds; in other words, it should plan its cash inflows in such a way that they can easily cover its cash outflows, or else it will fail to meet its obligations in time (Tara, 2006).

CONCLUSION
The paper highlights the crucial nature of Working Capital and the need for its management. It has also pointed out the ramifications of having excess or deficient working capital in a business, and discussed the various issues related to working capital management in organizations.


References
Anand, M. (2001), Working Capital Performance of Corporate India: An Empirical Survey, Management & Accounting Research, 4(4), 35-65.
Bhalla, V. K. (2005), Working Capital Management, Anmol, New Delhi.
Bhattacharya, Hrishikes (2004), Working Capital Management: Strategies and Techniques, Prentice-Hall of India.
Burns, R. & Walker, J. (1991), A Survey of Working Capital Policy Among Small Manufacturing Firms, The Journal of Small Business Finance, 11(1), 61-74.
Padachi, Kesseven (2006), Trends in Working Capital Management and its Impact on Firms' Performance: An Analysis of Mauritian Small Manufacturing Firms, International Review of Business Research Papers, Vol. 2, October, 45-58.
Sadri, Sorab & Tara, Sharukh N. (2006), Understanding Working Capital Management, Rai Business School, Mumbai, March 25, 2006.

II MARKETING

12

Selection of Advertising Appeals in Print Media: A Comparative Study of Products & Services
S. S. Bhakar, Shailja Bhakar, Amrish Dixit

Head or heart: will reason do the trick, or emotion? In the present scenario, market conditions are changing rapidly; one appeal may be more effective in some market conditions and the other in others, so which appeal will work better in which conditions is not easy to pinpoint. The current research was therefore undertaken to understand the effectiveness of emotional and rational appeals in different circumstances. The two appeals are also likely to have different levels of effectiveness for products and services, so the research also evaluates the effectiveness of both appeals for products and for services.

CONCEPTUAL FRAMEWORK
In today's perspective, advertisement is the most visible marketing tool, transmitting a message from the marketer to a group of individuals. If advertising had to be described in one word, that word would be "communication". Advertising is an institution which interprets the want-satisfying qualities of products, services and ideas in terms of the wants and needs of consumers (Sandage, 1960). "Advertise" is now spoken with the accent on the last syllable, but appears to have been anciently accented on the second: to inform another, to give intelligence, and to give notice of anything by means of an advertisement in the public prints (Samuel Johnson, A Dictionary of the English Language, 1755). Advertising is the non-personal communication of information, usually paid for and usually persuasive in nature, about products, services or ideas by identified sponsors through the various media (Bovee, 1992). The American Marketing Association, Chicago, defines advertising as "any paid form of non-personal presentation of ideas, goods & services by an identified sponsor." Advertising is the combination of persuasive or promotional elements, initially controlled by the advertiser, through which she/he communicates about a product, service or idea with a defined set of consumers or prospects via a clear, concise and easily understandable message.


ADVERTISING APPEALS
At a fundamental level there are primarily only two types of advertising appeals: (a) rational and (b) emotional. Rational appeals are those that rely primarily on appealing to reason; they are directed at the thinking process of the audience. Emotional appeals are those that rely primarily on evoking emotions, those mental agitations or excited states of feeling which prompt us to make a purchase. Emotional appeals are designed to stir up negative or positive emotions that will motivate product interest or purchase. An appeal to emotion is a type of argument which attempts to arouse the emotions of its audience in order to gain acceptance of its conclusion (Damer, 1995). All advertising copy consists of two elements: what is said and how it is said. What is said is the rational part of the message, the claims and benefits that result from careful positioning and strategy. How it is said is the emotional element: the look of the advertising, and the charm, humor, nostalgia, empathy, sense of security, beauty, or sense of style and quality that is conveyed (Walter Burek). Rational advertising appeals typically refer to the quality, value or performance of the product and seek to elicit cognitive responses from consumers. Rational advertising appeals originate from the traditional information-processing model of persuasion (Miller and Stafford, 1999). They are directed at the thinking process of the audience, and the functional benefit of the product is highlighted.

High quality: Consumer durables are expected to have high quality; many other consumer goods are also expected to be of high quality.

Low price: Purchasing a product at a lower price which functions as well as, or only slightly worse than, the higher-priced product is considered a rational decision.

Long life: Inverters and batteries are promoted through a rational appeal based on their longevity.

Performance: Hero Honda bikes are also promoted through a rational appeal, based on mileage per litre of fuel.

The state of arousal which stirs us or awakens our interest to buy a product is known as emotional appeal. There is no deliberate reasoning process involved; it may be subconscious too. Emotional (transformational) advertising appeals attempt to elicit negative or positive emotions from consumers (Huang, 1997; Rossiter and Percy, 1987).

REVIEW OF LITERATURE In our everyday experiences, we are exposed to a variety of advertising appeals. These appeals are aimed at influencing our attitudes toward a wide range of consumer products and behaviors. Through television, radio, and the Internet, they reach large numbers of individuals who represent a wide range of cultural and ethnic backgrounds. Is there any


relationship between an individual's cultural values and beliefs and how she might respond to different types of advertising appeals? A growing body of research on cultural definitions of the self and advertising appeals is providing some interesting and insightful answers to this question. According to Markus and Kitayama (1991), people in different cultures have strikingly different definitions of the self, of others, and of the interdependence of the two. For example, many Asian, African, and Latin American cultures promote a definition of the self that emphasizes the relatedness of individuals to each other as well as the need to attend to, to fit in with, and to maintain harmony with others. These cultures have been characterized as collectivistic. In contrast, American culture has been characterized as individualistic because it promotes a definition of the self that emphasizes maintaining independence from others and encourages individuals to express their unique inner attributes. These different cultural definitions of the self can have a significant impact on the experience of cognitive, motivational, and emotional processes (Fiske et al., 1997; Miller, 1984; Triandis, 1989; Triandis et al., 1984). They can also influence the manner in which individuals respond to different types of advertising appeals. Kentucky Fried Chicken has been one of the household international brands in urban China since it opened its first Western-style quick service restaurant in Beijing in 1987. As the largest fried chicken restaurant company in the world, KFC views China as its most promising market and has succeeded in its localization strategies there. The most prominent success of KFC in China is not only the outcome of KFC's persistent tenets of "quality, service and cleanliness" but also the achievement of its keen perception of cross-cultural marketing and its understanding of Chinese culture.
This essay aims to investigate the process of KFC's entry into China's market and analyze its particular localization strategies towards China. The emotional/rational framework has been studied extensively in the marketing and advertising literature. Rational advertising stems from the traditional information-processing models of decision-making, in which the consumer is believed to make logical and rational decisions. Such approaches are designed to change the message receiver's beliefs about the advertised brand and rely on the persuasive power of arguments or reasons about brand attributes. Such appeals relate to the audience's self-interest by showing product benefits; "examples are messages showing a product's quality, economy, value or performance" (Kotler and Armstrong, 1999). In contrast, emotional appeals are grounded in the emotional, experiential side of consumption. They seek to make the consumer feel good about the product by creating a likeable or friendly brand; they rely on feelings for effectiveness. According to Kotler and Armstrong (1999), "Emotional appeals attempt to stir up either negative or positive emotions that can motivate purchase... communicators also use positive emotional appeals such as love, humor, pride and joy". Albers-Miller and Stafford's study (1999) examines advertising appeals for services and goods across four different cultures: Brazil, Taiwan, Mexico and the USA. The results of a content analysis indicate that the use of rational and emotional appeals differs across both product type and country. It is suggested that culture plays a significant role in the use of emotional and rational advertisements for services, and anthropological measures of culture provide some insight into the differences in emotional appeals. The commercials of KFC have also resorted to emotional appeals. Chinese virtues and emotions such as patriotism, respect for the elderly, cherishing the young, sincere friendship and romantic love are the major subjects of KFC's campaigns. One remarkable example is KFC's emotional appeals strategy of supporting the China men's soccer team in the World Cup of 2002. During the promotion for the World Cup combo, customers were free to select a miniature of a soccer star with a purchase of the combo. Along with world stars such as David Beckham and Rivaldo, two Chinese soccer players' miniatures were consumers' favourites. In this case, the patriotism aroused by the China men's soccer team's debut in the World Cup was fully employed as an emotional appeal. Another special example is KFC's commercial for its commonweal fund aiming to help students in poverty finish their education. In the 90-second-long commercial, which seems more like a social-responsibility advertisement, a girl's voiceover tells her own story: "It's at the age of 10 that I had my first KFC meal. At that time I didn't expect that KFC would change my life later. Having won the Dawn Scholarship from KFC, I made my dream of going to college come true." Then a series of flashbacks reflecting the girl's experience as an employee at KFC is presented. The whole commercial doesn't portray any images of KFC products but "a spirit encouraging all the people." In this case, the love between people works as an emotional appeal to arouse consumers' resonance.

RESEARCH METHODOLOGY
The study was exploratory in nature and the survey method was used to carry out the research.

Research design
A 2 x 2 factorial design was used, with gender on the x-axis and product/service on the y-axis:

           Male   Female
Service    25     25
Product    25     25

Sample design
All respondents in the age group of 15-50 at Gwalior formed the population. As the data was collected through personal interaction (face-to-face interviews), the total population served as the sampling frame. A purposive sampling technique was used to select sample elements; the sample size was 100 respondents, with the individual respondent treated as the sample element.

Tools for Data Analysis
Data was collected through two separate self-designed measures evaluating the emotional and rational appeals used in print media to promote products/services. In all, nine emotional appeals and three rational appeals were included in the measures. Responses were recorded on a scale of 1 to 5, where 1 indicated low availability of the appeal and 5 indicated high availability. Data was collected after establishing rapport with the respondents and showing them the advertisements: the researcher carried ten ads for products and another ten for services, and respondents rated the degree of each appeal on the five-point scale. The minimum score was given to an appeal absent from the ad and the maximum score to an appeal predominantly visible in the advertisement. Cronbach's alpha, Guttman, split-half and parallel methods were used to compute the reliability of the two measures separately.

Data Analysis
All items in the emotional appeals measure had item-to-total correlations above the cutoff value (0.1942), except the emotion fear. A second iteration was carried out after deleting this item, and all remaining items then exceeded the cutoff. The measure, after dropping fear, showed high consistency and was therefore finalized; no further items were dropped, and the remaining items were used for further evaluation. The rational appeal measure was likewise found consistent, as every variable's correlation with the total exceeded the cutoff, so all its items were retained. A Z-test was applied to test the significance of differences between the variables of the study, and ANOVA was applied to test differences in the overall measures.
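The item-to-total purification described above can be sketched as follows. The data here is synthetic (generated with a fixed random seed); only the cutoff of 0.1942 is taken from the text, and the item names are illustrative:

```python
# Sketch of iterative item-to-total purification on synthetic data:
# drop items whose correlation with the total score falls below the
# cutoff, then recompute on the surviving items.
import random

random.seed(1)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def purify(items, cutoff=0.1942):
    """items: {name: list of respondent scores}. Iteratively drop items
    whose item-to-total correlation falls below the cutoff."""
    kept = dict(items)
    while True:
        totals = [sum(scores) for scores in zip(*kept.values())]
        r = {name: pearson(scores, totals) for name, scores in kept.items()}
        low = [name for name, v in r.items() if v < cutoff]
        if not low:
            return kept, r
        for name in low:
            del kept[name]

base = [random.randint(1, 5) for _ in range(50)]
items = {
    "love": [min(5, b + random.randint(0, 1)) for b in base],
    "care": [max(1, b - random.randint(0, 1)) for b in base],
    "fear": [6 - b for b in base],  # deliberately runs against the rest
}
kept, r = purify(items)
print(sorted(kept))  # "fear" is dropped; "love" and "care" survive
```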

RESULTS AND DISCUSSION
Reliability
Reliability of the two measures was computed using SPSS; the values are given below:

Reliability Method    Emotional Measure   Rational Measure
Alpha                 0.8622              0.784
Parallel (Scale)      0.9970              0.772
Parallel (Unbiased)   0.9970              0.778
Split Half            0.8002              0.747

Measures are considered reliable if the reliability value is above 0.7. The reliability values of both the emotional and rational appeal measures are above 0.7, indicating that the measures are reliable.
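As a cross-check on what the alpha figures mean, Cronbach's alpha can be computed directly from an item-score matrix. A minimal sketch with invented scores (not the study's data):

```python
# Cronbach's alpha = (k/(k-1)) * (1 - sum(item variances) / variance(totals)).
# The score matrix below (rows = respondents, columns = items) is invented.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(matrix):
    k = len(matrix[0])  # number of items (columns)
    item_vars = [variance([row[j] for row in matrix]) for j in range(k)]
    total_var = variance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1], [3, 3, 3]]
print(round(cronbach_alpha(scores), 3))  # high: the three items move together
```

Values above 0.7, as in the table, are conventionally taken as acceptable reliability.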

FACTOR ANALYSIS - EMOTIONAL APPEALS
The KMO and Bartlett's test of sphericity indicates that the data is suitable for factor analysis. The KMO statistic measures sampling adequacy and should be greater than 0.5 for a satisfactory factor analysis to proceed; here it is 0.764. From the same table, the Bartlett's test of sphericity is significant: its associated probability, 0.000, is less than 0.05, which means that the correlation matrix is not an identity matrix. These facts indicate that the data collected on emotional advertising appeals is suitable for factor analysis.

Table 1: KMO and Bartlett's Test Report for Conducting Factor Analysis

Kaiser-Meyer-Olkin Measure of Sampling Adequacy         .764
Bartlett's Test of Sphericity    Approx. Chi-Square     440.395
                                 df                     36
                                 Sig.                   .000

Principal Component Analysis with Varimax rotation and Kaiser normalization was applied to the emotional advertising appeals to identify the common underlying factors. The analysis converged on two factors after just three iterations; the factors were named according to the common nature of their statements. The factors, the variables that converged on them and the factor loadings are given in the table below.

Table 2: Factors and Factor Loadings for Emotional Advertising Appeals (Rotated Component Matrix)

Item                    Factor 1   Factor 2
VAR00001 Love           0.894
VAR00008 Affection      0.868
VAR00004 Motherhood     0.861
VAR00009 Care           0.756
VAR00006 Belonging      0.676
VAR00002 Empathy        0.601
VAR00003 Romance        0.522
VAR00007 Prestige                  0.849
VAR00005 Pride                     0.821

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.

Positive Appeals
The appeals that converged on this factor were Love, Affection, Motherhood, Care, Belonging, Empathy and Romance.

Mixed Appeals
Only two emotional appeals, Prestige and Pride, converged on this factor. The appeal could evoke a positive response if related to self-esteem, or negative emotions if the respondent emphasizes pride. The grouping of emotional appeals on two factors having positive and negative connotations is supported by a large number of studies on this topic. Emotional (transformational) advertising appeals attempt to elicit negative or positive emotions from consumers (Huang, 1997; Rossiter and Percy, 1987). They are based on the emotional and experiential side of consumption. As Holbrook and Westwood (1989) suggest, during the consumption of a product consumers may experience any combination of love, hate, fear, anger, joy, sadness, pleasure, disgust, interest and surprise. These feelings represent the "experience values" (Franzen, 1999).

FACTOR ANALYSIS APPLIED TO RATIONAL APPEALS
The KMO and Bartlett's test of sphericity indicates that the data is suitable for factor analysis. The KMO measure here is 0.702, above the 0.5 threshold required for a satisfactory factor analysis, and the Bartlett's test of sphericity is significant: its associated probability, 0.000, is less than 0.05, which means that the correlation matrix is not an identity matrix. These facts indicate that the data collected on rational advertising appeals is suitable for factor analysis.

Table 3: KMO and Bartlett's Test Applied to Rational Appeals

Kaiser-Meyer-Olkin Measure of Sampling Adequacy         .702
Bartlett's Test of Sphericity    Approx. Chi-Square     83.194
                                 df                     3
                                 Sig.                   .000

Principal Component Analysis with Varimax rotation and Kaiser normalization was applied to the rational advertising appeals to identify the common underlying factors. The analysis converged on a single factor after just three iterations.

Table 4: PCA Component Matrix for the Rational Appeals Measure

Variables              Factor 1
VAR00001 Features      .847
VAR00002 Benefits      .843
VAR00003 Attributes    .818

Extraction Method: Principal Component Analysis.

To find out the difference in the overall measures, ANOVA was applied.

Table 5: Total Values for Rows and Columns for ANOVA

           Emotional     Rational
Service    T = 109.44    T = 148.67
Product    T = 171.44    T = 183.33

Table 6: ANOVA Summary Table

Sum of Squares   Value      DF   Mean Sum of Squares   F-Ratio
SSC              2335.789   1    2335.789              MSC/MSE = 12.499
SSR              653.3136   1    653.3136              MSR/MSE = 3.496
SSE              186.87     1    186.87
SST              3175.971   3    1058.657
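The F-ratios in the ANOVA table can be reproduced from the reported sums of squares and degrees of freedom: each mean square is SS/df, and each F divides the effect's mean square by the error mean square.

```python
# Recompute Table 6's F-ratios from its sums of squares and df.
ss = {"SSC": (2335.789, 1),   # columns: emotional vs rational appeals
      "SSR": (653.3136, 1),   # rows: product vs service
      "SSE": (186.87, 1)}     # error

ms = {name: total / df for name, (total, df) in ss.items()}
f_columns = ms["SSC"] / ms["SSE"]
f_rows = ms["SSR"] / ms["SSE"]

print(round(f_columns, 2), round(f_rows, 2))  # ~12.5 and ~3.5, as reported
```

The same arithmetic extends to designs with more factor levels, where the degrees of freedom and critical values change accordingly.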

The above table indicates clearly that there is a significant difference between the use of emotional and rational appeals (indicated by the F-ratio for the columns), whereas the difference between products and services is insignificant (indicated by the F-ratio for the rows). The results indicate that the use of the two appeals, rational and emotional, differs significantly, and the higher total for rational appeals signifies that rational appeals are used more frequently in Indian advertisements than emotional appeals. To find out the significance of differences between product/service and emotional/rational combinations, the Z-test was applied.

Table 7: Mean and Standard Deviation of Responses

             Product                           Service
Emotional    Mean = 3.428889, SD = 0.390108    Mean = 3.666667, SD = 0.635317
Rational     Mean = 2.188889, SD = 0.633431    Mean = 2.973333, SD = 1.018736

The z-values computed between the different combinations of appeals and products/services are all significant at the 5% level of significance, indicating that the use of advertising appeals differs significantly between products and services. The results show that the ads used to promote both products and services have higher content of rational appeals than emotional appeals. Also, the ads promoting services have lower appeals (both emotional and rational) than those promoting products. Services use more rational appeals than emotional ones, as indicated by the z-value computed between SE (Service Emotional) and SR (Service Rational).

Table 8: Z-Values Computed Between Different Appeal and Product/Service Combinations
(PE = Product Emotional, PR = Product Rational, SE = Service Emotional, SR = Service Rational)

       PE     PR         SE          SR
PE     Nil    2.25524    11.78636    2.952919
PR     Nil    Nil        11.64751    4.083449
SE     Nil    Nil        Nil         4.6239

The results find mixed support in the literature. Research by Pickett, Grove and Laband (2001) indicated that service ads contained more specific information cues than ads for physical goods; Pickett et al. argue that the intangibility of services prompts service advertisers to use more factual information in their ads to make their services appear more tangible. On the contrary, Cutler and Javalgi (1993) contend that services advertising uses emotional appeals more often than goods advertising; theoretically speaking, service advertisers tend to use more emotional advertising to overcome the intangible aspect of their services (Ha, 1998).


Table 9: Z-Values Computed Between Emotional and Rational Appeals

Emotional Cues           Rational Cues           Z-Test
Mean 2.809, SD 0.813     Mean 3.32, SD 0.914     4.13374

The z-value computed between the use of emotional and rational appeals in promoting products or services is 4.134, which is significant (p < 0.05), indicating that the effect of emotional and rational appeals on the acceptance of products/services differs.
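The two-sample z-test behind Table 9 can be sketched as follows. The per-group sample size is not stated alongside the table, so the n = 100 below is an assumption, which is why the result is close to, but not exactly, the reported 4.134:

```python
import math

def z_two_sample(m1, s1, n1, m2, s2, n2):
    """z = (m1 - m2) / sqrt(s1^2/n1 + s2^2/n2)."""
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Table 9's summary statistics; n per group is assumed, not reported there.
z = z_two_sample(3.32, 0.914, 100, 2.809, 0.813, 100)
print(round(z, 2))  # |z| > 1.96, so significant at the 5% level
```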

IMPLICATIONS AND SUGGESTIONS OF THE STUDY
In India today, the use of advertising appeals has become a trend and a perceived winning formula for corporate image building and product/service marketing. The implications can be divided into three parts:

Implications for advertisers
The research shows that customers become more emotionally attached to products/services, so in the current environment advertisers who want to make their ads effective should use more emotional appeals.

Implications for brand owners
If brand owners want to make customers loyal to their brand, they should associate emotional icons with their brand image.

Implications for customers
As this research shows that advertisers and brand owners play on customers' emotions, customers should also think about and evaluate advertisements rationally.

CONCLUSION
The study has resulted in a standardized measure to analyze the appeals used by advertisers to promote their products and services; the internal consistency, reliability and validity of the measure are high. The study has been able to conclusively show that the emotional content in the ads used by advertisers is significantly higher than the rational content, for both products and services. There is no significant difference in emotional content between product and service ads; however, the rational content in product advertisements is higher than in service advertisements.

References
Aaker, D., Stayman, D. & Vezina, R. (1988), Identifying Feelings Elicited by Advertising, Psychology & Marketing, 5(1), 1-16.
Albers-Miller, N. & Stafford, M. (1999), An International Analysis of Emotional and Rational Appeals in Services vs Goods Advertising, Journal of Consumer Marketing, 16(1), 42-57.
Alesandrini, K. (1982), Strategies That Influence Memory for Advertising Communications, Information Processing Research in Advertising, 65-81.
Beltramini, R. & Blasko, V. (1986), An Analysis of Award-Winning Advertising Headlines, Journal of Advertising Research, (April/May), 48-51.
Chen, H. & Schweitzer, J. (1996), Cultural Values Reflected in Chinese and U.S. Television Commercials, Journal of Advertising Research, 13, 234-249.
Cutler, B. & Javalgi, R. (1993), Analysis of Print Ad Features: Services versus Products, Journal of Advertising Research, (March/April), 62-68.
Franzen, G. (1999), Brand & Advertising, Henley-on-Thames: Admap Publications.
Ha, L. (1998), Advertising Appeals Used by Services Marketers: A Comparison Between Hong Kong and the United States, Journal of Services Marketing, 12(2), 98-112.
Hitchon, J. (1991), Headlines Make Ads Work, Advances in Consumer Research, 18, 752-754.
Holbrook, M. & Westwood, R. (1989), The Role of Emotion in Advertising Revisited: Testing a Typology of Emotional Responses, in Cafferata, P. and Tybout, A. (eds.), Cognitive and Affective Response to Advertising, New York: Lexington Books, 353-370.
Holbrook, M. (1978), Beyond Attitude Structure: Toward the Informational Determinants of an Attitude, Journal of Marketing Research, 15 (November), 545-56.
Horst, S. (1992), Crisis in Advertising, Marketing Research, 4(1), 39-46.
Huang, M. H. (1997), Exploring a New Typology of Emotional Appeals: Basic, versus Social, Emotional Advertising, Journal of Current Issues and Research in Advertising, 19(2), 23-37.
Johar, J. & Sirgy, M. (1991), Value-Expressive versus Utilitarian Advertising Appeals: When and Why to Use Which Appeal, Journal of Advertising, 20(3), 23-44.
Kotler, P. (1987), Marketing Management (9th ed.), New Jersey: Prentice Hall.
Moriarty, S. (1987), A Content Analysis of Visuals Used in Print Media Advertising, Journalism Quarterly, 64(1/2), 550-554.
Pickett, G., Grove, S. & Laband, D. (2001), The Impact of Product Type and Parity on the Informational Content of Advertising, Journal of Marketing Theory and Practice, 9(3), 32-43.
Pollay, R. & Gallagher, K. (1990), Advertising and Cultural Values: Reflections in the Distorted Mirror, International Journal of Advertising, 9(4).
Rossiter, J. & Percy, L. (1987), Advertising, Promotion and Communication Management, New York: McGraw Hill.
Stafford, M. & Day, E. (1995), Retail Services Advertising: The Effects of Appeal, Medium, and Service, Journal of Advertising, 24(1), 57-71.
Turley, L. & Kelley, S. (1997), A Comparison of Advertising Content: Business to Business versus Consumer Services, Journal of Advertising, 26(4), 39-48.

Selection of Advertising Appeals in Print Media


Annexure 1: Item to Total Correlations for Advertising Appeals

Item             Item-to-total correlation    Consistency     Accepted/Dropped
Love                  0.81529                 Consistent      Accepted
Fear                 -0.13392                 Inconsistent    Rejected
Empathy               0.497631                Consistent      Accepted
Romance               0.405157                Consistent      Accepted
Motherhood            0.786514                Consistent      Accepted
Pride                 0.395489                Consistent      Accepted
Belongingness         0.681565                Consistent      Accepted
Prestige              0.330658                Consistent      Accepted
Affection             0.835241                Consistent      Accepted
Care                  0.717428                Consistent      Accepted
Features              0.607333                Consistent      Accepted
Benefits              0.650461                Consistent      Accepted
Attributes            0.714361                Consistent      Accepted
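The item-to-total screening above can be reproduced in a few lines. The sketch below uses synthetic Likert responses (the study's raw data are not reproduced in the chapter), so the numbers are purely illustrative; only the procedure mirrors Annexure 1: correlate each item with the summed scale score and drop items with low or negative correlations (as "Fear" was dropped).

```python
import numpy as np

def item_total_correlations(scores):
    """Pearson correlation of each item with the total score across all items.

    scores: (n_respondents, n_items) array of Likert responses.
    Items with low or negative correlations are candidates for deletion.
    """
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total)[0, 1]
                     for j in range(scores.shape[1])])

# Illustrative data only: three items driven by a common construct,
# plus one unrelated "noise" item.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
consistent = np.clip(base + rng.integers(-1, 2, size=(100, 3)), 1, 5)
noise = rng.integers(1, 6, size=(100, 1))
data = np.hstack([consistent, noise]).astype(float)

r = item_total_correlations(data)
# The three related items correlate strongly with the total;
# the unrelated item's correlation is much weaker.
```

In the study, the same logic led to retaining 13 of the 14 appeal items and rejecting "Fear" for its negative item-to-total correlation.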


Key Drivers of Organizational Excellence

Annexure 2: Total Variance Explained

                Initial Eigenvalues            Extraction Sums of          Rotation Sums of
                                               Squared Loadings            Squared Loadings
Component   Total  % of Var.  Cum. %       Total  % of Var.  Cum. %    Total  % of Var.  Cum. %
1           4.395   39.958    39.958       4.395   39.958    39.958    3.607   32.788    32.788
2           1.912   17.386    57.343       1.912   17.386    57.343    2.701   24.555    57.343
3            .924    8.405    65.748
4            .919    8.359    74.107
5            .649    5.901    80.008
6            .571    5.188    85.196
7            .490    4.453    89.650
8            .433    3.935    93.584
9            .322    2.931    96.515
10           .213    1.933    98.447
11           .171    1.553   100.000
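The percentages in Annexure 2 follow directly from the eigenvalues: each component's share of variance is its eigenvalue divided by the total, which for a principal component extraction on 11 standardized items is 11 (the trace of the correlation matrix). A short check, assuming that is the extraction method used:

```python
import numpy as np

# Eigenvalues as reported in Annexure 2 (11 items, so they sum to ~11).
eigenvalues = np.array([4.395, 1.912, 0.924, 0.919, 0.649, 0.571,
                        0.490, 0.433, 0.322, 0.213, 0.171])

pct_variance = 100 * eigenvalues / eigenvalues.sum()
cumulative = np.cumsum(pct_variance)

# Kaiser criterion: retain components with eigenvalue greater than 1,
# which is why only two components appear in the extraction columns.
retained = int((eigenvalues > 1).sum())
```

With these values the first component explains about 39.96% of variance and the first two together about 57.34%, matching the table.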

13. Customer Relationship Management in Insurance Sector: A Comparative Study of L.I.C. & ING Vysya Life Insurance

Alok Mittal, Ruchi Saxena

Relationship marketing implies attracting, maintaining and enhancing customer relations. It is beneficial because acquiring a new customer is more costly than retaining an existing one. "Customer is the God" is an apt saying around the world, and insurance companies operating worldwide work on the same lines. The main function of insurance is to provide protection against the possible chances of loss. The last decade has witnessed a proliferation of insurance services all over the world, and efforts to integrate the Indian economy into the borderless global economy have unleashed a host of challenges for the financial sector. During this time the insurance business has also grown manifold, and the increase in the number of insurance companies has given Indian consumers a wider choice. Consumer behaviour is a complex phenomenon, and even marketing research studies at times fail to predict it accurately. The results of this study can provide important insights to customers who have shortlisted these two organizations in their consideration set, enabling them to take an informed decision while choosing an insurance company. The findings will also provide insurance sector organizations with insights into CRM practices for customer acquisition and retention.

INTRODUCTION

According to Drucker (1979), "The essence of CRM is to identify, acquire and retain customers." To retain customers, therefore, an organization must focus on measuring and managing individual relationships. Chakraborthy (1997) further holds that in a service industry like insurance the quality of customer service is of prime significance, particularly in the context of sustained business growth. Insurance organizations are unique in that they produce and deliver services instantaneously at the delivery points, that is, at branches (Godeshwar, 2000). This has an overwhelming impact on the customer's psyche and makes him or her highly sensitive to the quality of services. Since the business relationship between the insurance organization and the customer is not a one-time, transitory one but relatively permanent and enduring, it needs to be nurtured with good quality customer service. In the post-reform era, which is becoming frighteningly competitive day by day, only those insurance organizations with a very clear customer focus will have better chances of survival and growth.

Insurance services in India seem to have travelled a full circle. They started as a private activity and came under Government control with a view to promoting socio-economic development; accordingly, L.I.C. was created on 1 September 1956. With the onset of financial deregulation and the introduction of the L.P.G. (Liberalization, Privatization & Globalization) reforms, private sector participation was permitted by the Government of India, the Insurance Regulatory and Development Authority (IRDA) was set up, and many private sector organizations now operate in the Indian insurance market. Insurance services are high-contact services, and the real challenge lies in managing the customers (Shankar Ravi, 2002). With increasingly cut-throat competition, it has become essential for insurance companies to develop long-term relationships with customers.

CONCEPT OF LIFE INSURANCE

Wherever there is uncertainty there is risk, which cannot be averted and which involves multifaceted losses. Since we have no command over uncertainties, it becomes essential to think of a device that spreads the loss. It is in this context that we think of insurance, a social device to accumulate funds to meet uncertain losses. The main function of insurance is to provide protection against the possible chances of loss. It eliminates the worries and miseries of losses arising from destruction of property and death. Further, it provides capital to the national economy, since the accumulated funds are invested in productive heads.

Insurance is essentially a cooperative endeavour. Under any insurance arrangement, a large number of persons in effect agree to share a loss which a few of them are likely to incur in future. Such sharing has the advantage that the individual share of loss is relatively small, and when the sharing is done amongst a large number of persons, the individual share remains fairly steady from year to year. Such an association of persons for sharing anticipated losses may be brought about voluntarily by all participants, or may be organized by a few individuals or by an insurance company.

The function of insurance in its various forms is to protect the few against the heavy financial impact of anticipated misfortune by spreading the loss among the many who are exposed to risk of a similar nature. While it is not possible to predict which individuals amongst the many participants are likely to be the victims of misfortune, it is often possible to forecast the quantum of loss which the group as a whole may suffer. The sharing of such loss amongst the participants ensures that the victims are compensated for the loss suffered by them. As a consequence, the heavy and uncertain loss to some is neutralized by a definite contribution of moderate amount that every participant is required to make. In other words, people exposed to the same risks come together and agree that if one of the members suffers a loss, the others will share it and make good the loss to the person affected.


Life Insurance

Life insurance is the business of effecting contracts of insurance upon human life, including any contract whereby the payment of money is assured on death or on the happening of any contingency dependent on human life, and any contract which is subject to the payment of premiums for a term dependent on human life.

CHARACTERISTICS OF LIFE INSURANCE

1. In life insurance, unlike in general insurance, the promise has to be redeemed sooner or later. No claim is paid on a fire insurance policy if there was no fire during the term of the policy; but the holder of a life insurance policy will have to be paid, earlier if he dies or later if he survives the term.

2. The amount payable on a claim arising in life insurance is not in doubt: it is as mentioned in the policy. The amount payable on a claim in general insurance depends on the extent of damage and has to be determined through surveys and assessment.

3. Most of the claimants have not suffered a loss. They are survivors, asking for fulfilment of a promise in circumstances which are not tragic.

4. Claimants of death benefits are people different from the ones who had taken out the policy, and they perhaps know little about the circumstances and conditions under which the policy was taken and had been looked after.

5. Almost all policies are long-term ones. Most are for terms of 15 years or more; there could even be terms of 40 years or more.

INDIAN HISTORY OF LIFE INSURANCE

The earliest available references to some form of insurance in India are found in the codes of Hammurabi and Manu (Manav Dharma Shastra). The Aryans practiced the policy of Yogakshema, as cited in the Rig Veda, suggesting that some form of community insurance existed about 3000 years ago. The business of life insurance in its existing form started in India in 1818 with the establishment of the Oriental Life Insurance Company in Calcutta. The first organized effort to establish a life insurance office in India was made in 1870 with the emergence of the Bombay Mutual Assurance Society Limited. In the initial years this society was content to operate on a limited scale; its business was later transferred to the Oriental Government Security Life Assurance Company Limited, established in 1874, the first proprietary life office to be formed in India. The Swadeshi movement of 1905 provided impetus to the formation of other insurance companies. A further spurt in the formation of new companies was witnessed during the Second World War, when inflationary pressure tended to swell the volume of business written in the country.

Important Milestones

Some of the important milestones in the life insurance business in India are:

1912: The Indian Life Assurance Companies Act was enacted as the first statute to regulate the life insurance business.

1928: The Indian Insurance Companies Act enabled the government to collect statistical information about both life and non-life insurance businesses.

1938: Earlier legislation was consolidated and amended by the Insurance Act, with the objective of protecting the interests of the insuring public.

1956: 245 Indian and foreign insurers and provident fund societies were taken over by the central government and nationalized. The Life Insurance Corporation was thus established on 1 September 1956 through a special Act of Parliament (the LIC Act), with a capital contribution of Rs.5 crores from the Government of India.

Insurance Sector Reforms

In 1991, the then finance minister Dr. Manmohan Singh initiated economic reforms in India which laid the foundation for, and paved the way to, the global mantra of Liberalization, Privatization and Globalization (LPG). During the LPG process the need for privatization of the insurance sector was felt, and so in 1993 the Malhotra Committee, headed by former Finance Secretary and RBI Governor R.N. Malhotra, was formed to evaluate the Indian insurance industry and recommend its future direction. The committee submitted its report in 1994; some of its key recommendations included:

Structure

a) The stake of the Government in the insurance companies should be brought down to 50%.

b) The Government should take over the holdings of GIC and its subsidiaries so that these subsidiaries can act as independent corporations.

c) All the insurance companies should be given greater freedom to operate.

Competition

a) Private companies with a minimum paid-up capital of Rs.1 billion should be allowed to enter the industry.

b) No company should deal in both life and general insurance through a single entity.

c) Foreign companies may be allowed to enter the industry in collaboration with domestic companies.

d) Postal Life Insurance should be allowed to operate in the rural market.

e) Only one state-level life insurance company should be allowed to operate in each state.

Regulatory Body

a) The Insurance Act should be changed.

b) An insurance regulatory body should be set up.

c) The Controller of Insurance (currently a part of the Finance Ministry) should be made independent.


Investments

a) Mandatory investments of the LIC Life Fund in government securities should be reduced from 75% to 50%.

b) The General Insurance Corporation (GIC) and its subsidiaries should not hold more than 5% in any company (their current holdings to be brought down to this level over a period of time).

Customer Service

a) LIC should pay interest on delays in payments beyond 30 days.

b) Insurance companies must be encouraged to set up unit-linked pension plans.

c) Computerization of operations and updating of technology should be carried out in the insurance industry.

The committee emphasized that in order to improve customer service and increase the coverage of the insurance industry, it should be opened up to competition. At the same time, the committee felt the need to exercise caution, as any failure on the part of new players could ruin public confidence in the industry. Hence it was decided to allow competition in a limited way, by stipulating a minimum capital requirement of Rs.100 crores. The committee also felt the need to provide greater autonomy to insurance companies in order to improve their performance and enable them to act as independent companies with economic motives; for this purpose it proposed setting up an independent regulatory body. Later, in 1999, an insurance bill was passed in Parliament to set up a watchdog and apex body for the insurance industry, and thus the Insurance Regulatory and Development Authority (IRDA) came into existence to lay down rules and regulations for private players in the industry. IRDA greeted the private players with stringent rules and regulations to ensure that the history of the 1950s (in the context of malpractices) would not repeat itself.

LIC OF INDIA

In 1956, LIC had 5 zonal offices, 33 divisional offices and 212 branch offices, apart from its corporate office. Today all 2048 branches across the country are covered under front-end operations, and all 100 divisional offices have achieved the distinction of 100% branch computerization. A Metropolitan Area Network (MAN), connecting 74 branches in Mumbai, was commissioned in November 1997, enabling policyholders in Mumbai to pay their premium or get their status report, surrender value quotation, loan quotation, etc. from any branch in the city; the system has been working successfully. All 7 zonal offices as well as the MAN centres are connected through a Wide Area Network (WAN).

An Interactive Voice Response System (IVRS) has been made functional at 59 centres all over the country. It enables customers to ring up LIC and receive information about their policies on the telephone (e.g. next premium due, status of policy, loan amount, maturity payment due, accumulated bonus, etc.); this information can also be faxed on demand. LIC's Electronic Clearing Service (ECS) and Automatic Teller Machine (ATM) premium payment facilities add to customer convenience. Apart from on-line kiosks and IVRS, information centres have been commissioned at Mumbai, Ahmedabad, Bangalore, Chennai, Hyderabad, Kolkata, New Delhi, Pune and many other cities. With a vision of providing easy access to its policyholders, LIC has launched its Satellite Sampark offices; these satellite offices are smaller, leaner and closer to the customer. LIC works with the following philosophy: "Explore and enhance the quality of life of people through financial security by providing products and services of aspired attributes with competitive returns, and by rendering resources for economic development."

Customer Relationship Management as Practiced in LIC

Review and improvement of processes

a) Recognizing the branches through a single window system.

b) Implementation of a request tracking module for generation of Management Information System (MIS) control over policyholders' communication.

c) A minimum of 50% of new policies to be issued through the Green Channel, or an improvement of 10% over last year's ratio.

d) ECS for payment of claims.

e) Alternate channels for premium payment, such as the Internet, kiosks, ATMs and ECS, to be popularized on a large scale; 2000 additional collection points to be activated; at least 5% of the total number of collections to be made through alternate channels; ECS to be extended to all metros; post office services, computerized printing of cheques, etc. to be adopted.

Data purity and management

a) 100% completion of Salary Saving Scheme (SSS) address masters, loan purification and IPP master purification.

b) Effective usage of Customer Relationship Management (CRM) by:
   (i) capturing dates of birth in policy masters;
   (ii) identification of customers for new business for various niche products;
   (iii) revival of lapsed policies on a continuous basis;
   (iv) identification of orphan policies;
   (v) capturing mobile and telephone numbers of customers for implementation of SMS alerts.


Customer Retention and Relationship Building

a) Use of customer ID for strengthening customer relationships and for business growth.

b) A committee on customer relations to play an important role in reviewing and maintaining various aspects of customer servicing.

c) Grievance redressal machinery made responsive, with all channels of communication functional and effective.

d) SSS servicing through special camps and frequent liaison with policy agents.

e) Customer contact programmes made a more meaningful and participative exercise.

f) A special drive for adoption of orphaned policies by agents, and regular customer meets for:
   (i) covering a wide spectrum of policyholders;
   (ii) providing the latest information about L.I.C. products and services;
   (iii) promoting brand image;
   (iv) getting feedback to assess customers' perceptions.

Keeping the Customers Informed

a) Usage of information technology for better security.

b) Interactive Voice Response System.

c) LIC's website, providing information about LIC and its subsidiaries and the products offered by them, along with:
   (i) grievance redressal machinery;
   (ii) claims review committees;
   (iii) consumer affairs committee;
   (iv) citizen's charter;
   (v) customer meets;
   (vi) free phone call facility for policyholders.

ING VYSYA LIFE INSURANCE COMPANY

ING Vysya is also a very reputed company in India. It started its operations in India in September 2001, with two branches and an initial investment of Rs.110 crores, and is headquartered at Bangalore. Today ING Vysya has a dedicated team of 10,000 well-trained life insurance advisors, a customer base of 2.5 lakh, and a network of 115 branch offices located all over India.


Customer Relationship Management as Practiced in ING Vysya

The customer services department (CSD) is centralized at the corporate office at Bangalore. All branches located throughout India send their requirements and clients' dispatch requests to Bangalore. ING Vysya works on a one-tier system. Its CRM process is as under:

Out-Bound Calls

a) Bonds, refunds, maturity, surrender, claims and bonus cheques are dispatched from the centralized head office at Bangalore.

b) SMS / telephone reminders are sent to clients for payment of the due premium 21 days prior to the due date.

c) Premium overdue intimation is sent 15 days after the due date.

d) Information about policy lapse is sent 30 days after the due date.

e) Lapse reinstatement (policy revival) reminders are sent after the 2nd, 3rd and 6th month from the due date. In addition:
   (i) an ECS facility is provided for collection through banks;
   (ii) an SMS reminder is given before presenting the cheque to the bank for recovering the remaining balance;
   (iii) if collection of the premium fails due to insufficient funds, a reminder is sent to the client to maintain sufficient balance in the account;
   (iv) birthday messages, anniversary messages, etc. are sent.
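The out-bound contact schedule described above is essentially a set of date offsets around the premium due date. A minimal sketch, with function and field names that are illustrative rather than ING Vysya's actual system:

```python
from datetime import date, timedelta

def contact_schedule(due: date) -> dict:
    """Customer-contact dates around a premium due date, following the
    offsets described above: reminder 21 days before, overdue intimation
    15 days after, lapse notice 30 days after."""
    return {
        "premium_reminder": due - timedelta(days=21),   # SMS / telephone call
        "overdue_intimation": due + timedelta(days=15),
        "lapse_notice": due + timedelta(days=30),
    }

schedule = contact_schedule(date(2009, 6, 30))
# premium_reminder falls on 2009-06-09, three weeks before the due date.
```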

In-Bound Calls

ING Vysya has a centralized call centre based at Bangalore.

a) All local queries are also routed through the centralized call centre.

b) Policy owner's service (POS) requests, on reaching the head office, are classified as either acceptable or non-acceptable:
   (i) Acceptable: e.g. switch / redirection / change of address with adequate proof. After the process is carried out, it is confirmed to the client.
   (ii) Non-acceptable: e.g. change of address without adequate proof. A regret letter is then sent to the client.

RATIONALE BEHIND THE STUDY

Increasing affluence in lifestyles, rising income levels, changing preferences, exposure to westernized culture and increasingly high expectations have contributed to the development of a new generation of more demanding customers (Jha, 2002). In the past, customers were simple and less demanding: they were ready to pay premiums on time without asking too many questions. Now they want the quickest and easiest money multiplication method in the shape of insurance.


In today's increasingly competitive environment, Customer Relationship Management (CRM) is critical for corporate success (Sunder, 2000). Delivering high quality services and achieving high customer satisfaction, coupled with customer delight, is closely linked with profit, cost saving and increase in market share (Ganesan, 1994). The longer a company retains a customer, the more business it can do with that customer; service providers are therefore putting their efforts into integrating three elements:

1. People: people play a vital role in this sector, which cannot be supplemented by any other element.

2. Services: being good people alone does not make a good service provider; an organization also has to provide excellent services to satisfy and delight customers.

3. Marketing: marketing should be given due emphasis for the identification of needs and wants, and thereafter for ensuring complete customer satisfaction.

Prior to the introduction of the L.P.G. reforms, people had no choice but to take a policy from L.I.C. With the introduction of new entrants in the market (around 17 today), customers have more options to choose from. The present research explores the Customer Relationship Management (CRM) practices adopted by the Life Insurance Corporation of India and ING Vysya Life Insurance Company, and seeks to compare private and public sector institutions on the basis of CRM.

Objectives of the Study

The study was undertaken with the following objectives:

1. To study the CRM practices in L.I.C. and compare them with the CRM practices adopted by ING Vysya in the Indore region.

2. To identify the various factors which constitute CRM practices.

RESEARCH METHODOLOGY

The Study

The study was exploratory in nature and was carried out to identify the factors which form the basis of CRM in the insurance sector; on the basis of these factors, a comparative analysis of LIC and ING Vysya was done.

Sample Size

A sample of 100 respondents in all was taken, based on a convenience random sampling approach, consisting of 50 L.I.C. policyholders and 50 ING Vysya policyholders.

Tools Used for Data Collection

For data collection, a self-designed questionnaire meeting the objectives of the study was developed, based on a 5-point Likert-type scale and consisting of 22 statements, wherein respondents were asked to indicate their degree of agreement or disagreement with each statement. Fifty respondents owning L.I.C. policies and fifty owning ING Vysya policies were surveyed.

Tools and Techniques Used for Data Analysis

Statistical tools such as the mean and the standard deviation were used, being the most widely used measures of central tendency and dispersion. The 'Z' test was employed for analyzing the data collected through the questionnaire. Hypotheses were developed and their validity was ascertained by applying the 'Z' test: the calculated value of the 'Z' statistic was compared with the critical value of 1.96 at the 5% level of significance. If the calculated 'Z' value was higher than the critical value, the null hypothesis was rejected and the alternate hypothesis accepted; otherwise the null hypothesis was accepted and the alternate hypothesis rejected.
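This decision rule can be written out explicitly. The sketch below assumes the usual two-independent-samples Z statistic with 50 respondents per group; the chapter does not state its exact formula, so small discrepancies from the reported Z-values (e.g. 2.631 for the flexibility item versus roughly 2.58 here) may be due to rounding or a slightly different variance estimate.

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Z statistic for the difference between two independent sample means."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

def decide(z, critical=1.96):
    """Reject H0 (no significant difference) when |z| exceeds the critical value."""
    return "reject H0" if abs(z) > critical else "accept H0"

# Flexibility item from the results: LIC mean 2.5 (s.d. 0.973),
# ING mean 2.0 (s.d. 0.968), assuming n = 50 in each group.
z = two_sample_z(2.5, 0.973, 50, 2.0, 0.968, 50)
# |z| > 1.96, so the null hypothesis is rejected, as in the study.
```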

RESEARCH FINDINGS & DISCUSSION

For each of the twenty-two attributes below, a null hypothesis (H0) and an alternate hypothesis (Ha) were formulated: H0 states that there is no significant difference between LIC and ING on that attribute, while Ha states that there is a significant difference.

1. Branch network
2. Up-to-date technology
3. Technology utilization
4. Customer commitment
5. Empathetic attitude
6. Safe monetary transactions
7. Flexibility
8. Complimentary services
9. Product range
10. Attending customers
11. Branch office ambience
12. Personalized services
13. Responding to customer queries
14. Charitable activities
15. Customer satisfaction
16. Revival of lapsed policies
17. Claim settlement
18. Providing complimentary gifts to customers
19. Providing SMS alert facility for premium renewal
20. Sending greetings and birthday wishes through SMS and e-mails
21. Sending a copy of the proposal form along with the policy
22. Running special policy revival campaigns

Table: Mean values, standard deviations and Z-values for LIC (X1) and ING Vysya (X2)

Sr.  Hypothesis     Mean X1  Mean X2  S.D. σ1  S.D. σ2  Z-value   Result
01   H01 & Ha1      1.8      1.9      0.864    0.866    -0.578    H01 accepted
02   H02 & Ha2      2.3      1.7      1.069    0.671     3.370    Ha2 accepted
03   H03 & Ha3      2.7      2.1      1.099    0.839     3.092    Ha3 accepted
04   H04 & Ha4      2.4      2.3      1.138    0.839     5.076    Ha4 accepted
05   H05 & Ha5      2.5      1.9      1.092    0.818     3.174    Ha5 accepted
06   H06 & Ha6      2.0      1.9      0.968    0.912     0.534    H06 accepted
07   H07 & Ha7      2.5      2.0      0.973    0.968     2.631    Ha7 accepted
08   H08 & Ha8      3.2      2.9      1.233    1.030     1.333    H08 accepted
09   H09 & Ha9      1.9      2.3      1.147    1.006     1.860    H09 accepted
10   H010 & Ha10    2.4      1.7      1.138    0.694     3.763    Ha10 accepted
11   H011 & Ha11    2.3      1.8      0.952    0.747     2.923    Ha11 accepted
12   H012 & Ha12    2.0      1.4      1.020    0.670     3.529    Ha12 accepted
13   H013 & Ha13    2.3      1.6      0.899    0.725     4.347    Ha13 accepted
14   H014 & Ha14    2.7      2.5      1.331    0.862     0.904    H014 accepted
15   H015 & Ha15    1.9      1.8      0.912    0.815     0.584    H015 accepted
16   H016 & Ha16    2.4      2.5      1.126    0.838    -0.507    H016 accepted
17   H017 & Ha17    2.2      2.6      1.212    0.749    -2.000    H017 accepted
18   H018 & Ha18    3.0      3.4      1.615    1.066    -1.465    H018 accepted
19   H019 & Ha19    3.9      2.1      1.376    1.235     6.940    Ha19 accepted
20   H020 & Ha20    4.2      2.6      1.087    1.178     7.270    Ha20 accepted
21   H021 & Ha21    4.0      1.7      1.087    1.132    10.450    Ha21 accepted
22   H022 & Ha22    2.6      2.4      1.311    0.972     0.689    H022 accepted

(The Table value of Z at 5% Level of Significance=1.96) From the above results we observe that there is no significant difference in the branch network of LIC and ING Vysya in Indore region. However in rural areas LIC has got a very good branch network, which ING is not having at present. As far as up to date technology is concerned alternate hypothesis has been accepted showing that there is significant difference between the two organizations in level of automation. On the basis of higher mean value we can infer that LIC has better up to date technology than ING Vysya. Similar reasoning can be cited in case of technology utilization by both the organizations. A very important parameter of CRM is fulfillment of promises and customer commitment. Though acceptance of alternate hypothesis implies no significant difference between the two organizations on this aspect of CRM, still higher mean value in case of LIC shows its superiority and better services over


ING. Similarly, in the case of empathy there is no significant difference between the two organizations, but LIC still scored better in terms of sympathetic attitude. As far as safety of monetary transactions was concerned, customers felt equally safe with both organizations. This could be because ING Vysya happens to be one of the oldest private sector banks in the Indian market and has been able to generate a feeling of trustworthiness among its customers. In terms of flexibility and responsiveness, however, a vast difference was evident between the two organizations: LIC was found to be more active in attending to customer problems. As far as providing complimentary services and gifts, a wide variety of products, customer treatment, charitable activities, quick settlement of claims and revival of lapsed policies were concerned, both organizations fared equally well. A significant difference was observed between LIC and ING in sending a copy of the proposal form along with the policy to customers, and similarly in sending greetings and birthday wishes through SMS and e-mail. This could be because ING, being a new entrant in the insurance sector, has a very limited number of customers to serve, whereas LIC caters to the largest number of customers and hence was in a position to create goodwill and a better brand image through its wide branch network and employee base. LIC definitely scored better, but ING followed closely.

LIMITATIONS OF THE STUDY
Every research study has certain limitations, and the present research is no exception. The study had the following limitations:
1. The study was carried out in the Indore region and the findings are therefore applicable to the Indore region only; they cannot be generalized for the entire nation.
2. Bias on the part of respondents while responding to the questionnaire might have affected the accuracy of the results.
3. Due to time and other constraints, a small sample size of 100 may not give a true picture of the research problem.

CONCLUSION AND DIRECTIONS FOR FUTURE RESEARCH
The Indian insurance market is now under the influence of competition. The private insurance companies are advertising to create their market, whereas LIC is reaping the benefits. LIC has been able to show phenomenal growth through single premium policies at rates of return above what could be justified by the then prevailing interest rates, but IRDA has started wielding the stick to ensure that all players (including LIC) offer policies that are in line with market conditions. Indeed, the real utility of a product like insurance arises when the customer is no more; therefore, trust is an extremely important factor. There are two key elements in customer relationship management: promise and trust (Anton, 1996). Promises relate to the assurances that marketers offer to maintain the relationship with the customer. Trust is the other component in the relationship building process; continuous product innovation, consistency of product quality and other features of the marketing mix have been the basis of that trust (Sridhar, 2002). In insurance, reliability means that the insurance company


delivers its promises - promises about delivery, service provision, problem resolution and pricing. Customers want to do business with organizations that keep their promises, particularly promises about the core service attributes (Berry, Shostack and Upah, 1983). Here LIC definitely scores. Further, unlike UTI in the mutual fund sector, LIC has been relatively more proactive in changing with the times. Further studies can be carried out on new service development, measurement of insurance company image, advertising effectiveness and market share analysis. The present study can be replicated with a larger sample covering a wider geographical area, comparative studies of LIC with other insurance organizations can be carried out, and similar studies can be done for other service sector organizations such as banks, airlines and telecommunications. There is scope to explore market potential through proper market surveys followed by systematic segmenting, targeting and positioning of the products and services offered by various insurance companies. Companies should strive hard to maintain a proper balance between their offerings and the expectations of their customers (Mishra, 2000). Insurance companies can devise effective methods for handling customer grievances and can try to reduce procedural formalities in order to come closer to their customers. Research has shown that if a customer complaint is handled efficiently, 95 percent of those customers return to do business (Catherine, 1988). Thus a "no error" attitude and sound complaint management lead to higher customer satisfaction. Various innovative schemes can be devised and implemented by insurance companies at regular intervals to retain their customers (Singh, 2001). The findings of this study can provide important insights into CRM practices for insurance sector organizations seeking customer acquisition and retention.

Bibliography
Anton, T. (1996), Customer Relationship Management: Making Hard Decisions with Soft Numbers, New Delhi: Prentice Hall.
Berry, L.L., Shostack, G.L. & Upah, G.D. (1983), Relationship Marketing - Emerging Perspectives on Services Marketing, Chicago: American Marketing Association.
Catherine, DeVrye (1988), The Economic Times, Mumbai.
Chakraborthy, K. (1997), The Challenges Ahead, IBA Bulletin, 19(3), 23-28.
Cook, S. (1997), Customer Care, London: Kogan Page.
Drucker, Peter (1979), Management: Tasks, Responsibilities and Practices, New York: Harper & Row, pp. 17-19.
Ganesan, S. (1994), Determinants of Long Term Orientation in Buyer-Seller Relationships, Journal of Marketing, 58 (April), 1-19.
Godeshwar, B. (2000), Customer Relationship Management, Udyog Pragati, 24(3), 21-27.
Jha, S.M. (2002), Service Marketing, Mumbai: Himalaya Publishing House.
Mishra, Puja (2000), Customer Relationship Management, Indore Manager, 34-35.
Shankar, Ravi (2002), Services Marketing - An Indian Perspective, New Delhi: Excel Books.
Singh, Deepali (2001), Information Technology Enabled Customer Relationship Management, Journal of JIMS 8M, January-March.
Sridhar, G. (2002), Customer Relationship Management, JIMS 8M, 37-43.
Sunder, K. Shyama (2000), Coming Closer to the Customers, Indian Management, December, 49-51.

14
Challenges Faced by Marketing Managers in 21st Century
Bharti Venkatesh, Vikas Pandey

A marketing manager is always in search of a well-knit marketing mix and a marketing strategy whose elements are mutually consistent. With increasing global competitiveness in the 21st century, a global marketplace has developed because of factors such as explosive growth in world GDP, rapid expansion in merchandise trade, cost cutting and improving product quality by firms seeking competitive advantage, and the revolution in communication technology. The present paper discusses the challenges faced by marketing managers in areas such as information technology, corporate strategy, ethics, globalization, public values and social responsibility.

INTRODUCTION
Marketing is a process which identifies, anticipates and satisfies customer needs efficiently and profitably. Modern marketing begins with the customer and ends with the customer. The functions of marketing can be broadly classified as buying, selling, assembling, transportation, storage, financing, grading and marketing information. There are a large number of decision areas in marketing in which managers have to act, most of which pertain to the four basic elements of the marketing mix: product, price, place and promotion. A marketing manager is always in search of a well-knit marketing mix and a marketing strategy whose elements are mutually consistent. With increasing global competitiveness in the 21st century, a global marketplace has developed because of factors such as explosive growth in world GDP, rapid expansion in merchandise trade, cost cutting and improving product quality by firms seeking competitive advantage, and the revolution in communication technology (Myer, 2002). The challenges faced by marketing managers include information technology; corporate strategy; ethics, public values and social responsibility; globalization; the role of the government; ecological issues; quality and productivity; workforce diversity; change; and empowerment, of which the information technology challenge is the major one.


Organizations that want to maintain leadership in the economy and technology that will emerge in the future need to give due consideration to the social position of knowledge professionals and their values. The argument is developed that marketing and marketing communication are in transition, moving from the historical marketing approaches of the 1960s, which focused on the 4Ps, to a new, interactive marketplace in the twenty-first century. A structural model of three marketplaces is presented based on the location and control of information technology (Kotler, 1988). The premise is developed that as information technology shifts from one market player to the next, definitive changes in the need for communication develop. A description of the development of the Integrated Marketing Communication concept is furnished, and based on that, a four-level transition process is proposed as organizations move from one stage of integrated marketing communication development to another, generally according to their ability to capture and manage information technology.

FRAMEWORK SUGGESTED BY MARKETING MANAGERS REGARDING CHALLENGES IN THE 21ST CENTURY
Based on observation and experience, marketing managers suggest a framework of six emerging themes which regularly appear when examining marketing in the high-technology arena and which are closely related to the key characteristics of high-tech products. Each of the themes has implications for the marketing task facing managers of high-tech products: they reinforce the need to address both internal and external marketing issues, and the importance of further research to develop paradigms appropriate to successful commercial activity in high-technology industries. The framework includes the "softer" problems of technology seduction and the usefulness of concepts such as the technology life cycle, and also covers the need to focus on credibility, standards, positioning and infrastructure, all of which affect the way marketing managers orchestrate the marketing mix. The themes in no way replace standard marketing approaches, but they do provide a background for the formulation of marketing strategy and a basis for further development in this area.

PROBLEMS FACED BY MARKETING MANAGERS OF SSI IN THE 21ST CENTURY
Marketing managers in all types of business enterprises face problems; in small-scale units, as in large-scale units, these often arise from lack of knowledge, inadequate funds and lack of experience. The challenges faced by marketing managers of small-scale industry (SSI) in the 21st century include the following:

Competition from the Large-scale Sector
Because of scarcity of resources, small entrepreneurs usually use inferior technology; as a result their products are not standardized, and the obsolete technology they use translates into inferior product quality (Myer, 2002).

Lack of Marketing Knowledge
Most small-scale entrepreneurs are not highly educated or professionally qualified enough to have knowledge of marketing concepts and strategy. This lack of expertise further inhibits their understanding of prevailing market trends.


Lack of Sales Promotion
Small units lack the resources and knowledge for effective sales promotion. Large-scale units mostly have well-known brand names and huge resources to spend on advertising and other sales promotion tools. Small-scale units, on the other hand, have to pay heavy commissions to dealers for their selling efforts, which reduces profit margins.

Weak Bargaining Power
At the time of purchasing inputs, large-scale entrepreneurs manage to get huge discounts and credit. Such facilities are not available to small units.

Product Quality
It is costly and difficult for a small unit to have quality testing and evaluation equipment.

Credit Sales
The small-scale enterprise is invariably called upon to sell on credit; however, when it comes to purchasing inputs, it is denied liberal credit facilities. As a result, small units have to borrow more working capital than actually needed, which increases the overall cost of production and prices, making their products uncompetitive.
SSI marketing managers in the 21st century are likely to face the following problems arising from large-scale industry:
1. Small-scale industry faces a threat from the emerging large-scale industry, because if SSI grows the monopoly of large-scale industry will vanish.
2. SSIs lack the innovative ideas needed to sustain outstanding organizational performance.
3. Small-scale industry also faces a product quality problem in comparison with large-scale industry: mass production with quality testing is feasible for large units but difficult for small ones.
4. Some large-scale industries engage in speculative transactions, which involve higher risks and create problems for small-scale industry.

Any organization needs to get started with a good marketing strategy. Good strategic planning is the most critical contributor to the success of any business. It helps us look at all angles of our operation and pinpoint (a) the current position of the organization, and (b) where the organization needs to be positioned for growth and profits. A good strategic plan will include:
• a definitive outline of the objectives and goals;
• an in-depth evaluation of our operational practices, internal and external;
• a hard look at the products and services and their unique selling features;
• a comprehensive analysis of the target customers and a new customer acquisition plan;
• a review of the top competitors and an aggressive reduction plan;
• a thorough audit of the current marketing mix, and a plan for growth.


Advertising and Marketing
Marketing managers have developed a comprehensive offering of efficient, effective and affordable digital marketing and advertising solutions, including website marketing and Internet advertising, business-to-business and consumer marketing, corporate identity packages, lead generation, strategic planning, and new product launches and promotions (Myer, 2002).

To Have a Good Product Launch and Promotion
For any new product launch to succeed, it must be the right product at the right time, with high visibility, good credentials and clean delivery. The investment in a new product launch can often be large, the risk substantial, and the price of failure catastrophic.

Realities of New Product Launches in the 21st Century
Even the best new products from the most well-known companies don't stand a chance in today's marketplace without valiant launch introductions and strong promotional continuity. With 91% of buyers using the Internet to shop, we no longer have a choice: we must maximize launch impact with a bold online presence if we want to succeed. About 85% of all visits to websites originate from search engines; if our website doesn't rank high enough on them, potential customers won't find us, and if we allow our website and our new product to fall into the "lost" category, the launch is doomed to fail. Against fierce competition and a barrage of promotional messaging, our product launch communiqué has to be strong, unique, memorable and useful; it has to be timely, informative and purpose-driven; and the presentation, in all formats, must be qualitative, quantitative and far-reaching (The Hindu, 2004).

Realities to Be Faced by Marketing Managers in the 21st Century
The digital age has transformed the way companies operate; traditional marketing solutions don't work anymore. We need Internet marketing strategies that include a high-profile website, a high-visibility Internet advertising plan, and electronic communication capabilities if we want to succeed in today's business world. Even if the primary business goals haven't changed much over the years, marketing strategies and practices should look very different. The way we communicate internally and externally, transact business, project our image, and advertise to new prospects can all become big problem areas if we have lost sight of the big picture. Lead generation seems a pretty basic concept - identifying and qualifying customers who are ready to buy what we are trying to sell - yet no issue is more important and less understood in the digital age. It is certainly not a new idea: sales professionals have always practiced lead generating techniques. Trade shows, telephone surveys, cold calls, direct mail, workshops and seminars - these approaches have all been somewhat successful historically at producing qualified leads, albeit slowly and sometimes at great cost. Dynamic Digital Advertising knows what works better - and is far less expensive than the old methods.


The New Reality of Lead Generation
For successful marketing in the 21st century, it is critical that companies understand the profile of their digital-age prospects. Buyers today have far more choices, much more control, and much better communication tools than even a few years ago - and the tool consumers use most to make informed buying decisions is the Internet. The Internet, and our growing reliance on it, has raised the value of website traffic to lofty heights. That is why affordable lead generation initiatives are so important to an overall marketing and sales plan; unfortunately, in many companies such initiatives are still undervalued, poorly implemented and misunderstood (Sharavanti, 2004). Marketing managers must decide which lead generation programs are right for the organization. Many companies invest time and money in creating websites that don't help with online lead generation and don't produce the sales they should. So the first step should be to maximize the website's marketing potential with the following formula: Internet + Intranet + Extranet = Net Profit. In today's marketplace, a company that lacks a strong Internet presence with a search-engine-optimized corporate website is omitting a critical communication tool for both its customers and its employees. The absence of a company Intranet site can adversely affect internal communications, with a trickle-down negative impact on corporate culture and profit potential. Likewise, Extranet sites are becoming more common and will soon be expected tools for information exchange in the digital age. A company's "corporate identity" once loosely referred to its logo, letterhead, business cards and the sign outside the building. Then came the Internet and the World Wide Web; as information technology evolved, the definition of "corporate identity" became vastly more complex and its value vastly more important.
Today, a company's name, logo and look are only the foundation layer of its corporate identity, which now embodies its entire persona and every contributing factor. Whether a company is new and looking to build its corporate identity, or established with an identity that needs remodeling, it can call the experts: Dynamic Digital Advertising offers the needed corporate identity solutions. Corporate identity now depends on digital technology. Comprehensive digital capabilities and an innovative approach position a company for the digital age of marketing, and today's leader is the one that understands the new realities of marketing in the 21st century. A company that does not take advantage of all the available digital technology to build its corporate identity and brand should take another hard look at its marketing plan.

Services Marketing: Ready to Serve - Laterally - in the 21st Century
Historically, companies have striven hard to reduce the cost, time and complexity involved by implementing technological innovations, but the need of the hour is to put on the creative hat and make an everlasting impression. The logic and analysis of the left brain has been increasingly overshadowed by creative inputs from the right. Cooking food in front of customers as seen in theme restaurants, strategic tie-ups like M-ticketing between Hutch and Jet Airways, exotic customer experiences like the Pizza Hut Taco Bell, and novel concepts like health tourism are examples of the increasing emphasis being placed on creativity in


every aspect of marketing. A move towards this would involve creating an innovative ecosystem encompassing people, process and physical ambience. The game is no longer about selling services, but marketing enduring experiences.

Retelling Retail Management in the 21st Century
Retail, the fastest growing sector in the Indian economy, is no longer the mundane activity we recall. The manifestation of Western-style malls can be seen in departmental stores, hypermarkets, supermarkets and specialty stores, which are revolutionizing traditional markets and have changed the face of metros and second-rung cities alike. "Going big" is the in format now, through which retailers are increasing touch points and providing a wide array of choices to consumers. Retail is being introduced afresh to the Indian consumer, who used to spend grudgingly but now revels in this new shopping experience. With Wal-Mart and Reliance vying for a piece of this pie, the question remains: will this lateral thinking be able to convert footfalls into actual sales?

Changing Face of Marketing in the Creative Economy of the 21st Century
The age of tangible core competencies is long gone; attributes and strategies that saw corporations through the 20th century won't see them through the 21st. Companies strive to find new and more meaningful competencies within the organization which can create sustainable performance. In this new environment the Darwinian struggle will be won by smart companies that can harness creativity to generate top-line growth (Kotler, 1988). Companies are applying innovation to products, services, manufacturing processes, managerial processes and organizational design. But the challenge companies now face is to connect with their customers' emotions, to link research and development labs to consumer needs, and to construct maps showing new opportunities for innovation. In an age when the customer is hailed as God, it has become imperative for companies to heed the customer. For this to happen, organizations need to put in place processes for innovation as they would for any other functional area. Nurturing a culture which supports and encourages creativity is the only way organizations can keep themselves ahead of the market curve and satisfy the customer. Corporations are thus converging towards "The Creative Economy", and many have found unique ways of flourishing under it. Companies in the B2C segment have been at the forefront in leveraging innovation to hone their marketing mix and create customer delight. The challenge does not end with improving the elements of the marketing mix alone: with convergence being the order of the day, the challenges have become multifaceted. Increasingly, organizations face, and are trying to address, challenges in product and market innovation, and innovation in services, distribution and retailing.

New Strides of Marketing Managers in the 21st Century
Innovation in products, processes and markets - these are the key differentiating factors in a creative economy. Companies are innovating and improving in ways that provide value to


both consumers and their own brands. Creativity has become integral to every aspect of marketing, with manifestations visible in the product innovations of 3M, the rural marketing focus of the likes of HLL and ITC, the distribution of Saffola through the Mumbai dabbawallas, and new economy media like Web 2.0, RSS and blogs. The underlying strategy in these examples is to create new products, capture new demand and enter uncontested markets while simultaneously ensuring low cost and differentiation, thus making the competition irrelevant. It is not about capturing markets anymore; it is about creating them. In future, the delivery of goods, services and spare parts will be handled by e-commerce organizations, while far more advanced theoretical and information technology knowledge will be required for other kinds of knowledge work such as market research, product planning, advertising and promotion.

References
Kotler, Philip (1988), Marketing Management: Analysis, Planning, Implementation and Control (6th Edition), New Delhi: Prentice Hall of India.
Meldrum, M.J. (1995), Marketing High-Tech Products: The Emerging Themes, European Journal of Marketing, 29(10), 45-58.
Myer, Klaus E. (2002), Management Challenges in Privatization Acquisitions in Transition Economies, Journal of World Business, 37(4), 266-276.
Schultz, Don E. & Schultz, Heidi F. (1998), Transitioning Marketing Communication into the Twenty-First Century, Journal of Marketing Communications, 4(1), 9-26.
Sravanthi, Challapalli (2004), The Rural Promise, The Hindu Business Line Internet Edition, Financial Daily, Aug 12.
Stevenson, Howard H. & Jarillo, Carlos (1989), Entrepreneurship Concepts, Theory and Perspective, Berlin Heidelberg: Springer.


15
Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches
Shilpa Bhakar, Shailja Bhakar, Neha Pareek

This study investigates consumer attitudes in India towards local and foreign brand names, against a background of the increasing prevalence of foreign brand names and country-of-origin stereotypes ranging from positive to negative. It was designed to extend knowledge of cognitive processing of country-of-origin cues by refining the concept of country image and investigating its role in product evaluations. The main purpose was to investigate consumer awareness and perceptions of "foreign" versus "Indian" brand names for different product categories, and to examine the effect of a brand's country of origin on consumer preferences.

INTRODUCTION
The concept of brand image aptly sums up the idea that consumers buy brands not only for their physical attributes and functions, but also because of the meanings connected with the brands. Imagery is a mixture of notions and deceptions, based on many things; at times, imagery is indeed largely an illusion. An image is an interpretation, a set of inferences and reactions to a symbol, because it is not the object itself but refers to and stands for it (Levy and Glick, 1973). A product is a symbol by virtue of its form, size, color and functions. Its significance as a symbol varies according to how much it is associated with individual needs and social interactions. A product, then, is the sum of the meanings it communicates, often unconsciously, to others when they use it. A brand can be viewed as a composite image of everything people associate with it; these impressions determine how a prospective buyer feels about it and influence his selection (Newman, 1957).

COUNTRY IMAGE Country image (CI) has generally been conceptualized and operationalized in one of two ways. Some researchers have treated CI as consumers' overall perceptions, e.g. quality of


products made in a given country (Crawford and Garland, 1988; Etzel and Walker, 1974; Han and Terpstra, 1989; Hong and Wyer, 1989; Howard, 1989). Han and Terpstra (1988) investigated CI on the basis of five dimensions (technical advancement, prestige, workmanship, economy and serviceability) for two products (televisions and automobiles) and found the ratings to be consistent across the product categories. As such, they concluded that generalized country-level CIs might exist. Such an approach assumes that CI is a halo construct (Han, 1989), i.e. consumers are assumed to lack product-level attribute information in memory that can be used in the evaluation process. A second and more common interpretation of CI is its definition as a set of generalized beliefs about specific products from a country on a set of attributes (Bilkey and Nes, 1982). Especially in the case of familiar products, consumers have product-specific knowledge structures in memory, which are well developed. Empirical research repeatedly has demonstrated that consumers do hold different sets of beliefs across different product categories and that attitude towards products from a given country vary by product category (Bilkey and Nes, 1982; Kaynak and Cavusgil, 1983; Roth and Romeo, 1992).

CUSTOMER PERCEPTION
Customer perception is the interpretation process by which consumers make sense of their environment. Many people believe that perception is passive - that we see and hear what is out there quite objectively - but the truth is quite the contrary: people actively perceive stimuli and objects in their surroundings. Consumers see what they expect to see, and what they expect to see usually depends on their general beliefs and stereotypes (Eroglu and Machleit, 1989). Since different groups (segments) of people have different general beliefs and stereotypes, they tend to perceive stimuli in the marketing environment differently. What does all of this mean for marketers? Basically, marketers need to be aware of this fact about perception so that they can tailor their marketing stimuli (ads, packaging, pricing, etc.) to the different segments they are targeting (Johansson et al., 1994). Additionally, perceptual expectations can lead to illusions, and illusions can be used to great effect in packaging and advertising. Research on Eastern European consumers - Russian (Johansson et al., 1994) and Hungarian (Papadopoulos and Heslop, 1993) - has shown that consumers prefer Western products because of superior quality, despite consumer ethnocentric tendencies. Eroglu and Machleit (1989), too, found that a product's technical complexity, as is the case with consumer durables, affects the importance given to consumer evaluations: the more complex the product, the more elaborate the evaluations.

COUNTRY OF ORIGIN
Country of origin (COO) is an image, or extrinsic, variable which works as a summary statistic in consumer decision making (Erickson et al., 1984; Han, 1989; Huber and McCann, 1982; Johansson, 1989). As such, COO can be used as a proxy for judging quality when other information about the product is lacking. From the categorization theory perspective, a country name serves as a categorical cue for consumer information processing. Upon seeing a country-of-origin label on a bi-national brand, consumers are likely to draw an affective judgment associated with the country name: if the country name carries a positive image, attitudes toward the bi-national brand are likely to be positive; if it carries a negative image, negative attitudes are likely to result.

Key Drivers of Organizational Excellence

LITERATURE REVIEW

The notion of image has featured in marketing and consumer research since the 1950s: first as brand image in consumer behavior research (Gardner and Levy, 1955), together with corporate image in marketing (Martineau, 1958), and then as country image in international business (Schooler, 1965). Brand image has been an established term since Gardner and Levy (1955) first popularized it, and the outcomes of brand image research are insightful. In the first place, it is found that people consume products not only for their physical benefits but also for symbolic reasons (Bhat and Reddy, 1998), i.e. consumers are image conscious. Furthermore, a company can monitor its brand images through proper measurement and communicate those images to its targeted consumers by implementing effective strategies (Park, Jaworski and MacInnis, 1986). As corporate image gained popularity in marketing research (Martineau, 1958), criticisms that serious difficulties are associated with the concept (Balmer, 1997) also emerged, and studies in this stream gradually shifted to corporate reputation and corporate identity. Nevertheless, corporate image remains a useful concept. Firstly, every organization holds several images, rather than just one, in the minds of its different stakeholders (Dowling, 1988), and these images influence people's behavior toward the organization and hence its performance in society (Fombrun, 1996). Secondly, a corporate image is a reflection of the organization's inherent personality (Martineau, 1958), projected from the organization to its different audiences. Furthermore, corporate image can be measured by several different methods (Dowling, 1988) and can therefore be monitored and managed (Abratt, 1989).
Finally, there are joint effects between corporate image and brand image in new product evaluation (Keller and Aaker, 1992), making corporate image a strategic issue in marketing management (Gray and Smeltzer, 1985). Likewise, country image has attracted growing attention in international marketing research since the 1960s (Papadopoulos and Heslop, 1993). Brand image has become a vital concept for marketing managers. This is demonstrated by findings which confirm that image considerations guide purchase choice (Dolich, 1969), that products are often purchased or avoided not for their functional qualities but because of how, as symbols, they affect the buyer's or user's status and self-esteem (Levy, 1959), and that a product is more likely to be used and enjoyed if there is congruity between its image and the actual or ideal self-image of the user (Sirgy, 1985). The importance of a brand name (or image) in consumer evaluation of a product is well documented. Consumers who do not have any specific ideas about a product commonly rely on its brand name to infer its quality (Jacoby et al., 1971; Szybillo and Jacoby, 1974). The existence of brand loyalty is good supporting evidence of the importance of brand names in consumer evaluation of products (Ettenson and Gaeth, 1991). The value of a brand lies in what consumers have experienced and learned about it. The resulting brand associations held in a consumer's memory constitute the brand image and affect consumer behavior. Brand associations are thereby important building blocks of customer-based brand equity (Keller, 1993, 2003; Krishnan, 1996), and marketers should aim to optimize the attributes and benefits that consumers associate with the brand, satisfying their core needs and wants (Keller, 2003; Park, Jaworski and MacInnis, 1986).

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches


Such strongly held, favorably evaluated associations that are unique to the brand and imply superiority over other brands are critical for a brand's success (Broniarczyk and Alba, 1994). Hence, brand associations have implications for many marketing mix actions, such as (re-)positioning and (re-)design of a brand (Kaul and Rao, 1995), as well as extending a brand to other product categories (Czellar, 2003). Associations between brands and attributes are often directional (Anderson, 1983; Holden and Lutz, 1992; Farquhar and Herr, 1992; Krishnan, 1996): the association runs from the brand to the attribute and/or the other way around. For example, the brand equity of BMW is affected by the extent to which positive features like safety and sportiness are evoked by that car brand. In addition, whether or not certain cues or attributes enhance brand recall in a purchase or consumption setting contributes to the equity of the brand. Insights into the communalities and asymmetries of these bi-directional associations can lead to recommendations for brand managers. Holden and Lutz (1992) stated that when measuring advertising effectiveness, one has to assess effects on attributes evoked by the brand as well as on attributes that are likely to evoke the brand. Farquhar and Herr (1992) showed that the dual nature of brand associations is an essential part of determining the limits of a brand's stretch. Hence, when assessing brand image, one should consider both brand-to-attribute and attribute-to-brand associations. Previous conceptual and empirical studies related to the description and assessment of brand image have largely ignored the bi-directional nature of brand associations. Exceptions are Farquhar and Herr (1992), Holden and Lutz (1992) and Krishnan (1996), who provided conceptual foundations for studying such associations. However, the extant literature does not present methodological tools adapted to the bi-directional nature of the association data.
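The bi-directional associations discussed above can be illustrated with a toy example. Suppose respondents are cued with a brand and asked which attributes come to mind; row-normalizing the resulting count table gives brand-to-attribute strengths, while column-normalizing gives attribute-to-brand strengths. This is a hypothetical sketch (the brands, attributes, and counts are invented, and in practice the two directions would be elicited in two separate tasks rather than derived from one matrix):

```python
import numpy as np

# Hypothetical survey counts: entry (i, j) = number of respondents who
# named attribute j when cued with brand i. Illustrative data only.
brands = ["BrandA", "BrandB", "BrandC"]
attributes = ["safety", "sportiness", "economy"]
counts = np.array([
    [40, 10, 5],
    [5, 35, 10],
    [10, 5, 30],
], dtype=float)

# Brand-to-attribute strength: normalize by how often each brand
# was used as a cue (row totals).
attribute_dominance = counts / counts.sum(axis=1, keepdims=True)

# Attribute-to-brand strength: normalize by how often each attribute
# was used as a cue (column totals).
brand_dominance = counts / counts.sum(axis=0, keepdims=True)

# The two directions need not agree: an attribute can strongly evoke
# a brand even when the brand only weakly evokes that attribute.
print(np.round(attribute_dominance, 2))
print(np.round(brand_dominance, 2))
```

Comparing the two normalized matrices makes the asymmetry of the association data concrete: each cell holds the same raw count, yet the two directional strengths differ whenever row and column totals differ.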
A variety of methodologies have been proposed to assess and visualize brand images spatially on the basis of brand ratings or associations regarding a set of attributes, the so-called perceptual mapping methods (see, for example, Dillon, Frederick and Tangpanichdee, 1985, or Shocker and Srinivasan, 1979). In this stream of literature, several studies (Jaffe and Nebenzahl, 1984; Olsen and Olsson, 2002; Teas and Wong, 1992; Wong and Teas, 2001) have demonstrated important differences between multi-attribute ratings collected through brand-by-brand judgment of all attributes and those collected through attribute-by-attribute judgment of all brands. However, this stream of literature deals specifically with multi-attribute rating judgments rather than binary associations and, again, presents no methodological tools that account for the directional nature of the data. Here, we aim to contribute to this stream of publications by providing a perceptual mapping procedure to assess brand image based on bi-directional associations. In particular, we present a methodological approach, correspondence analysis of matched matrices (Greenacre, 2003; Greenacre and Clavel, 2002), which provides insightful spatial representations of the communalities and asymmetries between the brand-to-attribute and attribute-to-brand associations. Customer-based brand equity occurs when consumers are familiar with the brand and hold favorable, strong, and unique brand associations in memory (Keller, 1993, 2003). Memory for a concept consists of a network of nodes and linkages among those nodes (Anderson, 1983): the nodes represent concepts and the linkages the relationships between them. The strength of the association linking two nodes reflects the likelihood that activation of one node will activate the other (Higgins and King, 1981). A brand node can have a variety
of associations linked to that node, such as attributes or benefits. Customer-based brand equity implies a certain amount of brand knowledge causing differential consumer responses to the marketing of the brand. Brand knowledge has two components (Keller, 1993, 2003): brand awareness and brand image. Brand awareness relates to the strength of the brand node in memory, as reflected by consumers' ability to identify the brand under different conditions (Alba and Chattopadhyay, 1985). It is often measured by means of brand recall, which refers to the number of consumers who retrieve the brand when no cue at all, or a cue like the product category or an attribute, is given. Mature brands often score higher on brand recall than new brands (Kent and Allen, 1994), which can be attributed to a longer history of media support, purchases, and consumption occasions. Brand image can be defined as consumer perceptions of a brand as reflected by the brand associations held in memory. Brand associations are informational nodes linked to the brand node in memory and contain the meaning of the brand for consumers. The favorability, strength, and uniqueness of brand associations are the dimensions of brand knowledge that play an important role in determining the differential response that makes up brand equity (Keller, 1993). The links in memory are often conceptualized as directional (Anderson, 1983), and may start or end at the brand node. Farquhar and Herr (1992) further elaborated on the dual nature of brand associations and showed that failure to account for directionality and possible asymmetries can lead to incorrect conclusions. One of the dimensions of brand image within the customer-based brand equity model is the strength of the associations between a brand and other concepts, such as attributes. The strength of an association is labeled connectivity by Nelson, Bennett, Gee, Schreiber and MacKinner (1993).
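The correspondence-analysis idea mentioned above can be sketched generically. The following is plain (classical) correspondence analysis of a single brand-by-attribute count table, not Greenacre's matched-matrices variant described in this chapter; the data are invented and numpy is assumed:

```python
import numpy as np

def correspondence_analysis(N):
    """Classical correspondence analysis of a two-way count table N.

    Returns principal coordinates for rows (brands) and columns
    (attributes); proximity in the resulting map reflects association.
    """
    N = np.asarray(N, dtype=float)
    P = N / N.sum()                   # correspondence matrix
    r = P.sum(axis=1)                 # row masses
    c = P.sum(axis=0)                 # column masses
    # Standardized residuals from the independence model r * c'.
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal coordinates: singular vectors rescaled by masses.
    row_coords = (U * sv) / np.sqrt(r)[:, None]
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
    return row_coords, col_coords, sv

# Toy brand-by-attribute association counts (illustrative only).
counts = np.array([[40, 10, 5],
                   [5, 35, 10],
                   [10, 5, 30]])
row_coords, col_coords, sv = correspondence_analysis(counts)
# The first two columns of row_coords and col_coords give a 2-D
# perceptual map of brands and attributes jointly.
print(np.round(row_coords[:, :2], 3))
```

To handle bi-directional data in the spirit of the matched-matrices approach, one would apply such an analysis to the brand-to-attribute and attribute-to-brand tables jointly; the sketch above shows only the single-table building block.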
As our research deals with bi-directional associations, we adopt the terminology of, among others, Ashcraft (1978), Farquhar and Herr (1992), and Loftus (1973), who used the term dominance, which combines the direction and strength of an association. In particular, we use "attribute dominance" to refer to the strength of the directional association from a brand to an attribute, and "brand dominance" for the strength of the directional association from an attribute to a brand. Attribute dominance is operationalized as the number of people who give the attribute in response to the brand and, in a similar way, brand dominance as the number of people who give the brand in response to the attribute, with appropriate adjustments for total frequencies in order to normalize the measures. Traditionally, dominance has been discretized into high and low dominance using somewhat arbitrary thresholds, e.g. at 50 percent by Ashcraft (1978). Because of the significance of brand image marketing, it is important to translate what we know about this concept, or think we know, into the details of what marketing practitioners should do. This task has been complicated, however, by the lack of consensus concerning the components that make up brand image and, consequently, about how it should be managed. As has been noted, those who conceptualize brand image as an attitude are unlikely to accept that it extends to factors beyond the physical product (Reynolds and Gutman, 1984). Others, in contrast, have proposed that the "image" of a brand is composed of factors extrinsic to the product itself. Gensch (1978) made this separation clear when he proposed that product perception consists of two components, the measures of the brand attributes and the "image" of the brand. He defined "image" as a purely abstract concept
which incorporates the influences of past promotion, reputation and peer evaluation of the product. A more moderate view is offered by those who suggest that working only with attributes, or only with abstractions, is not the way to measure or understand image. These authors propose instead that the objective or functional product qualities, as well as the psychological qualities of both user and product, must be accounted for. In this vein, Friedmann (1986) suggests that the "psychological meaning" of products is made up of the product's attribute bundle, the consumer's dominant perceptual mode, and the context in which the perceptual process takes place. While Reynolds and Gutman (1984) confirm this synergistic effect, they discuss the components of brand image in terms of a means-end chain, identifying an implication network which reflects memory linkages as the fundamental component of brand image. They describe a means-end chain as the connection between product attributes, consumer consequences, and personal values, and theorize that image is represented by the synthesis of these components. Stone, Dunphy and Bernstein (1966) distinguish between three main components of an image (its theme, its image proper, and its net evaluation). Levy (1978) talks about brand image as being composed of a mixture of the physical reality of the product and the beliefs, attitudes and feelings that have come to be attached to it. And rather exquisitely, Dichter (1984) describes magic and a product's morality as two of the basic components of its image. Image building, image change, image monitoring and maintenance, product positioning, product differentiation and image segmentation are among the present generation of brand image management activities. While clearly such activities presuppose that a brand's image can be manipulated by marketing practice, the literature fails to agree on the extent to which this is possible. 
In fact, the debate is ongoing as to whether an image is something that is conveyed or something that is received. Bullmore (1984), on the one hand, emphasizes the dependence of brand image creation upon the individual psyche. He refutes the assumption that the image belongs to the brand, countering that an image, like a reputation, can only reside in the minds of people. His contention is that the mind both contains and creates the image, mediated and stimulated by the consumer's experiences. On the other hand, there are those who suggest that the consumer plays a passive role in image creation. They propose that an image is projected to the consumer by the marketer, and that it can be selected, created, implemented, cultivated, and "managed" by the marketer over time. It has even been suggested that false or deceptive product images can be "ordered to be corrected", as if there were a simple transfer between the image presented by the media and the consumer's mind (Scammon and Semenik, 1983). In this context, advertisements have been regarded as a primary vehicle through which images can be imparted or "transferred" to a brand. An amalgam of the above perspectives supports the view that product image is a function of the interaction between perceiver and product stimulus. The product's attributes, the sponsoring organization, the marketing mix, the modes through which people tend to perceive, personal values, experience, the types of people associated with use of the brand, and a number of context variables have all been said to be among the factors that contribute to the development of a particular brand's image.


Durgee and Stuart (1987) suggested that each product or brand has a "meaning profile", which they define, rather circuitously, as the complex of key meanings associated with the product or brand, or what the product means symbolically in the eyes of consumers. Citing the work of those who have studied the philosophy of meaning, they proposed that there are three different ways in which a product could "mean" something: causality, context and similarity. Swartz (1983) proposed that, to the extent that functional differences between brands of the same product are minimal, "message differentiation" could be used as a viable product differentiation strategy. This involves distinguishing one brand from another based on the message communicated by the use or ownership of the brand, messages said to derive directly from the meaning or interpretation given to certain brands or products by the persons exposed to them. Friedmann and Lessig (1987) adapt their notion of "psychological meaning" from existential phenomenological psychology, describing the concept as a mental position, understanding or evaluation of the product that develops in a nonrandom way from interaction between perceiver and product stimulus. Levy (1978) similarly talks about meaning as being learned or stimulated by the experiences that people have with the product. Reynolds and Gutman (1984) have defined product imagery in terms of the stored meanings that an individual holds in memory, suggesting that what is called up from memory provides the meaning we attribute most basically to image. Still others have talked about a product as having "personal" and "social" meanings, but have provided no general framework to explain how these are derived or what they imply. The personification of a brand and its image with human characteristics, a practice that became especially popular in the 1980s, has been approached from two distinct perspectives.
The first involves describing the product as if it were a human being, suggesting that the brand has a distinct personality of its own; such "personalities" as the Kodak Kolorkins and Betty Crocker are of this ilk. The second focuses on associating the consumer's personality or self-concept with the image of the product or brand, exemplified by the "express your individuality" appeal of Calvin Klein jeans, or the fragrance industry's association of perfume use with fulfilled dreams, fantasies and aspirations. Relating brand image to personality is intuitively appealing on many grounds. Both are multidimensional, and both appear to operate at the same level of abstraction. Personality has been said by some (Kassarjian and Sheffet, 1975) to be best conceived of as a dynamic whole, which is consistent with the general sense that many have about brand image. Many consumer theorists hold that purchase behavior is determined by the interaction of the buyer's self-concept with the "product's personality", and in this context the definitional relationship is also especially apt. Associating brand image with personality is not, however, without difficulty. Most notably, the struggle that psychologists have had with the definition and measurement of personality likewise becomes a problem for those interested in studying brand image. It is therefore not surprising that those who define brand image by reference to personality do not attempt to define the latter concept in any detailed way. They simply suggest that products have personality images, or they focus on some distinctly human descriptor, such as "gender" image (Debevec and Iyer, 1986), "age" image (Bettinger and Dawson, 1979), or "social caste" image (Levy, 1958).


Such unity as exists within this group of definitions stems from the fact that each stresses a cognitive or mental process by which brand image is said to be triggered. They concentrate on mental effects, naming any one of "ideas", "feelings", "attitudes", "mental constructs", "understandings" or "expectations" as the cardinal determinant of brand image. While most of these definitions may not be directly traceable to Gardner and Levy's initial conceptualization of brand image as "a consumer's feelings, attitudes and ideas towards a brand", most seem to have been influenced by it. The reference to "feelings" suggests a link between product and emotions (Reynolds and Gutman, 1984), a particularly germane connection in circumstances where consumers have difficulty obtaining objective measures of product attributes, or where product clones cannot be differentiated or positioned on the basis of a distinct benefit appeal. In either case, the emotional appeal of the product is likely to play a significant role. Conceptualizing brand image as an "attitude" provides an orientation that is more amenable to measurement and evaluation. Perhaps because of the attitude measurement techniques that have been developed, definitions which employ this approach tend to restrict image to a set of product characteristics (Reynolds and Gutman, 1984). While such definitions might lead to the conclusion that a product's image can be approximated by the sum of its attribute values, there are surely abundant examples where two brands have the same attribute ratings but different market shares; the "attitudinal" definitions therefore appear deficient in their inability to explain this. The terms "understandings", "expectations" and "mental constructs" are rather vague and non-instructive terms for capturing brand image.
Nonetheless, they do form an interesting group in their restriction to intellectual or reasoned processes and in their contrast with those definitions which incorporate the "feelings" dimension.

The image of countries as origins of products is one of many extrinsic cues, such as price and brand name, that may become part of a product's total image (Eroglu and Machleit, 1989). Past research has demonstrated that consumers tend to regard products made in a given country with consistently positive or negative attitudes (Bilkey and Nes, 1982). These origin biases seem to exist for products in general, for specific products, and for end-users and industrial buyers alike (Bilkey and Nes, 1982; Dzever and Quester, 1999). In addition, origin biases have been found for both developed and less developed countries (Nes and Bilkey, 1993); generally speaking, products from the latter are perceived to be riskier and of lower quality than products made in more developed countries. In a meta-analysis, Liefeld (1993) concluded that country image appears to influence consumer evaluations of product quality, risk, likelihood of purchase, and other mediating variables. He also noted that the nature and strength of origin effects depend on such factors as the product category, the product stimulus employed in the research, respondent demographics, consumer prior knowledge of and experience with the product category, the number of information cues included in the study, and consumer information processing style.
Papadopoulos (1993) posits that the image of an object results from people's perceptions of it and of the phenomena that surround it. Based on studies conducted in eight different countries, Papadopoulos et al. (1988) were among the first to incorporate distinct country image measures in product-country image (PCI) research (in addition to measures of products simply designated as "made in X"), and the first to attempt to model the relationship between country beliefs, product beliefs, familiarity, product evaluation and willingness to buy, using LISREL. Despite the theoretical appeal of this conceptualization, which includes the three components of an attitude, most empirical studies of country image cognitive processing have not considered the multi-dimensionality of country image when operationalizing the construct (Johansson et al., 1985; Han, 1989; Knight and Calantone, 2000). In addition, some of these studies tested paths within a conceptual model individually rather than testing the complete model (Johansson and Nebenzahl, 1986). Further, most measured "country" image through product rather than country measures (Han, 1989), and some focused on affect-oriented country/people measures rather than cognitive ones (Knight and Calantone, 2000). In turn, the Papadopoulos et al. (1988) model, which did not share these weaknesses, was hampered by the absence of well-defined country measures at the time of the research, which resulted in model constructs that were not as well defined as would be possible today. It is recognized generally that consumers, through familiarity with products from different countries, develop country images (Erickson et al., 1984; Eroglu and Machleit, 1989; Etzel and Walker, 1974; Hafhill, 1980; Kaynak and Cavusgil, 1983; Roth and Romeo, 1992). These images constitute sets of beliefs on a variety of dimensions that represent important product attributes (Han, 1989; Han and Terpstra, 1988; Roth and Romeo, 1992). Extensive research
has reported that country images can have considerable impact on consumers' product evaluations (e.g. Bilkey and Nes, 1982; Eroglu and Machleit, 1989; Han, 1989; Han and Terpstra, 1988; Roth and Romeo, 1992; Tse and Gorn, 1993). When consumers are unfamiliar with a product, country image may serve as a "halo" by which they infer product attributes: the halo effect implies that country image directly affects consumer beliefs about product attributes and indirectly affects their overall evaluations of products through these beliefs (Han, 1989). Country image includes stereotypes held about a country's economic and political environment, while ethnic image refers to a country's cultural environment; together they can be viewed as dimensions of national stereotypes. For instance, European/non-European is an ethnic dimension, communist/capitalist is an economic and political environment dimension, and developed/undeveloped is an economic environment dimension (Forgas and O'Driscoll, 1984).
When consumers are more knowledgeable about a country's products, country image may be less important in forming their beliefs about product attributes and their brand attitudes; instead, it serves as an indirect channel affecting product attributes and brand attitudes (Bruning, 1997; Erickson et al., 1984; Han, 1989). Although such stereotypical beliefs are biased, they can play an important role in risk reduction by providing coherence, simplicity and predictability in complex decision making. Consumers tend to develop country images through familiarity with products from different countries (Erickson et al., 1984; Eroglu and Machleit, 1988; Etzel and Walker, 1974; Kaynak and Cavusgil, 1983; Roth and Romeo, 1992), and this may result in a form of stereotype. Research indicates that country images have a considerable impact on consumers' product evaluations (Bilkey and Nes, 1982; Eroglu and Machleit, 1988; Han and Terpstra, 1988; Roth and Romeo, 1992; Tse and Gorn, 1993). Therefore, a positive country image may allow marketers to introduce new products that quickly gain consumer recognition and acceptance (Agarwal and Sikri, 1996). Country image (CI) has generally been conceptualized and operationalized in one of two ways. Some researchers have treated CI as consumers' overall perceptions of, for example, the quality of products made in a given country (Crawford and Garland, 1988; Etzel and Walker, 1974; Han and Terpstra, 1989; Hong and Wyer, 1989; Howard, 1989). Han and Terpstra (1988) investigated CI on the basis of five dimensions (technical advancement, prestige, workmanship, economy and serviceability) for two products (televisions and automobiles) and found the ratings to be consistent across the product categories; as such, they concluded that generalized country-level CIs may exist. Such an approach assumes that CI is a halo
construct (Han, 1989), i.e. consumers are assumed to lack product-level attribute information in memory that can be used in the evaluation process. A second and more common interpretation of CI defines it as a set of generalized beliefs about specific products from a country on a set of attributes (Bilkey and Nes, 1982). Especially in the case of familiar products, consumers have well-developed product-specific knowledge structures in memory. Empirical research has repeatedly demonstrated that consumers hold different sets of beliefs across different product categories, and that attitudes towards products from a given country vary by product category (Bilkey and Nes, 1982; Kaynak and Cavusgil, 1983; Roth and Romeo, 1992). It should be noted, too, that national loyalty is related to country image and plays an important role in explaining consumer product choices. Brand names, store image, specific product attributes, or other non-country-specific cues to product quality may persuade less nationalistic consumers to prefer foreign products to domestic substitutes (Bruning, 1997). Studies of consumer ethnocentrism and national loyalty indicate that attitudes and intentions are affected by one's sense of loyalty to nation and to other macro-oriented groupings (Bruning, 1997). However, at least one study has found that consumers from developing countries prefer products from developed countries and that their perceptions tend to be more stereotyped (Okechuku and Onyemah, 1996); indeed, this is one of the major marketing challenges facing global marketers (Cordell, 1993). Other studies have concluded that consumers hold more negative perceptions of products made in developing countries (Wang and Lamb, 1983), and that the sourcing country (Han and Terpstra, 1988) and country of origin (CO) have a greater effect on consumer evaluations of product quality than the brand name (Nebenzahl and Jaffe, 1996; Tse and Gorn, 1993).
There is some evidence, too, that the CO is sometimes concealed in order to prevent loss of sales (Nebenzahl and Jaffe, 1996). In addition, CO effects have been found to vary according to the nationality of respondents. Koreans have been found to be more prejudiced than Americans against products from less favorable countries (Nebenzahl and Jaffe, 1996): when a favorable CO was presented, little difference in product evaluation was evident between Americans and Koreans but, when an unfavorable CO was presented, Koreans tended to evaluate the product more negatively than Americans. Bos (1994) reports that Mexicans were obsessed with US and Japanese goods, and Jaffe and Martinez (1995) reached similar conclusions. Papadopoulos et al. (1989) found a positive correlation between people's views of their nation and its products. Hence, consumers tend to prefer domestic products when their national loyalty is strong.

It is our contention that while country image affects product evaluations, its very structure, that is, the relative importance attached to its cognitive, affective, and conative components, has a significant impact on the extent of its influence on product evaluations. Consequently, the first objective of our study is to empirically confirm the three-dimensional structure of the country image construct. Consistent with Papadopoulos et al. (1988, 1990), we define country beliefs as consumers' beliefs about the country's industrial development and technological advancement. The concept of people affect refers to consumers' affective responses (e.g. liking) to the country's people. Finally, the concept of desired interaction reflects consumers' willingness to build close economic ties with the target country.

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches


Perception of brand image changes as production is sourced multinationally. In spite of the fact that multinational corporations produce and assemble products bearing identical brand names (e.g. IBM, Pierre Cardin, General Foods, Henkel, Vicks, Black & Decker) in both developed and developing countries, little research has been carried out to measure the effect of host country location on brand image. The lack of research in this regard is all the more surprising given findings that consumer evaluation of products is influenced by a country's stage of development, i.e. consumers hold more negative perceptions of products made in developing countries (Wang and Lamb, 1983), and that the sourcing country (Han and Terpstra, 1988) and country of origin (Tse and Gorn, 1993) have greater effects on consumer evaluations of product quality than does brand name. Moreover, research on brand image and brand equity (Aaker and Keller, 1990; Keller, 1993; Park et al., 1991) has shown that brand image strategies should be determined before other elements of the marketing mix.

In studies conducted during the Cold War, Chasin and Jaffe (1979; 1983) found that US purchasing agents perceived industrial products made in the USSR, Hungary and Poland to be significantly inferior to those made in the USA. In a study of Canadian consumers (Kaynak and Cavusgil, 1983), the quality of electronics and household goods made in Czechoslovakia was perceived as significantly lower than that of similar products made in the USA, Japan, Spain and Argentina. Among Finnish consumers (Darling, 1981), products made in the USSR were ranked last in performance as compared to Finnish (ranked first), US (fifth) and Japanese (sixth) products. In a study of Italian and Dutch consumers (Morello, 1984), consumer products made in the USSR were believed to be inferior to those of most western European countries and the USA.
COO effects refer to the extent to which the origin of a particular product influences its evaluation. Previous research has demonstrated that consumers evaluate the assortment of products originating from a given country with a consistent positive or negative attitude. For example, Bilkey and Nes (1982) reviewed 25 studies, each of which indicated that an origin country's image does affect product evaluation. The positive impact of COO on product attitude was found across a variety of subjects (e.g. students vs. non-students, end-consumers vs. industrial buyers), products (e.g. general products vs. specific products, goods vs. services), and methodological settings (e.g. single vs. multiple cue studies, experiments vs. surveys). Liefeld (1993) did a meta-analysis of 22 experimental investigations of country of origin cue effects on consumer judgments and choice. In all but two of the experiments he reviewed, COO was found to be statistically related to consumer product evaluations or choices over a wide variety of products ranging from cars and personal computers to glassware and fruit juice. Further, Herche (1994), building on Shimp and Sharma's (1987) work on ethnocentrism, found that perceptions of the morality of buying imports among the members of a particular market had a much greater influence on the decision to purchase goods from overseas than did a marketing strategy focused on, for example, lower prices or intensive distribution. Finally, Peterson and Jolibert (1995) did a meta-analysis of survey research in the COO field and reported that, as a cue, product origin differentially influences attitudes and purchase intentions.

Brand image refers to the understanding consumers derive from the sum total of brand-related activities engaged in by the firm (Park et al., 1986). It is the set of all associations linked in the consumer's memory to a brand (Aaker, 1991).
Since country of origin is a factor linked to brand image, brands from countries with a more positive image have a better chance to establish a more positive image than those from countries with less positive image.
Besides, brands enjoying a more positive image are generally from countries with a more positive image. Therefore, it is safe to say that a more positive country image is almost a natural requirement for a brand to enjoy a positive image. In other words, most consumers will assume that brands with a more positive image are from countries with a more positive image. However, if consumers are presented with a positive image brand whose company is located in a country with a more positive image but which is actually made in a country with a less positive image, consumers may feel a strong category mismatch, i.e. a strong schema incongruity about the brand. According to categorization theory, this schema incongruity may sharply decrease the evaluation of the binational brand (Fiske, 1982). Of course, brands with a less positive image can originate from countries with either a more positive or a less positive image. Unlike a more positive image brand, COO may not be considered an important evaluative factor (or attribute) for a less positive image brand, for two reasons. First, COO may not be a significant factor in the evaluation of a less positive image brand originating from a more positive image country; the ineffectiveness of COO information in consumers' evaluation of such brands is self-evident, since, though manufactured in countries with a more positive image, these brands still receive less positive evaluations. Second, less positive image brands originating from less positive image countries may have some chance to improve their image by changing the country of manufacture. However, the effect of such a change in manufacturing country is likely to be low and insignificant.

Brand image is like reputation: quite easy to demolish, but very difficult to build. Development of brand image is a long-term effort (Park et al., 1986).
To spruce up their overall image, less positive image brands have to improve various aspects of the product; a change in the manufacturing country alone is unlikely to have any meaningful impact on consumer minds. In conclusion, therefore, the less positive image brand's schema incongruity resulting from changes in COO information may not be severe; thus, evaluation of the binational brand is unlikely to change to any significant extent.

There is controversy about which has the stronger effect on consumer evaluation of binational brands: brand image or country image? According to previous studies (Han and Terpstra, 1988; Wall et al., 1991), COO has greater effects on consumer evaluation of product quality than does the brand name. However, while many researchers have doubted whether the COO effect may be generalized (Bilkey and Nes, 1982; Peterson and Jolibert, 1995), the role of brand image in consumer evaluation of products has received strong support (Aaker, 1991; Keller, 1993; Park et al., 1986). In addition, the brand name, the default stand-in for brand image, is a much more salient attribute than a COO label.

Although country familiarity is certainly one type of familiarity, it is different from product familiarity or brand familiarity. While brand familiarity or product familiarity is related to intrinsic attributes of a product, country familiarity is not. Rather, it is an extrinsic characteristic related to the level of categorical knowledge of COO information. Consequently, relationships between country familiarity and COO can be regarded as relationships between the level of familiarity (or knowledge) with a category and a categorical cue. Linville (1982) suggests a guideline that could help us understand the relationships between country familiarity and COO information: the less familiar a person is with a given category, the more extreme that person's evaluation of stimuli from the category. In other words, consumers
with lower country familiarity are likely to be more extreme in their evaluations of COO effect on a binational brand than are consumers with higher country familiarity.

CO is a multi-dimensional construct that evokes a wide range of cognitive responses (Han and Terpstra, 1988; Nebenzahl and Jaffe, 1996; Hong and Yi, 1992; Lim and Darley, 1997). It can be separated into two discrete components. The first is informational; CO provides cues to consumers regarding the quality, dependability, and value for money of the product when more specific information is not readily available (Han and Terpstra, 1988; Hong and Wyer, 1989). The second component of the CO cue relates directly to one's group affiliation, i.e. national loyalty, and reinforces one's sense of national identity (Bruning, 1997). Consumers' perceptions of perceived risk, together with perceptions of quality and value for money, are important because they affect the consumer's choice of product. In this respect, CO may be perceived as a risk property (Cordell, 1993); consumers may perceive more risk in purchasing products from countries with a poor image, or they may seek to enhance their status by purchasing products from countries with a positive image. Thorelli et al. (1988) found that product evaluation and willingness to purchase are inversely related to the amount of perceived risk, which results from several factors: uncertainty, potential adverse consequences, probability of loss, and cost of a loss. In addition, Alden (1993) concluded that a country associated with heightened risk appears to cause consumers to spend more time evaluating the product's performance before forming a judgment. Thus, a high-risk country has a strong negative effect on a consumer's attitudes and evaluations, both before and after a trial that gives the consumer some experience of the product.
Like a brand name, country of origin is an image or extrinsic variable which works as a summary statistic in consumer decision making (Erickson et al., 1984; Han, 1989; Huber and McCann, 1982; Johansson, 1989). As such, COO can be utilized as a proxy for judging quality when other information about the product is lacking. From the categorization theory perspective, a country name serves as a categorical cue for consumer information processing. Upon seeing a country of origin label on a binational brand, consumers are likely to draw an affective judgment associated with the country name. If the country name is associated with a positive image, attitudes toward the binational brand are likely to be positive. On the contrary, if it is associated with a negative image, negative attitudes are likely to result.

With the increasing trend among firms to out-source the production of their products to low-cost destinations, leading to what are now known as hybrid products, later studies on the effects of country-of-origin have started to differentiate between country-of-manufacture and country-of-design effects. These studies have, in general, found that the country in which the product is designed and the country in which it is produced both affect consumers' evaluation of hybrid products. Examples of such studies include Johansson and Nebenzahl (1986), Han and Terpstra (1988), Obermiller and Spangenberg (1989), Ozsomer and Cavusgil (1991), Chao (1993), and Tse and Gorn (1993). Specifically, consumers were found to perceive products manufactured in a less reputable country to be of lesser quality and those manufactured in a more reputable country to be of higher quality. The promotion of such brands means either emphasizing the COO, as has been the case with "American Jeans", "Marlboro" cigarettes, "Italian pasta", and French perfumes such as "Chanel", or alternately ignoring the COO, depending on the perception of consumers in the
foreign country market. Numerous firms have used positive associations with the COO to good advantage in the marketing of goods (Papadopoulos et al., 1993), as for example the favorable association of Germany with beer, Sweden with cars, and Japan with microelectronics. However, if the COO stereotype is negative, it can pose formidable barriers for marketers attempting to position their goods within a foreign market (Johansson et al., 1994). In yet other cases, there are product categories not distinctively associated with any COO image, as in the case of the car industry, where it has been less easy to market global brands such as "Mercedes", "Audi", "Toyota" and "Jaguar" on the basis of origin: their brand images have developed quite apart from their COO, and they do not use national COO associations in their promotion and marketing strategies.

The gradual trend towards liberalization of the Indian economy during the past decade has served as a major factor in its progressive shift towards a global economy and the entry of foreign brands from Europe and the US into this market. In fact, the 1990s and beyond have been characterized by major structural changes in the evolution of the Indian consumer market, including increased competition, product availability in terms of both quality and quantity, as well as increased levels of awareness and propensity to consume. A large and rapidly growing urban middle and upper class consumer market of 300 million, approximately one-third of the present population, constitutes the market for branded consumer goods, with the latter estimated to be growing at 8 percent per annum. In fact, demand for several consumer products has been growing at 12 percent per annum (India Market Demographics Report, 2002).
While rising incomes and shifts in consumer tastes and preferences are evolving predictably as a trend, Indian consumers are faced with increasingly complex sets of choices across all categories of consumption (Business World, 2003). A host of foreign branded goods are now freely used as well as easily available. A related trend contributing to these shifts has been the reduction in trade barriers due to trade agreements, and the globalizing influence of institutions such as the WTO, with the result that the Indian economy, the third largest in Asia, is expected to grow at 7 percent in the year ending March 2006 (CMIE, 2004). The concomitant decrease in import duties on goods has paved the way for the entry of companies from, among others, Germany, France, the US, Korea, Japan, and China. The main strategies of these companies have included strategic alliances with domestic Indian companies to provide Indian consumers with greater variety in foreign branded goods.

Thus, foreign companies like Hyundai (S. Korea), Daewoo (S. Korea), General Motors (US) and Ford (US) are prevalent in the Indian car market, while TV brands range from Sony (Japan), Samsung (S. Korea) and LG (S. Korea) to local brands such as "Videocon" (India) and "BPL" (India). Similarly, the washing machine market is dominated by "Whirlpool" (US), and the refrigerator market by brands such as "LG" (S. Korea), "Godrej" (India) and "Voltas" (Indian conglomerate). In the consumer non-durables product categories such as chocolates, tea, coffee, and detergents, the competition from foreign brands has been relatively less, and there is a predominance of local products as in the case of tea (Taj Mahal), coffee (Tatacafe), detergents (Nirma), toothpastes (Babool, Dabur, Vicco along with Close-up, Pepsodent), and ice creams
(Vadilal, Kwality). Foreign brands in the Indian market have thus begun to compete for both market share and the psychological mind share of consumers, even while widening the range of brands under consideration. The Indian market now holds a position of strategic eminence, with several foreign retail chains setting up their stores. Opportunities in the retailing sector are seen to be increasing with the emergence of fast food chains like McDonalds, Dominos and Wendy's, department stores, electronics goods stores, and exclusive retail outlets like those of Nike and Adidas. While shopping malls have made themselves conspicuous in the metros, smaller cities and non-metros with populations of up to 100,000 are also following suit (Images Retail, 2004), with growing discretionary consumer incomes.

Although COO image measurements in Asian country markets have been attempted (Pereira et al., 2005), there is a lack of concrete information on consumer attitudes, preferences and marketplace behavior with regard to foreign brand names in these markets. The present research was, therefore, conducted to obtain a better understanding of attitudes and preferences of Indian consumers with regard to foreign versus local brands, and to evaluate their relative strengths in terms of differentiating features. In addition, the extent to which consumer ethnocentrism accounts for a positive bias toward local domestic brands and a negative bias against products originating from foreign countries was also examined, to identify whether there was any strong domestic country bias in the choice of brands. These findings would be of interest to companies in the formulation of foreign marketing strategies by offering a better understanding of how foreign brands are likely to be perceived in relation to domestic products and those originating from other competing countries in this market.
Originally, the concept of country-of-origin (COO) was considered to be the made-in country (Nebenzahl et al., 1997), or the country-of-manufacture (COM; Samiee, 1994), that is, the country which appeared on the "made-in" label, generally the country where final assembly of the good took place. Other concepts have progressively emerged in the COO literature, such as the country-of-design (COD), or designed-in country (DCI; Nebenzahl et al., 1997), which is the country where the product was designed and developed. With multinational production, there is a growing discrepancy between COMs and CODs. Moreover, global companies tend to manipulate brand names to suggest particular origins (country-of-brand, COB, effects). Thus country-of-origin tends more and more to be considered as the country which consumers typically associate with a product or brand, irrespective of where it is actually manufactured. Country image as such (CI) may also have a certain influence on consumer evaluation. For instance, a poor country image in terms of democracy may backfire on the image of goods made in that particular country (Martin and Eroglu, 1993).

In seeking to make efficient choices, consumers engage in internal and external information searches. While information search behavior varies among individuals, it typically precedes the formation of brand preferences, due to the presence of imperfect market information, resulting in search and opportunity costs incurred by consumers. For example, there is an opportunity loss if another brand within the same product class has similar quality standards but is priced lower than that purchased (Maronick, 1995). Country-of-origin (CO) effects, too, are of particular interest to international marketing researchers because of their impact on the product evaluations that help to influence customers' purchase decisions. We often hear comments like "Japanese cars are more reliable" and "Germany leads the world in engineering technology".
Hence, we can infer that CO perceptions help to form overall attitudes
on certain product attributes and also have some impact on customers' evaluation of a product's performance (Han, 1989; Bruning, 1997). Bruning (1997) suggests that CO is a cue that consumers use to make inferences about products and product attributes; CO has a direct influence on product attributes, which in turn affect product evaluations, and the CO effect may also result in perceptions of the general quality of products from a particular country.

Results of studies in the consumer information processing area indicate that familiarity influences affect (Alba and Marmorstein, 1987; Brucks, 1985). Specifically, many researchers in the COO area have been interested in the role of familiarity in the context of the COO effect (Erickson et al., 1984; Johansson et al., 1984; Johansson, 1989); however, the evidence is mixed: consumers with high familiarity have been found to make more use of COO (Johansson et al., 1985; Johansson and Nebenzahl, 1986). Although the reasoning of Erickson et al. (1984) and Johansson (1989) has a certain persuasive quality, a more logical and appealing account of the role of familiarity can be found in the paper by Rao and Monroe (1988). Although their study is about price and not COO, both are extrinsic cues (product attributes which are not a part of the physical product bundle) and hence their role in consumer evaluation of products should be quite similar. Rao and Monroe (1988, p. 255) report that unfamiliar or low-familiarity consumers will be more likely to use extrinsic cues in product quality assessments, because they have relatively little intrinsic product information (product attributes derived from the actual physical product, such as product size or function) in memory and a less developed schema, making processing intrinsic information more difficult.
However, as consumers become more familiar with the product, their ability to assess product quality based on their knowledge of intrinsic attributes that are informative about quality improves. Thus, as consumers achieve a moderate level of familiarity, their better knowledge structure increases their ability to examine intrinsic information successfully; consequently, the relative reliance of moderately familiar consumers on extrinsic cues ... to evaluate product quality will decrease in favor of using intrinsic cues. As consumers achieve a high degree of familiarity with the product, they continue to be able to assess product quality through an examination of intrinsic cues. However, highly familiar consumers' knowledge of market-based information about the product class allows them to relate extrinsic information to product quality. As consumers achieve relatively higher familiarity, the ability to relate intrinsic cues to quality is augmented by the ability to relate surrogates to product attributes, and thus to quality.

How important is CO to consumers when evaluating products, especially when other cues are available? The CO cue has been found to explain a relatively small percentage of the variance of perceived quality, attitude and purchase intention, suggesting that its theoretical and practical importance is low. Erickson et al. (1984) found that CO had a significant effect only on specific product attributes but was not strong enough to influence the consumer's overall attitude toward the product, and hence the consumer's purchase decision. Johansson (1989) stated that important purchases are not greatly influenced by CO. He concluded that, as consumers become more familiar with the product, CO effects are diminished in their importance as cues. Besides, the presence of other cues, such as brand name, product warranty, or a prestigious retailer, can even compensate for a perceived negative CO cue (Thorelli et al., 1988).
Nevertheless, Johansson and Nebenzahl (1986) found a strong positive correlation between
self-assessed "knowledge about product class" and CO. Hence, it may be that, as consumers have more prior knowledge of the CO and product information, they feel more confident in using CO as a cue.

Most studies have used only a single cue (CO), as opposed to multiple cues, as the information on which respondents based their evaluations. This not only creates validity problems but also prohibits an assessment of how much influence the CO has in the presence of other product cues. Bilkey and Nes (1982) argued that a single cue might yield a false significant cue effect, which led other researchers, such as Erickson et al. (1984), to suggest a multi-cue approach to investigating the impact of CO on product evaluation. Eroglu and Machleit (1988) argued that CO is only one of many product cues that consumers use to evaluate product quality; it is the perceived importance of this cue vis-à-vis the others that determines the effect on quality perception. They also stated that, given strong brand-name effects, future research should address how CO interacts with brand name. Also, most previous CO studies have asked respondents to evaluate imaginary products in order to avoid brand bias and/or to disguise the purpose of the study. If studies could use actual products and let respondents experiment with or test them, the results might be entirely different. Further, permitting respondents to understand the purpose of the research might avoid overstated, understated or spurious results (Bilkey and Nes, 1982).

As a result of the rapid globalization of sourcing and production, the definition of CO has become blurred and confused in recent years. For instance, a firm headquartered in the USA might source manufacturing components world-wide, perform sub-assembly in Southeast Asia, and then assemble the final product in Mexico for sale in the USA and other markets.
Does CO refer simply to the locale of the manufacturer's headquarters, or to more substantive indicators such as where a product is assembled or from where the majority of the parts originate? How CO is conceptualized and operationalized may significantly alter the findings of empirical studies; hence, researchers are advised to include and compare several configurations of CO variables (Ettenson and Gaeth, 1991).

Nagashima (1970) proposed that consumers' views of CO can be equated with attitudes towards product evaluations. For example, Maronick (1995) studied US respondents' attitudes towards the phrase "Made in the USA" and found that, for higher-priced products, the "Made in" claim is likely to lead to better ratings, suggesting a direct correlation between CO perceptions and product evaluation. There are differing views as to how CO influences product evaluations (Johansson, 1989). One view is that CO influences beliefs about specific product attributes that, in turn, affect the overall product evaluation; CO perceptions activate concepts about the country and the general quality of products manufactured there, and these concepts may have a general positive or negative effect on the interpretation of other available product attribute information (Hong and Wyer, 1989). In such a case, CO affects overall evaluations indirectly, although it impacts perceptions of specific product attributes. Another view is that CO may act as a salient attribute that evokes affect or stereotypes associated with the producer country, directly influencing overall product evaluations (Lillis and Narayana, 1974; Nagashima, 1970; Reierson, 1967). CO therefore triggers a global evaluation of quality, performance and specific product/service attributes.

A "cue" is a characteristic or dimension external to a person that can be encoded and used to categorize a stimulus (Schellinck, 1983). People use cues when forming beliefs about objects,

176

Key Drivers of Organizational Excellence

which in turn influence their behavior with respect to those objects. There are two types of cue, which can be described in terms of the extent to which they are intrinsically a part of the physical product (e.g. taste, weight) or extrinsic to the product (e.g. price, brand) (Jacoby et al., 1977). Consumers use extrinsic cues when intrinsic cues are missing or are hard to evaluate; hence, these intangible extrinsic cues are useful to consumers in forming product evaluations. Some examples of extrinsic cues are guarantees, warranties, brand reputation, seller reputation, and promotional messages (Yong, 1996), as well as perceptions of the country's image. Consumers may perceive more risk in purchasing products made in countries with weaker images, and studies have shown that perceived risk is inversely related to purchase intention and product evaluation. Extrinsic cues are important to consumers in reducing this risk when they are unsure of the intrinsic cues of the product (Lim and Darley, 1997; Thorelli et al., 1988).

A subculture has been defined as a subdivision of a national culture, composed of a combination of social situations such as ethnic background, race, language, regional, rural, or urban residence, religious affiliation, and/or class status, that together form a functional unity which has an integrated impact on the participating individual (Hawkins et al., 1995; Lenartowicz and Roth, 2001). The distinct shared history and values of a sub-cultural group may influence their consuming patterns and behavior. Only a small handful of studies have considered COO differences among consumers within a country as distinct from views that might be held at the national level. Among them, Klein et al.
(1998) reported that although Japan is often seen as a high-quality producer in the People's Republic of China, Chinese consumers in Nanjing (the site of atrocities during the Japanese occupation) may not purchase Japanese products because of hostility toward that country. In their animosity model related to foreign product purchase, they suggested that culture-specific factors influence the weight given to the country of origin in product evaluations.

In a related stream of research, Shimp and Sharma (1987) introduced consumer ethnocentrism as a construct that represents beliefs held by American consumers about the appropriateness of purchasing foreign-made products. In four independent studies, they uncovered different levels of ethnocentric predispositions toward foreign-made products in different regions of the United States, further supporting the notion of differences within countries. However, the studies by Klein et al. (1998) and Shimp and Sharma (1987) were framed to focus on regional rather than cultural differences as such. On the other hand, in what is probably the only systematic study to date of subcultural influences in the country of origin context, Heslop et al. (1998) researched English and French Canadians' attitudes toward products from ethnically affiliated origins and had somewhat mixed findings. Their hypotheses for a preference for British goods among English Canadians, and for products from the respondents' respective home provinces, were confirmed, but those concerning a preference for French products by French Canadians, and for developing countries linked to each of the two groups, were not.

Research on the Product-Country Image (PCI) issue (also known as country of origin [COO]) began about 40 years ago and has grown rapidly to become one of the most important fields in international marketing and business theory, with well over 700 published studies to date (Papadopoulos and Heslop, 2002).
This substantial literature reflects the pervasive presence of origin cues in society and the economy, public policy, and business decision-making.

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches


The body of work on PCI has made significant theoretical and practical contributions (Jaffe and Nebenzahl, 2001), but this field shares a limitation which is common in international marketing research overall (Heslop et al. 1998): many studies that are described as cross-cultural are in fact cross-national. As argued quite reasonably by Lenartowicz and Roth (2001), while a variety of dimensions have been used to reflect culture, the cultural grouping or unit of analysis typically has been identified by national geopolitical boundaries. While sub-cultural differences are taken into account by practitioners in some cases and have been addressed by researchers in the domestic marketing context, especially in relation to immigrant and racially distinct groups (e.g., Mehta and Belk, 1991; Donthu and Cherian, 1994), this has not been the case from the international perspective, which has tended to focus on the cross-national divide. Thus, as in international marketing in general, both single-culture and cross-culture studies on PCI effects have implicitly assumed that homogeneous consumer groups exist within the nations studied, "which is a fallacy" (Padmanabhan, 1988). By ignoring cultural heterogeneities within nations, international marketers may overlook subculture-based opportunities and threats. Sub-cultural biases in preferences might lead consumers to favor products from ethnically-affiliated countries, especially if there are intranational variations in culture. Therefore, the development of effective international marketing strategies that are sensitive to sub-cultural differences within a country would be of considerable importance for success in the marketplace. The objective of this study is to explore sub-cultural similarity-based biases in PCI effects. PCI effects refer to the extent to which the origin of a particular product influences its evaluation.
Previous research has demonstrated that consumers evaluate the assortment of products originating from a given country with a consistent positive or negative attitude. For example, Bilkey and Nes (1982) reviewed 25 studies and each one indicated that an origin country's image does affect product evaluation. The positive impact of PCI on product attitude was found across a variety of subjects (e.g., students vs. non-students, end-consumers vs. industrial buyers), products (e.g., general products vs. specific products, goods vs. services), and methodological settings (e.g., single vs. multiple cue studies, experiments vs. surveys). Liefeld (1993) did a meta-analysis of 22 experimental investigations of country of origin cue effects on consumer judgments and choice. In all but two of the experiments he reviewed, PCI was found to be statistically related to consumer product evaluations or choices over a wide variety of products ranging from cars and personal computers to glassware and fruit juice. Further, Herche (1994), building on Shimp and Sharma's (1987) work on ethnocentrism, found that perceptions of the morality of buying imports among the members of a particular market had a much greater influence on the decision to purchase goods from overseas than did a marketing strategy focused on, for example, lower prices or intensive distribution. Finally, Peterson and Jolibert (1995) did a meta-analysis of survey research in the PCI field and reported that, as a cue, product origin differentially influences attitudes and purchase intentions. Since COO was considered an important differentiating factor in consumer attitudes to foreign and local brand names, its effect was further examined for the consumer durables category by performing a Pearson's correlation between the product attribute ratings of "technology", "quality", "status and esteem" associated with the product, "value for money", and "credibility of country-of-origin" of the product.


Key Drivers of Organizational Excellence

In this context, research on Eastern European countries, namely on Russian (Johansson et al., 1994) and Hungarian consumers (Papadopoulos and Heslop, 1993), has found that consumers prefer Western products because of superior quality, despite consumer ethnocentric tendencies. Eroglu and Machleit (1989), too, have found that a product's technical complexity, as is the case with consumer durables, affects the importance given to consumer evaluations and that the more complex the product, the more relevant the COO cue. The fact that product category and product COO interact with each other has also been pointed out in related research (Roth and Romeo, 1992), where it has been argued that the country of manufacture effect may be stronger for products with high to intermediate levels of technical sophistication. While national reputations for products vary from country to country, consumers tend to generalize their attitudes and opinions across products from a given country, based on their familiarity and background with the country, and their own personal experiences of product attributes such as "technological superiority", "product quality", "design", "value for money", "status and esteem", and "credibility of country-of-origin" of a brand. Favorable country perceptions are known to lead to favorable perceptions of associated attributes such as product quality, indicating thereby that consumer evaluations are governed by influences other than the quality of the product (Peterson and Jolibert, 1995). In this context, COO effect refers to the extent to which the place of manufacture influences consumers' product evaluations. COO has, furthermore, been used as a foremost and primary cue by consumers in evaluating new products under several conditions, depending on their expertise (Maheswaran, 1994), with minimal consideration given to other product related attributes.
As a primary cue, therefore, it has been found to reflect consumers' general perceptions about the quality of products made in a foreign country, along with the nature of people from that country (Iyer and Kalita, 1997). It has also been demonstrated that COO, when known to consumers, influences their evaluation not merely of generic product categories, but also of specific brands (Bilkey and Nes, 1982; Johansson and Nebenzahl, 1986). It has, however, been observed that COO effects predominate only when consumers are able to elaborate on them before evaluations (Hong and Wyer, 1989). Insights to this effect have also been provided by Baughn and Yaprak (1993). They have suggested that culture-specific factors influence the weight given by consumers to the COO as an attribute in their evaluations of foreign brand names. In this context, Papadopoulos et al. (1993) have suggested that consumer perceptions of a product's COO are based on three components associated with the standard attitude model, namely their "cognitions", which include knowledge about specific products and brands; consumer "affect", or favorable/unfavorable attitude towards the COO; and their "conative" behavior, which is related to actual purchase of a foreign brand. Sometimes, the "affect" or emotional component may be given overriding predominance by consumers and overshadow the "cognitive" or rational component in evaluation of a foreign or local brand name. Here, it has also been found that, under ceteris paribus conditions, consumer ethnocentric motivations result in favoring domestic brands, though not in situations where foreign brands are regarded as better. In fact, higher levels of domestic country bias have been found in research on Western consumers, where domestic products were found to enjoy a generally more favorable evaluation than foreign made products (Bannister and Saunders, 1978; Cattin et al., 1982).
Subsequently, Balabanis and Diamantopoulos (2004) examined eight product categories with regard to consumer preferences for domestic versus foreign brands and found that ethnocentrism was also dependent, to a large extent, on the nature of the product category.


Owing to changes in the global strategic environment, product country association is, however, no longer just a single country phenomenon, and several products and brands are now emerging as a result of multi-firm and multi-country efforts. With the hybridization of country of manufacture, design, assembly and brand name, it is becoming more difficult for consumers to pinpoint a particular country with which a product can be associated. In light of this, the COO paradigm has undergone several shifts, so that the brand name, as well as country-of-origin of brand (COB), is taking on a relevance of its own. It is in this context that the present study focused on the issue of comparing preferences for foreign versus local brand names in the Indian market. Consumers' attitudes toward products emanating from different countries are influenced by country-of-origin cues (Schooler 1965); country image is used by consumers in their information processing (Hong and Wyer 1989) and so has a significant impact upon consumers' purchasing intention and decision making; country of origin effects vary across product categories (Kaynak and Cavusgil 1983); products manufactured in developed countries enjoy an image advantage over those made in developing countries (Lumpkin and Crawford 1985); and some linkage exists between country image and brand image effects in the marketplace (Nebenzahl and Jaffe 1996). In short, evidence supports the view that an image phenomenon does exist in the marketplace and has a significant impact on consumers' buying behavior. Moreover, how all of these variables are affected by image differences between a developed country like the United States and a small developed country like Australia is not known. Global alliances and foreign sourcing trends among companies in different nations have resulted in the manufacture of many famous brands outside the country that originally manufactured the brand.
Among well-known US brands, those of GE and Kenmore are often manufactured in Mexico, while some Japanese brands such as Honda and Toyota have a substantial manufacturing presence in the US. Even lesser known Korean brands such as Goldstar and Hyundai have manufacturing plants in the US and Canada, respectively. The proliferation of such bi-national brands in the global market has focused attention on the roles played by brand image (BI) and country of origin (COO) in consumer evaluations of these brands. This has become a major concern for many multinational companies (Ettenson and Gaeth, 1991). Although several studies have explored the area by employing brand names in their research design, the relative impact of brand image and country of origin remains unclear (Bilkey and Nes, 1982; Ettenson and Gaeth, 1991). While some researchers (Han and Terpstra, 1988; Wall et al., 1991) reported that the COO effect is stronger than brand image, a study by Cordell (1992) found mixed results, with the effect of brand name becoming stronger or weaker than COO, depending on product type. Besides the ambiguous findings, these studies generally suffer from weak manipulation problems. Han and Terpstra (1988) simply measured brand image, without any manipulation. Cordell (1992) and Wall et al. (1991) carried out manipulations, but of brand familiarity and not brand image. Another issue in the COO research relates to the definition of country image. Researchers in COO studies have typically defined country image at the product class level (Chao, 1993; Cordell, 1992; Han, 1989; Han and Terpstra, 1988; Nagashima, 1970, 1977;


Roth and Romeo, 1992; Wall et al., 1991). For example, Roth and Romeo (1992) defined country image as the consumers' overall view of products from a particular country, based primarily on their prior perceptions of that country's strengths and weaknesses in production and marketing. However, country image need not and should not be confined exclusively to the domain of a specific product class. Rather, it can be viewed at two different levels: one representing product class image and the other representing country image in general or overall terms. Partitioning of country image is not a new concept. According to Bannister and Saunders (1978), country image (CMOI in our terminology) is created not only by products but also by other variables like economic and political status, historical events, relationships, traditions, industrialization, the degree of technological advancement, representative products and so on. Parameswaran and Yaprak (1987) developed scales measuring general country attitudes (GCA), general product attitudes (GPA), and specific product attributes (SPA). Head (1988), Bannister and Saunders (1978), Hooley et al. (1988) and Lawrence et al. (1992) also distinguish between general country image and product specific image. We believe that the effect of COO on consumer evaluation of binational products can be better understood through the division of COO into the two sub-constructs of CMOI and CMPI. Consumers can have a positive image about a country's products while they have a negative overall image about that country. For example, most Americans may evaluate Iranian or Iraqi carpets highly but not those countries. Also, it is possible that many Israelis may have positive evaluations about German automobiles but not about Germany itself. Despite possible differences between the two sub-constructs of country image, their separate influences on consumer evaluation of binational products have never been investigated.
Most previous studies have focused on CMPI. One probable reason for this is that CMPI has a more direct link to consumer evaluation of binational products. While there have been studies investigating antecedents of CMPI, even these have focused not on an overall country image, but on specific elements of an overall country image such as economic development and political maturity (Wang and Lamb, 1980; 1983). One additional issue in the COO research area deals with familiarity (or knowledge). While there are several types of familiarity seemingly related to COO research, their role in determining the COO effect is unclear. Johansson (1989) argued that, intuitively, the effect of COO on brand image would be strongest for people with little or no product familiarity. However, results on the effect of product familiarity showed positive interaction between product familiarity and COO effect, with the latter becoming stronger with an increase in the former (Johansson et al., 1985; Johansson and Nebenzahl, 1986). In another study employing brand familiarity (famous versus obscure brands), Cordell (1992) hypothesized that the decline in evaluation associated with less developed countries is lower with well-known brand names than with unfamiliar brand names. But results showed that while consumer evaluation of a wristwatch changed as hypothesized (i.e. ratings decreased more for an unfamiliar brand than a familiar one), consumer evaluation of shoes exhibited just the opposite pattern, i.e. ratings decreased more for a familiar brand! Besides effects of product or brand familiarity, country familiarity may play a role in COO research. Depending on the familiarity of the country, use of COO information could vary. Despite such a possibility, role of country familiarity has been neither questioned nor investigated. This study utilizes the little used categorization theory to examine the separate effects of CMPI and CMOI on brand image and familiarity.


Categorization is an essential part of consumer information processing. Consumers have to deal with categorical cues like brand name and country of manufacture when they form attitudes toward binational brands. Hence, the roles of country image, brand image and familiarity in consumer evaluation of binational brands can be explained with categorization theory, which also provides the theoretical background for this study. According to the theory, attitudes toward a stimulus are directly related to attitudes associated with the activated category (Alba and Hutchinson, 1987; Fiske, 1982; Gilovich, 1981; Kahneman and Tversky, 1972, 1973; Read, 1983; Sujan, 1985). On encountering a brand name associated with a positive image, consumers are likely to infer positive attitudes toward the product bearing the brand name. Even though the product may be made in a different country, and not the one originally associated with the brand name, consumers may still regard the product as "belonging" to the brand. Therefore, when the image generated from a brand name is positive, consumer evaluation of the binational product is likely to be favorable. When the brand image is negative, consumer evaluation of the binational product is likely to be unfavorable.

RESEARCH METHODOLOGY

The Study: The study was exploratory in nature, and the survey method was used to complete it.

Sample Design: The population included consumers in the Gwalior region. Since the data were collected through personal contact, the sample frame included customers who purchased watches during the data collection phase. Individual respondents were the sample elements. A purposive quota sampling technique was used to select the sample. The sample size was 200 respondents, with an equal number of males and females. The average age of the respondents was 35 years.

Tools Used for Data Collection: A self-designed measure was used to collect data on customer perception of watches manufactured in different countries. A Likert-type scale ranging from 1 to 7 was used, where 7 indicated maximum agreement and 1 indicated minimum agreement.

Tools Used for Data Analysis: The measure was standardized through computation of reliability and validity. Item-to-total correlation was applied to check the internal consistency of the questionnaire. Factor analysis was used to identify the underlying factors. A z-test was applied to test the significance of differences.

RESULTS AND DISCUSSION

The consistency of all the items in the questionnaire was checked through item-to-total correlation. Under this method, the correlation of every item with the total is measured and the computed value is compared with the cut-off value (0.1946 at 97 degrees of freedom). If the computed value was found to be less than the cut-off, the item was dropped and termed inconsistent.

Item to Total Correlation: The questionnaire was confirmed in the first iteration: every item had an item-to-total correlation above the cut-off value (0.1946), so all the items were found to be consistent and retained in the questionnaire.
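The item-to-total screening described above can be sketched in a few lines of numpy. This is an illustrative version, not the authors' code: the function names and the simulated data are assumptions, and only the study's cut-off value (0.1946) is taken from the text.

```python
import numpy as np

def item_total_correlations(responses):
    """Corrected item-to-total correlation: each item is correlated with
    the summed score of the remaining items (responses: respondents x items)."""
    x = np.asarray(responses, dtype=float)
    corrs = np.empty(x.shape[1])
    for j in range(x.shape[1]):
        rest = np.delete(x, j, axis=1).sum(axis=1)  # total of the other items
        corrs[j] = np.corrcoef(x[:, j], rest)[0, 1]
    return corrs

def consistent_items(responses, cutoff=0.1946):
    """Indices of items retained under the study's cut-off value."""
    return np.where(item_total_correlations(responses) > cutoff)[0]
```

Items whose index is missing from the returned array would be dropped as inconsistent, mirroring the procedure above.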


Reliability Tests: Reliability tests were carried out using SPSS; the results for the brand image measure are given in Table 1 below.

Table 1: Results of Different Reliability Tests Applied on the Brand Image Measure

Method             Reliability
Cronbach Alpha     0.916
Split Half         0.892
Guttman            0.927
Parallel           0.916
Strict Parallel    0.915

All the reliability test values as indicated in the table above are higher than 0.89, indicating that the measure is highly reliable.
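Cronbach's alpha and the split-half coefficient reported in Table 1 can be computed directly from a raw respondents-by-items score matrix. A minimal sketch, assuming such a matrix; the function names are illustrative and this is not the authors' SPSS procedure:

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                            / x.sum(axis=1).var(ddof=1))

def split_half(responses):
    """Split-half reliability: correlate the odd- and even-numbered item
    halves, then apply the Spearman-Brown correction."""
    x = np.asarray(responses, dtype=float)
    r = np.corrcoef(x[:, 0::2].sum(axis=1), x[:, 1::2].sum(axis=1))[0, 1]
    return 2 * r / (1 + r)
```

Values near the 0.89-0.93 range of Table 1 indicate that the items measure one underlying construct consistently.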

FACTOR ANALYSIS

The KMO and Bartlett's test of sphericity indicate that the data are suitable for factor analysis. The KMO statistic measures sampling adequacy and should be greater than 0.5 for a satisfactory factor analysis to proceed; as the table below shows, the KMO measure here is 0.956. From the same table, Bartlett's test of sphericity is significant: its associated probability (0.000) is less than 0.05, which means that the correlation matrix is not an identity matrix. These facts indicate that the data collected on the brand image of wrist watches are suitable for factor analysis.

KMO and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .956
Bartlett's Test of Sphericity    Approx. Chi-Square    1.445E4
                                 df                    496
                                 Sig.                  .000
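Both diagnostics in the table above can be reproduced from raw data: Bartlett's statistic is computed from the determinant of the correlation matrix, and the KMO index compares squared correlations with squared partial correlations. The sketch below is illustrative (function names are assumptions), not the authors' SPSS output:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(responses):
    """Bartlett's test that the correlation matrix is an identity matrix.
    Returns (statistic, degrees of freedom, p-value)."""
    x = np.asarray(responses, dtype=float)
    n, p = x.shape
    corr = np.corrcoef(x, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return statistic, df, chi2.sf(statistic, df)

def kmo(responses):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (should exceed 0.5)."""
    x = np.asarray(responses, dtype=float)
    corr = np.corrcoef(x, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())
```

A Bartlett p-value below 0.05 together with a KMO above 0.5 licenses the factor extraction reported next.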

Principal Component Analysis with varimax rotation and Kaiser normalization was applied to identify the common underlying factors. The analysis converged on six factors, which were named according to the common nature of their statements. The factor names, the variables that converged on them, and their eigenvalues are given in the table below.

Table 2: Results of Factor Analysis on the Brand Image of Watches

Factor name     Eigenvalue    % of variance
Trustworthy       9.474          28.710
Up-to-date        1.690           5.121
Attractive        1.494           4.528
Latest            1.308           3.965
Dominant          1.168           3.541
Fascinating       1.052           3.188

The statements that converged on these six factors (with factor loadings ranging from .359 to .804) were: Makers of this brand are knowledgeable; Brand is charming; This brand is personally relevant to you; Makers of this brand are innovative; Brand is honest; You admire and respect the people using this brand; You like this brand; Makers understand your needs; Brand is elegant; Price of the brand is higher than its competitors'; You admire this brand; Value of this brand is good; Brand is unique; Brand gives you a feeling of excitement; Brand is prestigious; Brand gives you a feeling of warmth; Brand has effective speed and responsiveness; Brand satisfies your requirements; Brand is stylish; Brand is reliable; You frequently think about this brand; Brand is up-to-date; Price of the brand is lower than its competitors'; You trust the makers; Brand is young; Brand is successful; Brand is outdoorsy; Brand is techno-friendly; Brand is superior to other brands; Brand is daring; You recommended this brand to others; Brand is fascinating.

The effect of country of origin on brand image was evaluated using data collected on the brand image of all four watches (Rolex, Rado, Titan and Citizen) used in the study. The brand image of the two Swiss brands, Rolex and Rado, was compared with that of the two Indian brands, Titan and Citizen, through z-tests computed between the brand image data.

Z-Test Results and Hypothesis Testing

Table 3: Z-test Results (Male and Female Respondents) Applied between the Brand Image of Indian and Foreign Watches

Respondents (male and female)      Mean and SD values        Z-test value    Test verdict
Foreign watches (Rolex, Rado)      M = 192.98, SD = 8.488    33.64           Significant
Indian watches (Titan, Citizen)    M = 148.97, SD = 16.83


Ho: There is no difference in the perception of respondents (male and female combined) towards Indian and foreign watches.

The hypothesis is rejected, as the Z-test value (33.64) calculated between the mean responses towards foreign and Indian watches is significant at the 0% level of significance, indicating that respondents' perceptions of the foreign brands differ significantly from those of their Indian counterparts. The brand image of foreign watches is also superior to that of the Indian watches, as indicated by the higher mean brand image scores of the foreign watches. The variation in the responses towards Indian watches is higher, indicating that respondents' perceptions of the Indian watches were less uniform.

Table 4: Z-test Results (Male Respondents) Applied between the Brand Image of Indian and Foreign Watches

Male respondents       Mean and SD values           Z-test value    Test verdict
Indian watches         M = 148.065, SD = 16.499     35.342          Significant
Foreign watches        M = 194.01, SD = 8.11

Ho: There is no difference in the perception of male respondents towards Indian and foreign watches.

The hypothesis is rejected, as the Z-test value (35.342) calculated between the mean responses of male respondents towards Indian and foreign watches is significant at the 0% level of significance, indicating that male respondents' perceptions of Indian and foreign watches differ significantly.

Table 5: Z-test Results (Female Respondents) Applied between the Brand Image of Indian and Foreign Watches

Female respondents     Mean and SD values        Z-test value    Test verdict
Indian watches         M = 148.97, SD = 16.83    32.054          Significant
Foreign watches        M = 191.95, SD = 8.75

Ho: There is no difference in the perception of female respondents towards Indian and foreign watches.

The hypothesis is rejected, as the Z-test value (32.054) calculated between the mean responses of female respondents towards Indian and foreign watches is significant at the 0% level of significance, indicating that female respondents' perceptions of Indian and foreign watches differ significantly.

Table 6: Z-Test Values Applied between the Female Responses to Foreign Watches and Male Responses to Indian Watches

Respondents                   Mean and SD values          Z-test value    Test verdict
Females - foreign watches     M = 191.95, SD = 8.75       33.116          Significant
Males - Indian watches        M = 148.065, SD = 16.499

Ho: There is no difference in the perception of female respondents towards foreign watches and male respondents towards Indian watches.


The hypothesis is rejected, as the Z-test value (33.116) calculated between the mean responses of female respondents towards foreign watches and male respondents towards Indian watches is significant at the 0% level of significance, indicating that the two sets of perceptions differ significantly.

Table 7: Z-Test Values Applied between the Male Responses to Foreign Watches and Female Responses to Indian Watches

Respondents                       Mean and SD values        Z-test value    Test verdict
Males - foreign watches           M = 194.01, SD = 8.11     34.095          Significant
Females - Indian watches          M = 148.97, SD = 16.83

Ho: There is no difference in the perception of male respondents towards foreign watches and female respondents towards Indian watches.

The hypothesis is rejected, as the Z-test value (34.095) calculated between the mean responses of male respondents towards foreign watches and female respondents towards Indian watches is significant at the 0% level of significance, indicating that the perceptions of female respondents towards Indian watches and male respondents towards foreign watches differ significantly.

The results presented in the seven tables above indicate that the foreign brands have a different brand image than the Indian brands. The implication is that country of origin has a significant impact on the brand image of wrist watches. The vast literature on country-of-origin effects (Reierson, 1967; Nagashima, 1970, 1977; Gaedeke, 1973; Dornoff et al., 1974; Erickson et al., 1984; Johansson et al., 1985; Han, 1989) has consistently found that country-of-origin has an impact on consumers' evaluation of the product. Many studies have investigated whether the country-of-origin effect might be mediated by external environmental factors. For example, researchers have looked at political, economic, cultural, and social factors (see, for example, Gaedeke, 1973; Kaynak and Cavusgil, 1983; Wang and Lamb, 1980, 1983; Hallen and Johanson, 1985; Lumpkin and Crawford, 1985; Shimp et al., 1993; Samiee, 1994). Others have examined the mediating effects of demographic factors (see, for example, Schooler and Sunoo, 1971; Dornoff et al., 1974; Bannister and Saunders, 1978; Wall et al., 1991). In addition, Johansson (1989), Hong and Wyer (1989), Heimbach et al. (1989), and Tse and Gorn (1993) have investigated whether consumers' familiarity and knowledge about the product affect how they view hybrid products. However, in all these studies, only a single cue such as the country-of-origin label (Bilkey and Nes, 1982) has been examined. This is not only unrealistic, but can potentially bias the results (Johansson et al., 1985). The current study has evaluated the mediating effect of the gender of respondents on country-of-origin effects on the brand image of watches. Products with foreign brand names are frequently associated with the country-of-origin (COO) of the brand.
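The two-sample z-tests in Tables 3-7 can be reproduced from the reported means and standard deviations alone, given the group sizes. A minimal sketch; the function name and the per-group sample size are assumptions, since the tables report summary statistics but not n per test:

```python
import math

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Two-sample z-test from summary statistics (means, SDs, group sizes).
    Returns (z, two-sided p-value under the normal approximation)."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # standard error of the difference
    z = (m1 - m2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p
```

For example, with Table 3's summary statistics and an assumed n of 200 per group, two_sample_z(192.98, 8.488, 200, 148.97, 16.83, 200) gives a z value of about 33, the same order of magnitude as the reported 33.64.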

CONCLUSION

This study developed a standardized measure to evaluate the brand image of foreign-sourced goods (Rolex, Rado) and Indian goods (Titan, Citizen). It showed that, in the perception of Indian consumers, both male and female, the brand image of the foreign-sourced watches (Rolex, Rado) is better than the brand image of the Indian watches (Titan, Citizen). The brand image of foreign watches is better among male respondents than among female respondents, and the brand image of Indian watches is better among female respondents than among male respondents. The effect of Country of


Origin proved to be significant for foreign watches, as there is a significant difference in the perceptions of both male respondents (Z-test value: 35.342) and female respondents (Z-test value: 32.054) between the Indian and foreign watches. The study thus suggests that the brand image of foreign watches is higher than that of Indian watches.

References

Aaker, D. (1991), Managing Brand Equity: Capitalizing on the Value of a Brand Name, The Free Press, New York, NY.
Aaker, D. and Keller, K. (1990), Consumer Evaluation of Brand Extensions, Journal of Marketing, 54(1), 27-41.
Alba, J.W. and Marmorstein, H. (1987), The Effects of Frequency Knowledge on Consumer Decision-Making, Journal of Consumer Research, 14 (June), 14-25.
Alba, J.W. and Hutchinson, J.W. (1987), Dimensions of Consumer Expertise, Journal of Consumer Research, 13(4), 411-54.
Alden, D.L. (1993), Product Trial and Country of Origin: An Analysis of Perceived Risk Effects, Journal of International Consumer Marketing, 6(1), 7-25.
Balabanis, G. and Diamantopoulos, A. (2004), Domestic Country Bias, Country-of-Origin Effects, and Consumer Ethnocentrism: A Multidimensional Unfolding Approach, Journal of the Academy of Marketing Science, 32(1), 80-95.
Bannister, J.P. and Saunders, J.A. (1978), UK Consumers' Attitudes towards Imports: The Measurement of National Stereotype Image, European Journal of Marketing, 12(8), 562-70.
Baughn, C.C. and Yaprak, A. (1993), Mapping Country of Origin Research: Recent Developments and Emerging Avenues, in Papadopoulos, N. and Heslop, L. (Eds), Product-Country Images: Impact and Role in International Marketing, International Business Press, New York, NY, pp. 89-115.
Bilkey, W.J. and Nes, E. (1982), Country-of-Origin Effects on Product Evaluations, Journal of International Business Studies, 13(1), 89-99.
Bos, C.A. (1994), The Road to Mexico, Target Marketing, 17(4), 48-9.
Bruning, E.R. (1997), Country of Origin, National Loyalty and Product Choice: The Case of International Air Travel, International Marketing Review, 14(1), 59-74.
Cattin, P.J., Jolibert, A. and Lohnes, C. (1982), A Cross-National Study of Made-in Concepts, Journal of International Business Studies, Winter, pp. 131-41.
Chao, P. (1993), Partitioning Country of Origin Effects: Consumer Evaluations of a Hybrid Product, Journal of International Business Studies, 24(2), 291-306.
Chasin, J. and Jaffe, E. (1979), Industrial Buyer Attitudes toward Goods Made in Eastern Europe, Columbia Journal of World Business, 14(2), 74-81.
Chasin, J. and Jaffe, E. (1983), Industrial Buyer Attitudes towards Goods Made in Eastern Europe: An Update, European Management Journal, 5(3), 180-9.
Cordell, V.V. (1992), Effects of Consumer Preferences for Foreign Sourced Products, Journal of International Business Studies, 23, 251-70.
Darling, J. (1981), The Competitive Marketplace Abroad: A Comparative Study, Columbia Journal of World Business, 17, pp. 53-62.
Dichter, E. (1962), The World Customer, Harvard Business Review, 40, July-August.
Donthu, N. and Cherian, J. (1994), Impact of Strength of Ethnic Identification on Hispanic Shopping Behavior, Journal of Retailing, 70(4), 383-93.

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches

187

Dornofit, R.J., Tarkersley, C.B., White, G. (1974), Consumer Perceptions of Imports , Akron Business and Economic Review, Vol. 5, pp., 26-9. Dzever, S., Quester, P. (1999), Country-of-Origin Effects on Purchasing Agents’ Product Perceptions: An Australian Perspective, Industrial Marketing Management, Vol. 28, pp.165-75.. Erickson, G.M., Johansson, J., Chao, P. (1994), Image Variables in Multi-Attribute Product Evaluations: Country-of-Origin Effects, Journal of Consumer Research, Vol. 11 pp.694-9. Erickson, G.M., Johansson, J.K., Chao, P. (1984), Image Variables in Multi-Attribute Product Evaluations: Country of Origin Effects, Journal of Consumer Research, 11(September), 694-9. Eroglu, Sevgin A. and Karen A. Machleit, (1988), Effect of Individual and Product-Specific Variables on Utilizing Country-of-Origin as a Product Quality Cue, International Marketing Review, 6 (November), 27-41. Ettenson, R., Gaeth, G. (1991), Commentary: Consumers’ Perception of Hybrid Bi-National Products, Journal of Consumer Marketing, 8(4), 13-18. Fiske, S.T. (1982), Schema-Triggered Affect: Applications to Social Perception, in Clark, M., Fiske, S. (Eds), Affect and Cognition: The 17th Annual Carnegie Symposium on Cognition, Erlbaum, Hillsdale, NJ, pp.55-78. Forgas, J., and O’Driscoll, M. (1984), Cross-Cultural and Demographic Differences in the Perception of Nations, Journal of Cross-Cultural Psychology, June, pp., 199-222. Gaedeke, Ralph (1973), Consumer Attitudes toward Products ‘Made In’ Developing Countries, Journal of Retailing, 49 (Summer), 13-24. Garland, B.C., Crawford, J.C. (1987), Satisfaction With Products of Foreign Origin, in Tan, C.T., Sheth, J.N. (Eds), Historical Perspectives in Consumer Research, Association for Consumer Research, Singapore, pp.160-1. Gilovich T. (1981), Seeing the Past in the Present: The Effect of Associations to Familiar Events of Judgments and Decisions, Journal of Personality and Social Psychology, 40 , 797-808 Halfhill, D. 
(1980), Multinational Marketing Strategy: Implications for Attitudes toward Country of Origin, Management International Review, 20(4), 26-30. Hallen, L., Johanson, J. (1985), Industrial Marketing Strategies and Different National Environments, Journal of Business Research, Vol. 13, pp. 495-509. Han, C. Min and Vern Terpstra (1988), Country-of-Origin Effects for Uni-National and Bi-National Products, Journal of International Business Studies, (Summer), 235-55. Han, C.M. (1989), Country Image: Halo or Summary Construct? Journal of Business Research, 29 (February), 151-162. Han, M.C. (1989), Country image: halo or summary construct? Journal of Marketing Research, Vol. 26, pp. 2229. Hawkins, D.I., Best, R.J. and Coney, K.A. (1995), Consumer Behavior: Implications for Marketing Strategy , Irwin, Chicago. Head, D. (1988), Advertising Slogans and the “Made-in” Concept, Internal Journal of Advertising, 7, 237-252. Heimbach, A.E., Johansson, J.K., MacLachlan, D.L. (1989), Product Familiarity, Information Processing and Country-Of-Origin Cues, Advances in Consumer Research, Vol. 16 pp.460-7. Herche, Joel (1994), Ethnocentric Tendencies Marketing Strategy Impact Purchase Behavior , International Marketing Review 11(3): 4-16 Heslop, L, Papadopoulos, N, Bourke, M (1998), An Interregional and Intercultural Perspective on Subculture Differences in Product Evaluation, Canadian Journal of Administrative Sciences, 15(2), pp.113-27. Hong, S., Wyer, R.S. Jr. (1989), Effects of country-of-origin and product-attribute information on product evaluation: an information processing perspective, Journal of Consumer Research, Vol. 16 pp.175-87.

188

Key Drivers of Organizational Excellence

Hooley, G. J., David Shipley, and Nathalie Krieger (1988), A Method for Modelling Consumer Perceptions of Country of Origin, International Marketing Review, 5 (Autumn), 67-76. Howard, D.L. (1989), Understanding How American Consumers Formulate Their Attitudes about Foreign Products, Journal of International Consumer Marketing, 2(2), 7-22. Huber, J., McCann, J. (1982), The Impact of Inferential Beliefs on Product Evaluations, Journal of Marketing Research, Vol. 19 pp.324-33. Iyer, G.R., Kalita, J.K. (1997), The Impact of Country-of-Origin and Country of Manufacture Cues on Consumer Perceptions of Quality and Value, Journal of Global Marketing, 11(1), 7-28. Jacoby, J., Szybillo, G.J., Busato-Schach, J. (1977), Information Acquisition Behavior in Brand Choice Situations, Journal of Consumer Research, 3(4), 209-16. Jaffe, E.D., Martinez, C.R. (1995), Mexican Consumer Attitudes towards Domestic and Foreign Made Products, Journal of International Consumer Marketing, 7(3), 7-26. Jaffe, E.D., Nebenzahl, I.D. (2001), National Image and Competitive Advantage, Copenhagen Business School Press, Copenhagen, Johannson, J. (1989), Determinants and Effects of the Use of ‘Made in’ Labels, International Marketing Review, 6( 1), 47-58. Johansson, J., Douglas, S., Nonaka, I. (1985), Assessing the Impact of Country of Origin on Product Evaluations: A New Methodological Perspective, Journal of Marketing Research, Vol. 22 pp.388-96. Johansson, J.K. and I. D. Nebenzahl (1986), Multinational Production: Effect on Brand Value, Journal of International Business Studies, 17 (3), 101-126. Kahneman, D., and Tversky, A. (1973), On the Psychology of Prediction , Psychological Review, 80, 237-251. Kahneman, D., and Tversky, A (1972), Subjective Probability: A Judgment of Representativeness, Cognitive Psychology, 3, 430-454. Kahneman, Daniel and Amos Tversky (1973), On the Psychology of Prediction, Psychology Review, July, 237251 Kaynak, E., Cavusgil, T. 
(1983), Consumer Attitudes towards Products of Foreign Origin: Do They Vary across Product Classes? International Journal of Advertising, Vol. 2 pp.147-57. Keller, K.L. (1993), Conceptualizing, Measuring, and Managing Customer-Based Brand Equity, Journal of Marketing, Vol. 57, pp., 1-22. Knight, G.A., Calantone, R.J. (2000), A Flexible Model of Consumer Country-of-Origin Perceptions: A Cross Cultural Investigation, International Marketing Review, Vol. 17, pp. 127-45. Lenartowicz, T., Roth, K. (1999), A Framework for Culture Assessment, Journal of International Business Studies, 30(4), 781-798. Levy, M. (1978), Methodology for Improving Marketing Productivity through Efficient Utilization of Customer Service Requirements, PhD dissertation, Ohio State University, Columbus. Levy, S.J. (1978), Marketplace Behavior, Amacom: New York, NY. Liefeld, J.P. (1993), Experiments on Country-of-Origin Effects: Review and Meta-Analysis, in Papadopoulos, N., Heslop, L. (Eds), Product Country Images: Impact and Role in International Marketing, International Business Press: New York, NY. Lillis, C.M., Narayana, C. (1974), Analysis of Made-In Product Images – an Exploratory Study, Journal of International Business, 5(1), 119-27. Lim, J.S., Darley, W.K. (1997), An Assessment of Demand Artifacts in Country-of-Origin Studies, Using Three Alternative Approaches, International Marketing Review, 14(4), 201-16.

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches

189

Linville P.W. (1982), The Complexity-Extremity Effect and Age Based Steriotyping, Journal of Personality and Social Psychology, 42( 2), 193-211. Lumpkin, J. R., J. C. Crawford and G. Kim (1985), Perceived Risk as a Factor in Buying Foreign Clothes: Implications For Marketing Strategy, International Journal of Advertising, 4(2), 157-71. Maheswaran, D. (1994), Country-of-Origin as a Stereotype: Effects of Consumer Expertise and Attribute Strength on Product Evaluations, Journal of Consumer Research, 18 (March), 519-529. Maronick, T.J. (1995), An Empirical Investigation of Perceptions of ‘Made in USA’ Claims, International Marketing Review, 12( 3), 15-30. Martin, I. and S. Eroglu (1993), Measuring a Multi-Dimensional Construct: Country Image, Journal of Business Research, 28, 191-210. McGill, Ann L. and Jill G. Klein (1995), Counterfactual and Contrastive Reasoning in Explanations for Performance: Implications for Gender Bias, in Neal J. Roese and James M. Olson (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Hillsdale, NJ: Erlbaum, 333-352. Mehta, Raj and Russell Belk (1991), Artifacts, Identity, and Transition: Favorite Possessions of Indians and Indian Immigrants to the United States, Journal of Consumer Research 17 (March), 398-411. Morello, G. (1984), The ‘Made In’ Issue, European Research, Vol. 6, pp. 5-21. Nagashima, A. (1970), A Comparative ‘Made-In’ Product Image Survey Among Japanese Businessmen, Journal of Marketing, 41( 3), 95-118. Nagashima, A. (1977), A Comparative ‘Made In’ Product Image Survey among Japanese Businessmen, Journal of Marketing, 41( 3), 95-100. Nebanzahl, I.D., Jaffe, E.D. (1997), Measuring the Joint Effects of Brand and Country Image in Consumer Evaluation of Global Products, Journal of Marketing Practice: Applied Marketing Science, 3(3), 190-207. Nebenzahl, I.D., Jaffe, E.D. 
(1996), Measuring the Joint Effect of Brand and Country Image on Consumer Evaluation of Global Products, International Marketing Review, 13( 4), 5-22. Nes, E.B., Bilkey, W.J. (1993), “A multi-cue test of country-of-origin theory”, in Papadopoulos, N., Heslop, L. (Eds), Product-Country Images: Impact and Role in International Marketing , International Business Press, New York, NY, pp.179-85. Obsersmiller, C. and E. R. Spangenberg (1989), Exploring the Effects of Country-of-Origin Labels: an Information Processing Framework, Advances in Consumer Research, 16, 454-459. Okechuku, C., Onyemah, V. (1996), Nigerian Consumer Attitudes towards Foreign and Domestic Products, Journal of International Business Studies, 30(3), 611-22. Ozsomer, A., Cavusgil, T.S. (1991), Country-of-Origin Effects on Product Evaluations: A Sequel to Bilkey and Nes Review”, in Gilly, M., Dwyer, T.F., Leigh, T.W., Dubinsky, A.J., Richins, M.L., Curry, D., Venkatesh, A., Kotabe, M., Dholakia, R.R., Hills, G.E. (Eds), American Marketing Association, Chicago, IL, pp.269-77.. Padmanabhan, K.H. (1988), Channel Control: Do Successful Franchise Systems Ultimately Become Wholly Owned Chains? Journal of Midwest Marketing, Vol. 3, pp. 17-36. Papadopoulos, N. G. (1993), What Product and Country Images Are and Are not, In N. G. Papadoupoulos and L. A. Heslop, eds., Product-Country Image: Impact and Role in International Marketing, 3-38. New York: International Business Press. Papadopoulos, N. G. and L. A. Heslop (1993), Product-Country Images: Impact and Role in International Marketing, New York: International Business press. Papadopoulos, N., Heslop, L.A. (2002), Country Equity and Country Branding: Problems and Prospects, Journal of Brand Management, 9(4-5), 294-314. Papadopoulos, N., Heslop, L.A., Bamossy, G. (1990), A Comparative Image Analysis of Domestic Versus Imported Products, International Journal of Research in Marketing, 16( 7), 283-94.

190

Key Drivers of Organizational Excellence

Papadopoulos, N., Heslop, L.A., Beracs, J. (1989), National Stereotypes and Product Evaluations in a Socialist Country, International Marketing Review, 7(1), 32-47. Papadopoulos, N., Marshall, J.J., Heslop, L.A. (1988), Strategic Implications of Product and Country Images: A Modeling Approach, Marketing Productivity, European Society for Opinion and Marketing Research, Lisbon, pp.69-90. Parameswaran, R. and A. Yaprak (1987), A Cross-National Comparison of Consumer Research Measures, Journal of International Business Studies, 18 (1), 35-49. Park, C., Milberg, S., Lawson, R. (1991), Evaluation of Brand Extensions: the Role of Product Feature Similarity and Brand Concept Consistency, Journal of Consumer Research, Vol. 18, pp. 185-93. Park, C.W., Jaworski, B.J., MacInnis, D.J. (1986), Strategic Brand Concept-Image Management, Journal of Marketing, Vol. 50 pp.135-45. Park, T.; Loomis, J.B.; Creel, M. (1991), Confidence Intervals for Evaluating Benefits Estimates From Dichotomous Choice Contingent Valuation Studies, Land Economics 67(1): 64–73. Pereira, A., Hsu, C-C., Kundu, S.K. (2005), Country-Of-Origin Image: Measurement and Cross-National Testing, Journal of Business Research, January, pp.103-6 Pereria, A., Hsu, C-C, Kundu, S.K. (2005), Country of Origin Image: Measurement and Cross National Testing, Journal of Business Research, January 1, Vol. 58, pg. 103-106. Peterson, Robert A., and Alain J. P. Jolibert (1995), A Meta-Analysis of Country-of-Origin Effects, Journal of International Business Studies, 26 (4), 883-900. Rao, A., Monroe, K. (1989), The Effect of Price, Brand, Name and Store Name on Buyers’ Perceptions of Product Quality: An Integrative Review, Journal of Marketing Research, 26(3), 351-8. Reierson, C. (1967), Attitude Changes towards Foreign Products, Journal of Marketing Research, Vol. 4 pp.3857. Roth, M.S., Romeo, J.B. 
(1992), Matching Product Category and Country Image Perceptions: A Framework For Managing Country-Of-Origin Effects, Journal of International Business Studies, 23(3), 477-98. Samiee, S (1994), Customer Evaluation of Products in a Global Market, Journal of International Business Studies, 25( 3), 579-604. Schellinck, D.A. (1983), Cue Choice as a Function of Time Pressure and Perceived Risk, Advances in Consumer Research, Association for Consumer Research, Ann Arbor, MI, Vol. 10 pp.470-5. Schooler, R. D. (1965), Product Bias in the Central American Common Market, Journal of Marketing Research, 4, 394-397. Schooler, R.D. (1971), Product Bias in the Central American Common Market, Journal of Marketing Research, 2, 394-397. Shimp, T.A., Saimee, S., Madden, T. (1993), Countries and Their Products: A Cognitive Structure Perspective, Journal of the Academy of Marketing Science, Vol. 21 pp, 323-30. Shimp, T.A., Sharma, S. (1987), Consumer Ethnocentrism: Construction and Validation of the CETSCALE, Journal of Marketing Research, 24(3), 280-289. Sujan, Mita (1985), Consumer Knowledge: Effects on Evaluation Strategies Mediating Consumer Judgments, Journal of Consumer Research, 12, 31-46. Thorelli, H.B., Lim, J.S., Ye, J. (1988), Relative Importance of Country-of-Origin, Warranty and Retail Store Image on Product Evaluations, International Marketing Review, 6(1), 35-44. Thorelli, H.B., Lim, J.S., Ye, J. (1988), Relative Importance of Country-of-Origin, Warranty and Retail Store Image on Product Evaluations , International Marketing Review, 6(1),35-44. Tse, D. K. and G. J. Gorn (1993), An Experiment on the Salience of Country-of-Origin in the Era of Global Brands , Journal of International Marketing, 1(1), 57-76.

Image Marketing: A Comparative Study of Foreign Sourced and Indian Watches

191

Tse, D.K., Gorn, G. (1992), An Experiment on The Salience of Country-of-Origin in the Era of Global Brands , Journal of International Marketing, 1(1), 57-76. Wall, Marjoire, John Liefeld, and Louise A. Heslop (1991), Impact of Country-of-Origin Cues on Consumer Judgments in Multi-Cue Situations: a Covariance Analysis, Journal of the Academy of Marketing Science, 19(2), 105-13. Wang, C., Lamb, C. (1983), The Impact of Selected Environmental Forces Upon Consumers’ Willingness to Buy Foreign Products, Journal of the Academy of Marketing Science, 11 (2), 71-84. Wang, C., Lamb, C.W. (1980), Foreign Environmental Factors Influencing American Consumers Predisposition towards European Products, Journal of the Academy of Marketing Science, Vol. 8, pp., 345-56. Yong Z. (1996), Country-of-Origin Effects: The Moderating Function of Individual Difference in Information Processing , International Marketing Review, 4(4), 267-87.

192

Key Drivers of Organizational Excellence

16

The Role of Culture in Consumer Behavior
Hitendra Bargal, Nitin Tanted, Ashish Sharma

Brands are communicated to consumers according to the culture of the country. How different are consumers across cultures? Can consumer behavior towards a product or service in one country be extended to another? How consumers in one culture are exposed to goods used by people of a different culture is an important question in consumer behavior. The main purpose of this study is to delineate a suitable method for ensuring the success of brands across cultures by communicating in line with each culture. The paper thus highlights the role of culture in consumer behavior.

INTRODUCTION

At present, all major companies market their products beyond their home countries, extending their operations into the global market. Selling products outside the country of manufacture is not the major issue for these companies; managing the total communication about the product outside the country is. The pertinent issue is whether to use the same strategy in all countries or to tailor strategies to the culture of each country. Many countries have now joined communities of nations, and chances are that these diverse markets will be transformed into a single homogeneous market. Multinationals are spreading similar product varieties across most nations, and as more and more consumers come into contact with the material goods and lifestyles of people living in other parts of the world, they have the opportunity to adopt them. Researchers working on consumer behavior across diverse cultures need to understand the cultural connotations of the specific country. Some marketers hold that markets are becoming homogeneous and that a standardized marketing strategy will therefore be possible. A standardized strategy is more feasible if synergy can be developed between shared values and needs; the needs of consumers may be common, but their values may differ. Companies favor a world brand because of the economies associated with such a


strategy. Brands that started as local brands are now well accepted across a large number of countries; Gillette, Parker Pen, and General Motors are in this category. These brands are closely attached to the culture of the country of use. The problem lies in identifying the appropriate time for extending the same brand to other cultures in other countries. Regional differences in culture will require a different overall package, or at least a different way of communicating about the product, so marketers must understand how purchasing decisions are made. Brands are communicated to consumers according to the culture of the country. Marketers introduce new products and services to a market and try to exploit the exposure gained in the new culture. Consumers, in turn, acquire tastes through different channels and absorb different cultural views; this is known as culture transfer. When making purchase decisions, consumers may take the culture of their country into consideration: culture becomes a criterion of evaluation in various purchase decisions. Purchasers hold attitudes towards products belonging to different countries; for example, Ford belongs to America and Mitsubishi to Japan. Marketers therefore cannot afford apathy towards the psychological, social, cultural and environmental characteristics of other countries and societies. A marketer will first have to obtain an in-depth picture of a society's present attitudes and customs with regard to, for instance, preventive medicine and related concepts. Important situations exist in which marketers must understand culture objectively; the most important is how consumers view foreign products and how much they are influenced by their own culture.

NEED OF STUDY

It is often difficult for a company planning to do business in foreign countries to undertake cross-cultural consumer research. In some Islamic countries, for instance, a gathering of four or more people is not allowed, so research techniques such as focus groups cannot be applied in those societies. Culture is part of the external influence on the customer; that is, culture represents effects put on the consumer by other individuals. The definition of culture includes knowledge, belief, art, morals, custom, and any other capabilities acquired as a member of society; culture, as a "complex whole," is a system of interdependent components. Knowledge and beliefs are important parts: in the U.S., efficiency is considered strongly desirable, and the Chinese and Japanese hold comparable beliefs in a different sense; civil societies of different countries have accepted these as norms. Culture has several important characteristics:
1. Culture is comprehensive: all parts must fit together in some logical fashion. For example, bowing and a strong desire to avoid the loss of face are unified in their manifestation of the importance of respect.
2. Culture is learned rather than something we are born with.
3. Culture is manifested within boundaries of acceptable behavior. In American society, for example, one cannot show up in class naked, but wearing anything from a suit and tie to shorts and a T-shirt would usually be acceptable. Failure to behave within the prescribed norms may lead to sanctions, ranging from being hauled off by the police for indecent exposure to being laughed at for wearing a suit at the beach.
4. Conscious awareness of cultural standards is limited. One American spy was intercepted by the Germans during World War II simply because of the way he held his knife and fork while eating.
5. Cultures fall somewhere on a continuum between static and dynamic


depending on how quickly they accept change. For example, American culture has changed a great deal since the 1950s, while the culture of Saudi Arabia has changed much less.
Cultural rules can be classified into three types: formal, informal, and technical; technical cultural rules involve implicit standards as to what constitutes a good product. Language is a significant component of culture, and regional differences may be subtle. Different cultures take different perspectives on several issues, for example:
1. Symbols differ in meaning: white symbolizes purity in the U.S. but death in China. The colors considered masculine and feminine also differ by culture.
2. Manners differ: some cultures have stricter procedures than others, and in some countries there are even standards for gift giving.
The United States has undergone some transformation in its culture over the last several years. It should be remembered, however, that there are great deviations within the culture: on average, Americans have become less materialistic and have opted for more leisure, even as long working hours have also increased. Many changes have occurred in gender roles in US society, one reason being that more women work outside the home than before.
Subculture refers to a culture within a culture. For example, African Americans are, as the group name indicates, Americans; however, a special influence of the African American community is often also present: although this does not apply to everyone, African Americans tend to worship in churches with predominantly African American membership, and church is often a significant part of family life. Different perspectives on the diversity in U.S. culture exist. Subculture is often categorized on the basis of demographics; thus, for example, we have the "teenage" subculture and the "Cuban-American" subculture. While part of the overall culture, these groups often have distinguishing characteristics, and an important consequence is that a person who belongs to two subcultures may experience some conflict. For example, teenage Native Americans experience a conflict between mainstream teenage culture and traditional Indian ways. Values are often strongly attached to age groups because members of an age group have shared exposure. Regional influence also plays a role, both in America and elsewhere. Unlike in the Middle East or Asia, men in the US appear to find assertive, flirtatious women attractive: one survey found that American men rate a woman's attractiveness noticeably higher once she shows interest, and that physical contact and flirty comments such as "you are kinda cute," though tacky, are greatly appreciated.
A company's success in marketing a product or service in a number of foreign countries is likely to be influenced by how similar the beliefs, values and customs governing the use of the product are in those countries. Too many marketers dismiss international expansion with the preconceived notion that their marketing will fail outside the domestic market. A product like jeans, for example, if it


has to be communicated to consumers, must present a lifestyle, since consumers behave in ways that stress a social group image. Lifestyle plays an important role in shaping the culture of a society: young and old people adopt particular lifestyles, which in turn give rise to country-specific cultures. This creates opportunities for certain products in terms of consumption and marketability. Marketability also depends on the culture of adoption of certain products, and products get positioned according to cultural suitability and the beliefs of the consumer. Customers create the need for such products, and a culture of buying such products follows.

The present study covers the following areas:
1. The various dimensions of culture that affect customers' buying decisions.
2. How age group and culture affect each other.
3. How country and society influence culture.
4. The role of subculture in deciding the cultural dimensions of the customer.

Hypothesis

The following hypotheses have been framed for the study:
1. Culture affects the buying decisions of customers selectively.
2. The cultural effect on the buying process varies according to the age group of customers.
3. The cultural effect on the buying process varies according to lifestyle.
4. Culture is becoming flexible in the purchasing of various products.

Review of Literature

Jabbonsky (1995) evaluated whether Pepsi has affected the consumption culture of youth. Bachmann, Roedder John and Rao (1993) described how children are affected by peer pressure in making their purchase choices. Rook (1985) listed various dimensions of consumer behavior and highlighted the relative importance of various factors. Heroux and Church (1992) examined the changes in consumer behavior around wedding anniversary celebrations and gift-giving rituals. Potter (1954) presented the situation of consumers with diverse natures and ambitions. Miller (1995) described how 1980s stereotypes of women shoppers are being dispelled. Fitzgerald (1994) described the role of lifestyle in deciding consumer behavior. McClelland (1961) described the behavioral consumption of customers. Ramesh and Ogden (1995) described the contribution of social classification to consumer behavior patterns. Nisbet (1970) stressed that social bonds play a very important role in deciding consumer behavioral status. Bosanko (1994) described how working women play an important deciding role in the culture of consumer behavior. Hollreiser (1995) posited that clubs played an


important role in developing a consumer culture. Advertising Age (1995) gave reasons for the development of different consumer images. Bar (1995) described different societies within a country's culture. Wynter (1994) specified the role of group effects in deciding the purchasing behavior of consumers. Hoof (1994) mentioned that group differences play an important role in deciding consumer behavior. Cohen (1992) evaluated the purchasing systems of different communities. Rubel (1995) showed that the behavior of the market plays an important role in deciding consumer behavior. Mundell (1994) suggested that age plays an important role in deciding consumer behavior. Wilkie (1995) advocated that demographics play a vital role in deciding the culture of the consumer. Adweek's Marketing Week (1994) described the scene of the vanishing housewife culture and its replacement by the working wife.

RESEARCH METHODOLOGY

The research design selected for this study is exploratory: the work intends to uncover new facts about culture, its relationship to consumer purchase decisions, and its practical role in consumer decision making. The sampling method adopted was simple random sampling, with a sample size of 200. Data collection was based on the following:
1. Questionnaire
2. Observation
Respondents were observed at various places, such as shopping halls, cinema halls, market streets, beauty parlours, hotels, restaurants, schools, colleges and various social gatherings.
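The sampling plan just described (random selection of an equal strength of 50 respondents from each of the four respondent groups) can be sketched in a few lines of Python. This is an illustrative sketch only: the respondent frame, its size, and the `draw_sample` helper are hypothetical, not part of the original study.

```python
import random

def draw_sample(frame, per_group=50, seed=42):
    """Draw an equal-strength simple random sample from each respondent group.

    `frame` maps a group name to its list of candidate respondents
    (hypothetical data; the study reports 50 respondents per group, N = 200).
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    return {group: rng.sample(members, per_group)
            for group, members in frame.items()}

# Hypothetical sampling frame: 300 candidates per group.
frame = {g: [f"{g}-{i}" for i in range(300)]
         for g in ("Students", "Professionals", "Businessmen", "Tourists")}
sample = draw_sample(frame)
total = sum(len(v) for v in sample.values())
print(total)  # 200 respondents in all, 50 per group
```

Drawing within each group separately guarantees the equal group strengths shown in the sample classification below, which a single unstratified draw would not.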

Data Analysis and Interpretation

The sample of N = 200 is classified as follows:

Sample Unit Classification
S.No  Sample Unit     Strength
1     Students        50
2     Professionals   50
3     Businessmen     50
4     Tourists        50

Culture and Buying Decision Classification
Factors                    Rational   Emotional   Total
Using the culture effect      75         25        100
Not using culture             30         70        100
Total                        105         95        200

Culture has a stronger effect at the time of buying.
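The cross-tabulation of culture use against rational versus emotional buying is reported above without a formal test of association. As an illustrative sketch (not part of the original analysis), a Pearson chi-square test of independence can be computed from the observed counts in plain Python; the `chi_square` helper below is hypothetical:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    stat = 0.0
    for i, r in enumerate(row_totals):
        for j, c in enumerate(col_totals):
            expected = r * c / n          # expected count under independence
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# Observed counts from the culture-and-buying-decision cross-tabulation.
table = [[75, 25],   # using the culture effect: rational, emotional
         [30, 70]]   # not using culture
stat = chi_square(table)
print(round(stat, 2))  # 40.6 — far above 3.84, the 5% critical value for df = 1
```

The same function applies unchanged to the other cross-tabulations in this section, for example age group against culture consciousness.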


Relationship between Culture and Buying Decision
Age Group      High Culture   Medium Culture   Low Culture   Total
               Conscious      Conscious        Conscious
20-40               45             50               15         110
40 and above        64             20                6          90
Total              109             70               21         200

Culture and buying decision are interrelated.

Language as a Feature

Language   High   Low   Total
Standard     60     30     90
Regional     55     45    100
Total       115     75    190

Language also shows its effect.

Social Values Selected for Research

Social Value                 Response
Comfort                        25%
Age group effect               30%
Social mobility                10%
Material success               15%
Health and fitness              5%
Appearance                      8%
Relationship to technology      7%

Values also work as a trendsetter.

Dress Code Status in Culture

Dress Code   High   Low   Total
Formal         25     65     90
Informal       20     45     65
Total          45    110    155

Dress code also works as a category.

Festival and Cultural Occasion
                              Cultural Intensity
Festival                      High   Low   Total
Small community festival       75     25    100
Majority community festival    85      5     90
Total                         160     30    190

Festivals also affect buying.
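The social-value response shares reported earlier can be sanity-checked arithmetically. This small sketch (assuming, hypothetically, that the percentages apply to the full sample of N = 200) verifies that the shares form a complete breakdown and converts them to implied respondent counts:

```python
# Reported response shares for the selected social values (from the table above).
shares = {
    "Comfort": 25, "Age group effect": 30, "Social mobility": 10,
    "Material success": 15, "Health and fitness": 5,
    "Appearance": 8, "Relationship to technology": 7,
}
assert sum(shares.values()) == 100  # the shares form a complete breakdown

N = 200  # assumption: the shares apply to the full sample
counts = {value: share * N // 100 for value, share in shares.items()}
print(counts["Age group effect"])  # 60 of 200 respondents
```

Because the shares sum to exactly 100% and N is a multiple of 100, the implied counts are whole numbers and themselves sum to 200.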


FINDINGS

The study has revealed that culture affects owned products and rented products differently: when a consumer owns the product, the level of belongingness is different. Age group plays an important role in framing the dimensions of culture; the different responses shown by different age groups show that age affects culture to a great extent. Lifestyle also plays an important role in determining the dimensions of culture, and lifestyles varied across age groups; different lifestyle classes exhibit different responses to a culture, each behaving according to its own parameters. Language also affects culture and works through its components: the standard and regional formats of language work differently. The dress code also plays an important role in the cultural aspect; formal and informal dress codes have separate effects and decide the mindsets of consumers. Festivals also significantly shape the cultural situation.

CONCLUSIONS AND SUGGESTIONS

1. The study of culture is essential for marketers.
2. The role of culture should be decided during the marketing strategy formulation process.
3. Culture also works through its components.
4. The various variables of culture are independent variables.
5. The diagnosis of cultural factors is, in many cases, universal.



17

Neuromarketing – Band Wagon Between Brain and Brand Image

Ekta Kapur

Marketers have always spent a great deal of time, money and energy trying to find ways to influence buyer decisions. As a result, corporations are likely to continue making use of tools ranging from traditional consumer surveys to brain imaging studies. This study focuses on the emerging concept of neuromarketing and looks in particular at the science behind it. This literature review is an endeavor to throw light on the span of neuromarketing beyond the commercial brand and the study of consumer senses, and it covers some particulars of nascent fields of marketing science such as olfactory marketing and sensory branding. "Neuromarketing is a new field of marketing that studies consumers' sensorimotor, cognitive, and affective response to marketing stimuli." Typically, researchers connect subjects to a functional Magnetic Resonance Imaging (fMRI) machine and watch their brain activity throughout an experiment. Marketing analysts use this information to measure consumer preference more accurately, and then apply this knowledge to help marketers better create products and services and to design more effective marketing campaigns. In addition, this paper considers the commercial value of neuromarketing for the companies adopting the technique and the pioneers in the field who understand biology well enough to master the science, asks how neuromarketing acts like a push button for marketers, and examines its ethical aspects and its future. Some future research directions are suggested. "Aristotle taught that the brain exists merely to cool the blood and is not involved in the process of thinking. This is true only of certain persons." (Will Cuppy, The Decline and Fall of Practically Everybody, 1950) "Brain: an apparatus with which we think we think." (Ambrose Bierce)

NEUROMARKETING: A PROLOGUE

How do consumers think, feel, reason, and select between different alternatives? The psychology of the consumer is influenced by his or her environment (e.g., culture, family, signs, and media), and so is the behavior of consumers while shopping or making other marketing decisions. Marketers are therefore adopting a number of techniques for figuring out ways to improve their


marketing campaigns and marketing strategies so as to reach the consumer more effectively. For example, the advertisements for Idea cellular services, with their endorser, jingle, and concept, have always been striking; likewise, the ambience of Cross River Mall, Delhi can draw a shopper back every weekend.

WHAT'S BEHIND THE PUSH BUTTON?

Marketers have always been interested in how people's brains respond to advertising and other brand-related messages. In many cases they simply ask the consumer; sometimes they observe the consumer in action and attempt to deduce the underlying rationale. Today they have carefully designed experiments that can provide valuable information about things such as which advertisements people respond to or what kind of store layout encourages buying. Neuromarketing offers a quantitative way to test the subconscious effectiveness of ads, jingles, and logos before spending big money on placements. In this field of study, consumer volunteers are wired to functional magnetic resonance imaging (fMRI) machines to record the response of their brains to the names of companies, to the sight, taste or use of particular products or brands, or to stimuli such as the content of advertisements or a sales pitch, by charting and registering their brain activity.

An earlier definition: "Neuromarketing is a new field of marketing which uses medical technologies such as functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) to study the brain's responses to marketing stimuli. Researchers use the fMRI to measure changes in activity in parts of the brain, or EEG to measure activity in specific regional spectra of the brain response, to learn why consumers make the decisions they do, and what part of the brain is telling them to do it."

A newer definition: "Neuromarketing is a new field of marketing that studies consumers' sensorimotor, cognitive, and affective response to marketing stimuli. Researchers use technologies such as functional Magnetic Resonance Imaging (fMRI) to measure changes in activity in parts of the brain, Electroencephalography (EEG) to measure activity in specific regional spectra of the brain response, and/or sensors to measure changes in one's physiological state (heart rate, respiratory rate, galvanic skin response) to learn why consumers make the decisions they do, and what part of the brain is telling them to do it."

DELVE INTO THE EMERGENCE OF THE WORLD OF NEUROMARKETING

Marketers have always intended to explore the roots of consumers' desires. Previously, they relied heavily on taking customers at their word about why they prefer one product (brand) over another. While using the focus-group method and in-depth interviews in test marketing, however, they realized that instead of telling what they really think, participants often tell the marketer what he wants to hear. Neuromarketing draws on information-processing techniques; alternatively put, it researches the physiological and biological processes involved in human decision and action. Ever since the


Egyptian era, the brain has been regarded as a vital part of the human being. The Greeks, who began trying to understand how the brain works, thought that emotions and feelings were produced by the heart; we know today that they are processed in the brain. In the Dark Ages superstition took over from reason and religion was the center of human life, so the brain was left aside for a period of time. Even in the Renaissance little of the brain was studied. The problem is that there are only two ways of studying the brain: the first is making a lesion in the brain and watching the change in behavior, the second is stimulating certain areas and seeing what happens. Around the late 1700s, Franz Gall created a cerebral map because he believed that certain areas of the brain were involved in specific tasks; he called this new science "phrenology", which has since been discredited. There is some truth in his findings: for example, Paul Broca (1824-1880) discovered that there is a specific area in the brain that specializes in language, now called Broca's Area. However, most of the brain does not work by area; it is an immense network of neurons, all communicating with one another. The neuron was not familiar to the scientific community until Santiago Ramon y Cajal (1852-1934), a Spanish neuroscientist, discovered it and was awarded the Nobel Prize for the work (1906). In relation to understanding the brain, Phineas P. Gage (1823-1860) was the first documented case of behavior change due to a lesion in the brain. In September 1848, in Cavendish, Vermont, an incident occurred which was to change our understanding of the relation between mind and brain: Gage was a railroad worker when an explosion blew an iron bar right through his skull. He did not die, but his behavior changed radically; he became fitful, irreverent, impatient, capricious, and indulged in profanity.
For centuries scientists believed that the brain had regions responsible for different functions. These functional areas could be fairly broad, dealing for example with rational information processing or subconscious emotions, but also quite specific, dealing for example with vision or motor control. Even though the brain had been formally studied for hundreds of years, it wasn't until the late 1990s that modern brain imaging technology started to be used for marketing research purposes. Researchers at Harvard University developed this new approach in order to test and improve the effectiveness of ads. The rationale was based on estimates made by neuroscientists indicating that as much as 95% of brain activity could be subconscious. Further, since behavior is largely caused by emotion, understanding emotion, which is largely unconscious, would lead to an understanding of behavior, which is what marketing research ultimately seeks to accomplish. This being the case, consumers might be driven toward preference, purchase and loyalty if their subconscious emotions are stimulated appropriately. More specifically, if marketing targeted brain areas associated with reward, empathy, bonding, curiosity, pleasure, identity, attention, concentration and memorization, marketing efforts would yield superior results. Thus the field of neuromarketing was created, which set out to study the location, timing, strength and frequency of brain activity in response to marketing stimuli. Stimuli used in neuromarketing can be visual, aural, olfactory, tactile or any combination thereof, with the potential to cover a wide range of marketing elements such as products, packaging, pricing, and advertising, among others. Those stimuli that generate the strongest and most positive reactions during research are later reinforced to improve the effectiveness of marketing efforts.
Neuromarketing uses different neuro-imaging technologies, with fMRI (functional Magnetic Resonance Imaging) being the main one. This technology, originally intended for medical research, relies on strong magnetic fields and radio waves to trace the flow of blood


into the areas of the brain where neural activity is taking place. By taking snapshots every couple of seconds, extremely accurate and detailed three-dimensional images can be obtained describing how the brain is reacting to and processing marketing information. Given the required neuroscience equipment, as well as the trained and experienced specialists needed to interpret the data, consultants work closely with hospitals or research facilities. It is thus quite an expensive proposition, costing about three times as much as focus groups. However, as competition increases, technology improves and institutions increasingly lend themselves to this type of research, costs are bound to decrease, making neuromarketing increasingly available to smaller organizations and thus more commonly used. Understanding how and why humans make the choices they do will undoubtedly require a neuromarketing science, yet traditional research methods such as focus groups have inherent flaws in reaching the subconscious. Olfactory marketing, the study of the influence of smell on purchase behavior, and sensory branding, the study of all five senses (touch, taste, smell, sight and hearing), are predecessors of neuromarketing. Focus groups can be dominated by a single forceful personality, thereby distorting results. Polls have proven unreliable and account only for surface responses from respondents, since they are unable to penetrate the complexity of thoughts, emotions, and instincts that shape desires. One of the earliest studies using the newer technology was by Ambler and his colleagues at the London Business School. It asked people, while they were in a MEG scanner, which of three brands they would purchase, and found that familiar brands stimulate the right parietal cortex; the authors pointed to this area as the possible 'location of brand equity'. Neuromarketing is now one of the hottest areas of the trade.
At the most basic level, companies are starting to sift through the piles of psychological literature that have been growing steadily since the 1990s boom in brain-imaging technology. Surprisingly few businesses have kept tabs on the studies until now. "Most marketers don't take a single class in psychology. A lot of the current communications projects we see are based on research from the '70s," says Justine Meaux, a scientist at Atlanta's Bright House Neuro-strategies Group, one of the first and largest neuroscience consulting firms. For an ad campaign that started a revolution in marketing, the Pepsi Challenge TV spots of the 1970s and '80s were almost absurdly simple: little more than a series of blind taste tests, these ads showed people being asked to choose between Pepsi and Coke without knowing which one they were consuming. A study at Baylor College of Medicine in Houston showed that the brain registers a preference for Coke or Pepsi similar to that chosen by the subjects in blind taste tests. In another study, conducted by Richard Silberstein, a neuroscientist with the Brain Sciences Institute at the Swinburne University of Technology in Melbourne, Australia, it was found that successful advertisements generate both high levels of emotional engagement and long-term memory encoding. Although fMRI provides a detailed record of brain activity at any particular time, the procedure is fraught with problems when it comes to using it for commercial neuromarketing research: MRI scanners are large and cumbersome pieces of equipment which must be used within specialist locations such as hospitals or clinics, only one volunteer can be tested at a time, and for many the experience may prove so disagreeable that they are unable to continue. The introduction of the MRI in the 1980s enabled scientists to observe the human brain at work. When we perform a particular task or receive a stimulus, certain regions of our brain


are activated. Different levels of activity, or magnitudes of blood oxygenation, have distinct magnetic properties. The fMRI utilizes these differences in magnetic response to show exactly which parts of the brain are functioning; this data can then be compared to baseline levels to determine the induced activation. The technique is called BOLD (Blood Oxygen Level Dependent) fMRI and has been used most frequently in cognitive neuroscience research. The fMRI apparatus is a large, donut-shaped magnet that detects changes in electromagnetic fields within the ring. In a typical experiment, the subject lies inside the donut, does nothing for thirty seconds, performs a task, and then rests for another thirty seconds. Researchers operating the fMRI compare the signal during the task to the signal when the subject is at rest; regions with strong signals are often responsible for processing that particular task. The prominent carmaker DaimlerChrysler discovered that reward centers in male subjects' brains responded more distinctly to sportier models. Interestingly, in this study the images of cars also activated the region in the brain that recognizes faces, perhaps explaining why some people like identifying themselves with their cars. Meanwhile, Lieberman Research Worldwide, a marketing firm in Los Angeles, is working with Caltech neurobiologist Steven Quartz to provide neuromarketing services to Hollywood studios. In one study, Quartz analyzed the fMRI brain images of audiences as they viewed movie trailers to see which ones created the most brain buzz; he discovered that the orbitofrontal cortex (a part of the prefrontal cortex) was associated with liking or anticipation. In 2001, Bright House, a marketing consultancy, established the Neuro-strategies Group, which aimed to "unlock the consumer mind." Researchers at the University of California, Los Angeles have found that Republicans and Democrats react differently to campaign ads showing images of the Sept. 11 terrorist attacks: those ads cause the part of the brain associated with fear to light up more vividly in Democrats than in Republicans.
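The block-design comparison described above (mean BOLD signal during a task block versus a rest block) can be sketched numerically. The figures below are synthetic, made-up signal values, not real fMRI data; the sketch only illustrates the contrast computation:

```python
# Illustrative block-design contrast: compare mean BOLD signal during
# "task" scans against "rest" scans (synthetic numbers, arbitrary units).
rest_signal = [100.2, 99.8, 100.1, 100.0]   # baseline scans
task_signal = [101.9, 102.3, 101.7, 102.1]  # scans acquired during the task

mean_rest = sum(rest_signal) / len(rest_signal)
mean_task = sum(task_signal) / len(task_signal)

# Percent signal change relative to baseline: the quantity typically
# mapped voxel-by-voxel to locate task-related activation.
percent_change = 100 * (mean_task - mean_rest) / mean_rest
print(percent_change)
```

In a real experiment this comparison is made per voxel and tested statistically (e.g. with a general linear model) rather than as a single pair of means.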

Commercial Perspective

Neuromarketing is used commercially to:

- Identify which of several early edits, or animatic versions, of a TV commercial is most likely to be memorable and to generate positive emotions towards the brand.
- Divulge the extent to which viewers process the information in an advertisement logically and analytically, or imaginatively and emotionally.
- Find out the extent to which viewer attention is maintained at the point where branding occurs in a radio or television commercial.
- Track subconscious responses to different package designs across segments, and evaluate whether forms of visual advertising such as billboards or magazine displays are eye-catching.
- Measure the extent to which accompanying music adds to, or subtracts from, the overall intended message.
- Reveal what is happening in the consumer's mind as he or she studies different design features of a new product or a new idea.

In short, understanding how and why humans make the choices they do will undoubtedly require a neuromarketing science.

Who is into Neuromarketing?

Kellogg's, Procter & Gamble, Pepsi, Coke, Hitachi, K-mart, Adidas, Home Depot, Unilever, Nestle, DaimlerChrysler (DAI), L'Oreal, and others.


Ethics and This Brain Scam

Neuromarketing is a controversial new field of marketing which uses functional Magnetic Resonance Imaging (fMRI), a medical technology, not to heal but to sell products. "It uses fMRI to identify patterns of brain activity that reveal how a consumer is actually evaluating a product, object or advertisement. Thought Sciences marketing analysts use this information to more accurately measure consumer preference, and then apply this knowledge to help marketers better create products and services and to design more effective marketing campaigns," explains a Bright House Institute for Thought Sciences news release issued June 22, 2002.

Reading the brain: an ethical perspective

- Market researchers and their clients would be allowed to invade the privacy of consumers, and the supposed power this gives them could be used to manipulate purchasing decisions.
- An increase in advertising efficiency could boost advertising-related diseases such as obesity.
- Overly sensational press stories have served to heighten misunderstanding of the complexity of the human brain and the limits of current technology.
- No reputable neuroscientist would claim that we are capable of either explaining or predicting real-world decision making, and the idea that the procedure will enable us to identify a "buy button" in the brain is utterly implausible.
- Research subjects occasionally report dizziness or nausea when their heads are moved within the bore of the magnet.

But such potential physical harms are secondary. The real risk of neuromarketing research is to the people, including children, who are its real targets. Marketing is already deeply implicated in a host of pathologies, and the nation is in the midst of an epidemic of marketing-related diseases: children are suffering from extraordinary levels of obesity and pathological gambling, while millions will eventually die from the marketing of tobacco.

FUTURE IMPLICATIONS

Beyond testing consumer reaction to marketing, can brain scans actually predict what people will buy? With marketers eager to edge out the competition, and critics concerned about advertising's effect on the nation's health, the debate on neuromarketing is unlikely to end soon. And although the conclusion of this debate may remain elusive, recent research signals an increasing role for neuromarketing in consumers' lives in the near future. One consultancy's website claims the most advanced neuroscientific research capabilities and understanding of how the brain thinks, feels and motivates behavior; this knowledge of the brain, it says, enables corporations to establish the foundation for loyal, long-lasting consumer relationships. The implications are quite profound, actually: neuromarketing, assuming its science can be translated into a meaningful technology, would finally enable marketers to reach out and pinprick consumers not by using broad strokes like geography, nor even concentric criteria like demographic and psychographic information within a geographic area. Rather,


imagine the implication of knowing one's customer at the deepest biological level! To do so is necessarily to recognize the biological commonalities that define Homo sapiens, and at the same time to wend through the labyrinth of upbringing, education, genetic proclivity and emotion that comprises each individual's quintessential uniqueness. This new research approach is fascinating for media researchers as well, because it opens a totally new perspective on our basic assumptions about how media are "consumed": what of the inner and outer world human beings can perceive, and how this works. It may even explain why the information given in an interview can sometimes be wrong without being a lie. Future development is also contingent on how far the scientific and academic community wants to push the whole nature-versus-nurture issue. By essentially reducing people's choices and behavior to mere biological-physiological responses, perhaps genetically predisposed and influenced, neuromarketing lands right in the middle of this controversy. The scientific establishment has largely attributed human behavior to environmental factors such as parental rearing (nurture), while innate biological and genetic factors (nature) have been mostly dismissed. The denial of human nature may be linked to three widespread and connected dogmas: the Blank Slate (the mind lacks innate traits and dispositions), the Ghost in the Machine (each person has a soul which makes decisions free from biological influence) and the Noble Savage (the human soul is essentially good but corrupted by society). The reason the scientific establishment denies human nature, which in turn affects the credibility of neuromarketing, is its potentially politically incorrect consequences: discoveries might lead to unequal treatment of people and undesirable social change, absolve people of responsibility for their choices, and even reduce life's or religion's meaning and purpose.

CONCLUSION

Neuromarketing is an emerging field that applies medical technologies such as fMRI to scan the brains of test subjects as they consume particular products or look at advertisements. Neuromarketers aim to discover what kinds of stimuli trigger neural responses, to understand the influence of various forms of print ads, TV ads, audio ads, billboards and so on, and to identify the right marketing mix for a product; it is undoubtedly lucrative. Physical harms do exist, but the most advanced research capabilities, built on the profound technology of neuroscience, will soon come to market.

18

Personality and Purchasing Decisions of Bikes

Hitendra Bargal, Ashish Sharma, Gayatri Gupta

Personality has been at the core of the differentiated positioning of products. We present personality as an inner psychological characteristic that influences consumers and customers. Personality is an important concept because it helps in dividing consumers into different segments. Personality is also a consistent factor, which gives marketers reason to predict consumer behavior. Personality can explain the reason for the consumption of certain products, and it shows why different products appeal to different groups of customers. Personality is also subject to change: a dynamic marketing campaign can change some fraction of personality in a phased manner.

INTRODUCTION

Marketers have always been interested in the personality of consumers. Personality is not only of academic interest; it has practical interest for marketers as well. Personality traits play a very important role in consumers' purchase decisions, which may be why advertisers have targeted specific personality figures in their messages, and why product and brand images acquire personality status as a perception factor. When we see the advertisement "Boost is the secret of my energy", the toughness comes across through the figure of Kapil Dev. As an inner psychological characteristic, personality helps in dividing consumers into segments, gives marketers a consistent basis for predicting consumer behavior, and explains why certain products are consumed and why different products appeal to different groups of customers; a dynamic marketing campaign can change some fraction of personality in a phased manner.


Researchers have examined consumer purchase and consumption situations by treating the consumer's personality on the basis of appearance and position. Brand personality normally emerges as a single factor that affects various categories of products; marketers have defined this term as brand personification. Youth play an important role in today's marketing decisions and represent potential for many product- and marketing-driven companies. The MTV culture has made youth very conscious about their status and about the products they own. Companies today are compelled to base their marketing strategies on the lifestyles and attitudes of young customers. This paper examines the correlation between personality and the bike purchasing decisions of young customers.

Objectives

1. To examine the relationship between the purchase decision of a bike and the personality factor.
2. To study the exclusivity conferred by personality in the buying process.
3. To explore the scope for applying personality in the marketing strategies of bike manufacturing companies.
4. To differentiate between innovative and dogmatic personality factors.
5. To assess changing preferences with changes in personality.

REVIEW OF LITERATURE

J. P. Guilford (1959) gave a detailed account of the relationship between personality and purchasing. Milton Rokeach (1960) explained the open- and closed-minded dispositions of personality, which often decide decision-making situations. William Wright (1998) posited that the self-concept plays an important role in purchase decision making. Richard Petty (1988), exploring the utility of the need for cognition, presented the case of advertising situations that depict different personality situations, and identified the relationship between personality and ad effectiveness. Satya Menon and Barbara E. Kahn (1995) identified the impact of context on the variety-seeking behavior of customers in product choices, explaining the different conditions that lead to different product purchases. Adrian Furnham and Patrick Heaven (1999) explained the situations in which personality and social behavior give rise to many buying situations. Elizabeth C. Hirschman (1980) identified a positive relationship between innovativeness, novelty seeking and consumer creativity, finding that innovativeness was an important factor contributing to novelty-seeking behavior.

RESEARCH METHODOLOGY

As the objectives of the research clearly concern the inclination of youth towards purchasing bikes, we decided to begin with an exploratory research design. This was necessary to observe trends among young customers, and we observed the choices of young customers among the different brands of bikes. Once the initial exploratory work was finished, it was necessary to move to a descriptive research design.


The descriptive research design gave us a more detailed picture of the market. With its help we covered the following areas:

1. The parameters of purchasing that normally apply to a bike purchase.
2. Analysis of the attitude and aptitude of young customers in purchasing.
3. Analysis of the pattern of purchasing along with different personality traits.
4. Highlighting the different variables that shape the personality of young customers.
5. Examination of whether companies take personality into account when marketing their bikes.

We selected a sample of 300 young respondents for this research work, chosen on a random basis. Their reference materials were also studied for this research.

Data Collected

Match between personality and present bike

Age group    Response rate
20-25        20%
25-30        40%
30-35        30%
35-40        15%

Bikes have special linkages with the personality of youth: young buyers give high preference to the personality factor, and personality and bikes work in combination with each other.

Personality as an exclusive factor

Age group    Response rate
20-25        40%
25-30        10%
30-35        20%
35-40        25%

Exclusive factors play an important role in many decisions; all consumers have some exclusive factors, and youth give personality high priority.

Personality traits               Strength
Innovative personality figures   60%
Rigid personality figures        20%
Non-decisive figures             20%

Innovation is the key to many product purchase decisions; it is perceived in many different ways, and youth respond to innovative appeals.

Excitement stage from a new ad campaign

Excitement    Strength
Positive      70%
Negative      20%
Confused      10%

Excitement is an important factor for youth, and the communication of excitement should carry a well-defined meaning.

Feeling towards the bikes

Feeling         Strength
Emotional       60%
Rational        20%
Non-affected    20%

Feeling is a most important factor; the communication of feeling also plays an important role, and companies give importance to feeling in different ways.

Chance for change

Change               Strength
Highly confident     50%
Partially confident  20%
Non-confident        30%

There are many factors which can change the decision towards a brand switchover.

DATA ANALYSIS & INTERPRETATION

(i) The mean rate of match between personality and bike is 26.25%.

(ii) The mean response where personality is an exclusive criterion is 23.75%, which shows a moderate response rate for taking personality as an exclusive factor.

(iii) The data also prove that most young customers are innovative in personality; strangely, a sizeable share also falls in the rigid class.

(iv) Young customers are mostly influenced by the ad campaigns of bikes. The strength of positive response is 70%, but surprisingly a share of customers remains in a confused state.

(v) Young customers are emotional about their bikes; they show a strong emotional attitude and love for the bike as an important asset.
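The mean response rates reported in points (i) and (ii) can be re-derived directly from the age-group tables above; a minimal sketch:

```python
# Re-computing the reported means from the age-group response rates above.
# Age groups: 20-25, 25-30, 30-35, 35-40.

match_rates = [20, 40, 30, 15]      # match between personality and present bike (%)
exclusive_rates = [40, 10, 20, 25]  # personality as an exclusive factor (%)

mean_match = sum(match_rates) / len(match_rates)
mean_exclusive = sum(exclusive_rates) / len(exclusive_rates)

print(mean_match)      # 26.25
print(mean_exclusive)  # 23.75
```

The figures agree with the means stated in the analysis: 26.25% and 23.75%.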

CONCLUSION

Personality and bike purchasing are related to each other. Innovation should be highlighted in the ad campaigns of bikes, and bike manufacturers should consider the feelings and emotions of young customers while preparing their marketing strategy. Excitement plays an increasingly important role, as do exclusive factors in many decisions. Personality and bikes go in combination with each other; innovation is perceived in many different ways, and youth are appealed to by it. The communication of feeling also plays an important role, companies may give importance to feeling in different ways, and the communication of excitement is supposed to carry a well-defined meaning.

References

Guilford, J. P. (1959), Personality, McGraw-Hill, New York.
Rokeach, Milton (1960), The Open and Closed Mind, Basic Books, New York.
Menon, Satya and Kahn, Barbara E. (1995), The Impact of Context on Variety Seeking in Product Choices, Journal of Consumer Research: An Interdisciplinary Quarterly, 22(3), 285-95.
Hirschman, Elizabeth C. (1980), Innovativeness, Novelty Seeking and Consumer Creativity, Journal of Consumer Research: An Interdisciplinary Quarterly, 7(3), 283-95.
Petty, R. E. and Cacioppo, J. T. (1984), The Effects of Involvement on Responses to Argument Quality and Quantity: Central and Peripheral Routes to Persuasion, Journal of Personality and Social Psychology, 46, 69-81.
Furnham, Adrian and Heaven, Patrick (1999), Personality and Social Behavior, Oxford University Press, US.
Matthews, Gerald, Deary, Ian J. and Whiteman, Martha C. (2003), Personality Traits, 2nd edition, Cambridge University Press, UK.
Crozier, W. Ray (1997), Individual Learners: Personality Differences in Education, Routledge, UK.


19

Emerging Trends in the Indian Retailing
B.V.H. Kameswara Sastry, D.V. Chandra Shekar

Over the last decade, India has registered an average Gross Domestic Product (GDP) growth of 6%, and per capita income has increased at about the same rate during the last five years. The economy today is strong and vibrant owing to the progressive liberalization of government policies, increases in foreign direct investment, increased global competitiveness, investment in infrastructure, and growth in domestic as well as international demand for Indian goods and services. According to the widely discussed Goldman Sachs report of October 2003, over the next 50 years Brazil, Russia, India and China - the BRIC economies - could become a much larger force in the world economy. The report also states that rising incomes may see these economies move through the 'sweet spot' of growth for different kinds of products as local spending patterns change. Retail is the new buzzword in India. Many believe that retail in India is a recent phenomenon, and this perception may not be exactly incorrect, considering that through the 1990s organized retail in India added just one million sq. ft of space a year. The pace has changed dramatically from 2001 onwards, and it is estimated that close to 40 million sq. ft of retail space will be added in a short span of 2-3 years. The Global Retail Development Index developed by A.T. Kearney has ranked India first, with its cities showing a rapid pace of change. This change is a reflection of the changes in the Indian consumer, his lifestyle and his habits. The main aim of this study is to examine the trends in the retailing business in India.

INTRODUCTION While barter would be considered to be the oldest form of retail trade, since independence, retail in India has evolved to support the unique needs of our country, given its size and complexity. Haats, Mandis and Melas have always been a part of the Indian landscape. They still continue to be present in most parts of the country and form an essential part of life and trade in various areas. The PDS or the Public Distribution System would easily emerge as the single largest retail chain existing in the country. The evolution of the public distribution of grains in India has its origin in the 'rationing' system introduced by the British during World War II.


The system was started in 1939 in Bombay, and subsequently extended to other cities and towns. By 1946, as many as 771 cities/towns were covered. The system was abolished post-war; however, on attaining Independence, India was forced to reintroduce it in 1950 in the face of renewed inflationary pressures in the economy. The system, however, continued to remain an essentially urban-oriented activity. In fact, towards the end of the First Five Year Plan (1956), the system was losing its relevance due to comfortable food grain availability. At this point in time, PDS was reintroduced, and other essential commodities like sugar, cooking coal and kerosene oil were added to the commodity basket of PDS. There was also a rapid increase in ration shops (now increasingly called fair price shops or FPSs), whose number went up from 18,000 in 1957 to 51,000 in 1961. Thus, by the end of the Second Five Year Plan, PDS had changed from a typical rationing system to a social safety system, making food grains available at a 'fair price' so that households' access to food grains could be improved and such distribution could keep a check on speculative tendencies in the market. The PDS has been functioning for more than four decades now, and its greatest achievement lies in preventing famines in India. The Canteen Stores Department and the Post Offices in India are also among the largest networks of outlets in the country, reaching populations across state boundaries. Table 1 below indicates the numbers of these retail outlets.

Table 1: India's Largest Retail Chains

Chain         Outlets
PDS           463,000
Post Office   160,000
KVIC          7,000
CSD Stores    3,400

Source: Business World, Marketing White Book, 2005

The Khadi & Village Industries Commission (KVIC) was also set up post-Independence, and today there are more than 7,000 KVIC stores across the country. The co-operative movement was again championed by the government, which set up the Kendriya Bhandars in 1963; today they operate a network of 112 stores and 42 fair price shops across the country. Mother Dairy, another early starter, controls as many as 250 stores selling foods and provisions at attractive prices. In Maharashtra, Bombay Bazaar, which runs stores under the label Sahakari Bhandar, and Apna Bazaar run large chains of co-operative stores. In the past decade, the Indian marketplace has transformed dramatically. From the 1950s to the 80s, however, investments in various industries were limited due to low purchasing power in the hands of the consumer and the government's policies favoring the small-scale sector. Initial steps towards liberalization were taken in the period 1985-90, when restrictions on private companies were lifted, and in the 1990s the Indian economy slowly progressed from being state-led to becoming "market friendly". While independent retail stores like Akbarallys, Viveks and Nalli's have existed in India for a long time, the first attempts at organized retailing were noticed in the textiles sector. One of the pioneers in this field was Raymond's, which set up stores to retail fabric and also developed a dealer network to do so; these dealers sold a mix of fabrics of various textile companies. The Raymond's distribution network today comprises 20,000 retailers and over 256 exclusive showrooms in over 120 cities of the country. Other textile manufacturers who set up their own retail chains were Reliance, which set up Vimal showrooms, and Garden Silk Mills, which set up Garden Vareli showrooms. It was but natural that with the growth of textile retail, readymade branded apparel could not be far behind, and the next wave of organized retail in India saw the likes of Madura Garments, Arvind Mills, etc., set up showrooms for branded menswear. With the success of the branded menswear stores, the new-age departmental store arrived in India in the early nineties.

DIMINISHING DIFFERENCE BETWEEN RURAL AND URBAN INDIA

Rural India accounts for over 75% of India's population, and this in itself offers a tremendous opportunity for generating volume-driven growth. While food grain production has steadily increased, the tax benefits associated with incomes in rural areas have fuelled the increase in the spending power of the average rural family. These factors have created a vast market, which has led to a rush among companies to tap this latent demand. Rural India boasts of nearly 42,000 haats and 6,800 mandis. It is interesting to note that LIC sold 50% of its policies in rural India in the year 2002-03. Of the two million BSNL mobile phone connections, 50 per cent are in small towns and villages. Of the 20 million who have signed up for Rediffmail, 60% are from small towns, and of the 100,000 who have transacted on Rediff's shopping site, 50 per cent are from small towns. The 20 million Kisan Credit Cards (KCC) issued so far stand comparison with the 25 million credit plus debit cards issued in urban India, and a whopping Rs. 65,000 crores has been sanctioned under the KCC scheme. This increase in incomes has happened in both urban and rural India.

THE SIZE OF RETAIL TRADE IN INDIA

The Global Retail Development Index developed by A.T. Kearney ranked India as the most favored retail destination in the year 2005; a similar report brought out by the same firm in 1995 had ranked India at number 16. In the short span of a decade, India has emerged as a nation that cannot really be ignored by global retailers. The large size of its population has always made it a large market; the size of the population, a steady rate of GDP growth and increasing mobility between the middle and upper classes now make it a lucrative retail market. With a contribution of 14 per cent to the national GDP and employment of 7 per cent of the total workforce (only agriculture employs more), the retail industry has emerged as one of the pillars of the Indian economy.

CHANGING REGULATORY ENVIRONMENT

The Government has initiated the process of aligning policies to help escalate retail growth. Value Added Tax (VAT) was rolled out in April 2005. Earlier, most retailers were not liable to pay taxes, as the first seller paid the tax; the VAT regime has brought all retailers into the tax loop and made seamless trade possible for organized retailers. The peak customs duty on goods imported into India was reduced to 20% from the previous 40%. The Government has asked the Ministry of Commerce to prepare a detailed note on organized retail; various rules and regulations that influence large-format retailing will be studied, including the Agricultural Produce Marketing Committee (APMC) Act. All these measures are being taken to set the stage for the advent of FDI in retail trade. These steps in the direction of economic growth have created conditions that invite participation in the retail sector.

CHALLENGES TO RETAIL DEVELOPMENT IN INDIA

Organized retail in India is little over a decade old. It is largely an urban phenomenon, and the pace of growth is still slow. Some of the reasons for this slow growth are:

1. Retail is not recognized as an industry in India. This hampers the availability of finance to existing and new players, affecting growth and expansion plans.

2. The high cost of real estate inflates retailers' expenditure. The sector also faces very high stamp duties on transfer of property, which vary from state to state (12.5% in Gujarat and 8% in Delhi). Strong pro-tenancy laws make it difficult to evict tenants, the problem is compounded by unclear titles to ownership, and land-use conversion and the legal processes for settling property disputes are time-consuming and complex.

3. Poor roads and the lack of a cold-chain infrastructure hamper the development of food and fresh grocery retail in India. Existing supermarkets and food retailers have to invest substantial money and time in building a cold-chain network.

4. Sales tax rates vary from state to state; organized players face a multiple-point control and tax system, while there is considerable sales tax evasion by small stores. In many locations, retailers also face multi-point octroi. With the introduction of Value Added Tax (VAT) in 2005, certain anomalies in the existing sales tax system that cause disruptions in the supply chain are likely to get corrected.

CONCLUSION

The above discussion reveals that the retailing business in India is growing at a fast rate and benefiting the economy in many ways. At the same time, several challenges stand as obstacles in the path of retail growth in India. Rural India is a virtual goldmine, and marketers are attempting to cash in on the growth of consumption levels in the rural market. The latest formats have ushered India into stronger organized retailing, and in the coming years it will strengthen further.

References

A.T. Kearney (2005), Measuring Globalization: Who's Up, Who's Down?, Foreign Policy, May/June 2005, pp. 52-60.
Benefits of Modern Trade to Indian Economy, Confederation of Indian Industry.
Images KSA Technopak, India Retail Report, 2005.
Indian Retail Real Estate, the Road Ahead, ICICI White Paper, PricewaterhouseCoopers, 2005.
Swapna Pradhan, Retailing Management, Tata McGraw-Hill Publishing Company Limited, New Delhi.
Retail's Coming Face-Off, Business Today, 2006.
The Emerging Retail Landscape, Business Today, 2005.
The Marketing Whitebook, Business World, 2005.


20

Relationship Marketing: A Key to Customer Retention
B.V.H. Kameswara Sastry, A.V.N. Sundar Rao, D.V. Chandra Shekar

Relationship marketing is described as the process of creating, maintaining and enhancing strong, value-laden relationships with customers and other stakeholders (Kotler et al., 1998). The application of this marketing concept differs greatly across products and services, but in general it involves mutually valuable and ongoing relationships, resulting in increased consumer retention over the long term. In an increasingly competitive business environment, many companies are realizing the importance of maintaining their customer base and maximizing the lifetime value of their customers. This requires an entire business approach towards building and maintaining profitable relationships at many levels and within all marketing communications channels. Relationship marketing seeks to build interdependence between partners and relies on one-to-one communications, historically delivered through the sales force. With the growth of marketing databases and the internet, the ability to reach customers individually has become a viable strategy for a wide range of firms, including consumer products companies. The purpose of this article is to highlight how relationship marketing is a key to customer retention and to understand its dynamics.

RELATIONSHIP MARKETING

Since the mid-twentieth century, the philosophy of relationship marketing has been supported more and more strongly by marketers. Relationship marketing is a form of marketing that evolved from direct response marketing, in which emphasis is placed on building long-term relationships with customers rather than on individual transactions. It involves understanding customers' needs as they go through their life cycles, and it emphasizes providing a range of products or services to existing customers as they need them. Relationship marketing is not about having a "buddy-buddy" relationship with customers; customers do not want that. It uses the event-driven tactics of customer retention marketing, but treats marketing as a process over time rather than as single unconnected events. By molding the marketing message and tactics to the lifecycle of the customer, the relationship marketing approach achieves very high customer satisfaction and is highly profitable. Relationship marketing is about having an indirect conversation with the customer by analyzing their behavior over time. The relationship marketing process is usually defined as a series of stages, and these stages carry many different names depending on the marketing perspective and the type of business:

Interaction > Communication > Valuation > Termination
Suspect > Prospect > Customer > Partner > Advocate > Former Customer
Awareness > Comparison > Transaction > Reinforcement > Advocacy

Awareness

This stage happens even before a person's first visit to the business. Do customers hear about the business from a friend? Do they read about it on another site or in a magazine or newspaper? Do they find the address on a search engine, or see it on a business card, newsletter, or even an invoice? People do not just magically appear at the business; the entrepreneur has to work to get them there. It is vital for the company to obtain customers' addresses, and people do not usually just volunteer them. Entrepreneurs need to find an incentive that will motivate customers to provide their addresses; one of the surest ways to get someone's address is to give something away.

Comparison

Before a person can consider buying something from the enterprise, they have to compare the company's product or service with competitors'. Meaningful content and credibility are the keys to success at the comparison stage: the more information the company can provide, the higher the chance that it will make the sale. The comparison stage also provides the company with an opportunity to learn more about visitors' needs and, in doing so, to fine-tune its offering to those needs.

Transaction

The transaction stage is where money (or credit card information) changes hands. Unless the company is giving products away for little or no profit, the transaction stage will only take place if the company has played its cards right during the awareness and comparison stages. The transaction stage should be viewed as the beginning, not the end, of the relationship; it sets the stage for the highly profitable stages that follow.

Reinforcement

The reinforcement stage is where the company adds value to its customers' purchases by showing them how to maximize the value and pleasure those purchases can provide. It presents the company with an opportunity to position itself apart from the competition by thanking customers for their purchase and paving the way for future purchases. At this point the enterprise begins the process of turning satisfied customers into word-of-mouth ambassadors for the firm.


Advocacy

Advocacy is the final stage of the cycle. It takes place when the company provides its customers with the tools, or feeling of community, they need to become promoters, motivating past customers to drive new visitors to the business and pre-selling the firm through word-of-mouth recommendations, the most effective form of advertising ever devised. Using the relationship marketing approach, companies customize programs for individual consumer groups and for the stage of the process they are going through, as opposed to some forms of database marketing where everybody gets virtually the same promotion, with perhaps a change in offer. The stage in the customer lifecycle determines the marketing approach used with the customer. A simple example would be sending new customers a "Welcome Kit", which might carry an incentive to make a second purchase; if 60 days pass and the customer has not made a second purchase, the company follows up with an e-mailed discount.

Companies use customer behavior over time (the customer lifecycle) to trigger the marketing approach. Say a customer visits the business site every day and then just stops: something has happened. They are unhappy with the content, they have found an alternative source, or perhaps they are no longer interested in the subject. This inaction is a trigger telling the company that something has to change in the way the customer thinks about the site and perhaps even the service. The company should react and then look for feedback from the customer: improve the content, e-mail them a notice, and if the customer starts visiting again, the feedback has been given. The cycle is complete until the next time the data indicates a change in behavior, and the company needs to react to the change with communication.

Suppose the same customer makes a first purchase. This is an enormously important piece of data, because it indicates a very significant change in behavior. The company now has a new, deeper relationship; it should react and look for feedback. It sends a welcome message, thanks the customer for the trust displayed in the transaction, and provides a second-purchase discount, then awaits feedback from the customer in the form of a second purchase or increased visits. Perhaps the feedback is negative, a return of the first purchase; the company reacts to this new feedback and repeats the process.

All of the marketing decisions in the examples above were triggered by customer behavior, the actions of the customer as tracked by their activity (or lack of it). This activity tracked over time is the customer lifecycle. If the company can track customer lifecycles, it can begin to predict them, and if it can predict them, it can target its marketing efforts at the most critical trigger points in the customer lifecycle. This approach eliminates a lot of wasted marketing spending and creates very high ROI (Return on Investment) campaigns: the company spends less money overall, and the money it spends is much more effective. All of this is accomplished by using the data customers create through their interactions with the company to build simple lifecycle models or rules to follow. The relationship marketing approach then uses this lifecycle model as a "timing blueprint", targeting the right customers at the right time with the most profitable offer.

Relationship marketing has the potential to radically transform the company that adopts its principles and practices. It involves the ongoing process of identifying and creating new value with individual customers and then sharing the benefits of this over a lifetime of association. It involves understanding, focusing and managing ongoing collaboration between suppliers and selected customers for mutual value creation and sharing, through interdependence and organizational alignment.

CUSTOMER RETENTION

A famous proverb says that "it costs six times more to get a new customer than to retain an existing customer". At the core of relationship marketing is the notion of customer retention. Studies in several industries have shown that the cost of retaining an existing customer is only about ten percent of the cost of acquiring a new one, so it can often make economic sense to pay more attention to existing customers. It is claimed that a five percent improvement in customer retention can increase profitability by between 25 and 85 percent (in terms of net present value), depending on the industry; a study published in the Harvard Business Review concluded that "some companies can boost profits by almost 100% by retaining 5% more of their customers". The increased profitability associated with customer retention efforts occurs because the cost of acquisition is incurred only at the beginning of the relationship: the longer the relationship, the lower the amortized cost. Account maintenance costs decline as a percentage of total costs (or of revenue). Long-term customers tend to be less inclined to switch and less price-sensitive, which can result in stable unit sales volumes and increases in sales volume. They may initiate free word-of-mouth promotions and referrals, and they are more likely to purchase ancillary products and high-margin supplemental products. Customer retention marketing is a tactically driven approach based on customer behavior; it is the core activity going on behind the scenes in relationship marketing, loyalty marketing, database marketing, permission marketing, and so forth. The basic philosophy of a retention-oriented marketer:

Past and current customer behavior is the best predictor of future customer behavior

In general, this is more often true than not, and for action-oriented activities like making purchases and visiting websites the concept really shines through. What matters is actual behavior, not implied behavior: being a 45-year-old man is not a behavior, it is a demographic characteristic. Suppose there are two groups of potential buyers who surf the Net:

1. People who are a perfect demographic match for the company's site, but have never made a purchase online anywhere.

2. People who are outside the core demographic match for the company's site, but have purchased repeatedly online at many different websites.

If the company sent a 10 percent off promotion to each group, asking them to visit and make a first purchase, the response would be higher from the repeat buyers than from the demographically targeted group (the perfect demographic match). This effect has been demonstrated for years in many types of direct marketing. It works because actual behavior is better at predicting future behavior than demographic characteristics. A company can tell whether a customer is about to defect by watching their behavior; once it can predict defection, it has a shot at retaining the customer by taking action.

Active customers are happy (retained) customers, and they like to "win"

They like to feel they are in control and smart about the choices they make, and they like to feel good about their behavior. Marketers take advantage of this by offering promotions of various kinds to get consumers to engage in a behavior and feel good about doing it. These promotions range from discounts and sweepstakes to loyalty programs and higher-concept approaches such as thank-you notes and birthday cards. Promotions encourage behavior: if a company wants its customers to do something, it has to do something for them, and if it is something that makes them feel good (like they are winning the consumer game), they are more likely to do it. Retaining customers means keeping them active with the company; if a company does not keep customers active, they will slip away and eventually no longer be customers. Promotions encourage this interaction of customers with the company, even if the company is just sending out a newsletter or birthday card. The truth is, almost all customers will leave eventually; the trick is to keep them active and happy as long as possible, and to make money doing it.

Retention marketing is all about: Action – Reaction – Feedback – Repeat

Marketing is a conversation. Marketing with customer data is a highly evolved and valuable conversation, but it has to go back and forth between the marketer and the customer, and the company has to listen to what the customer is saying. A study conducted by a researcher on 50 companies found that companies look at average customer behavior: a company looks at every customer who has made at least two purchases and calculates the number of days between the first and second purchases. This number is called "latency", the number of days between two customer events. Perhaps the company finds it to be 30 days. Now look at one-time buyers: if a customer has not made a second purchase by 30 days after the first, the customer is not acting like an "average" multi-purchase customer. The customer data is saying something is wrong, and the company should react to it with a promotion. This is an example of the data speaking for the customer; the company has to learn how to listen.
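The latency rule described above can be sketched in a few lines of code. The customers, dates and the 30-day threshold below are hypothetical, chosen purely for illustration:

```python
from datetime import date

# Hypothetical sketch of the "latency" rule: measure the gap between a
# customer's first and second purchases, and flag one-time buyers who
# have gone past the average latency without buying again.

AVERAGE_LATENCY_DAYS = 30  # average gap observed among repeat buyers

purchases = {
    "customer_a": [date(2009, 1, 1), date(2009, 1, 20)],  # repeat buyer
    "customer_b": [date(2009, 1, 5)],                     # one-time buyer
}
today = date(2009, 3, 1)

for customer, dates in purchases.items():
    if len(dates) >= 2:
        latency = (dates[1] - dates[0]).days
        print(customer, "latency:", latency, "days")
    elif (today - dates[0]).days > AVERAGE_LATENCY_DAYS:
        # Past the average latency with no second purchase: the data is
        # telling the marketer to react, e.g. with a promotion.
        print(customer, "overdue for a retention promotion")
```

Here customer_b, 55 days past a single purchase, would be flagged for a promotion, while customer_a's 19-day latency sits comfortably inside the 30-day average.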

Retention marketing requires allocating marketing resources

A company has to realize that some marketing activities and customers will generate higher profits than others. It can keep its budget flat, or even shrink it, while increasing sales and profits if it continuously allocates more of the budget to highly profitable activities and away from lower-profit ones. This does not mean the company should "get rid" of some customers or treat them poorly. It means that when the company has a choice, as it frequently does in marketing, instead of spending the same amount of money on every customer, it spends more on some and less on others. It takes money to make money, and unless the company gets a huge increase in its budget, the money has to come from somewhere. The study conducted by the researcher revealed the following illustration: a company has 1,000 customers and an annual budget of Rs.1,000. It spends Rs.1 on each customer each year, and for that Rs.1 it gets back Rs.1.10 in profit. That is an ROI of 10 percent; the company gets back Rs.1,100 for spending Rs.1,000. Now, what if spending Rs.2 each year on a certain 50 percent of customers would bring back Rs.8 in profit per customer? That is a 400 percent ROI. Where does the company get the extra Rs.1? It takes it away from the other 50 percent of customers. The company spends the same Rs.1,000 in total and takes back 500 (half the customers) x Rs.8 = Rs.4,000. If a company always migrates and reallocates marketing money towards higher-ROI efforts, profits will grow even as the marketing budget stays flat. The company has to develop a way to allocate resources to the most profitable promotions, deliver them to the right customer at the right time, and not waste time and money on unprofitable promotions and customers. This is accomplished by using the data customers create through their interactions with the company to build simple models or rules to follow. These models are a listening system, like the "30-day latency" model above; they allow the data to speak to the company about the customer.
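The budget-reallocation arithmetic in the illustration above can be checked with a short script; the figures are the passage's own, and amounts are kept in paise so the calculation stays exact:

```python
# Re-creating the budget-reallocation arithmetic from the illustration above.
# Amounts are in paise (1 rupee = 100 paise) to keep the figures exact.

customers = 1000
budget = 1000 * 100                  # Rs.1,000 annual budget, in paise

# Flat allocation: Rs.1 (100 paise) on each customer returns Rs.1.10 each.
flat_return = customers * 110        # Rs.1,100 back
flat_roi = (flat_return - budget) / budget

# Reallocation: the same Rs.1,000 spent as Rs.2 on the most profitable
# half of the customers, each bringing back Rs.8 in profit.
targeted = customers // 2            # 500 customers
targeted_return = targeted * 8 * 100 # Rs.4,000 back, in paise

print(flat_roi)                 # 0.1
print(targeted_return // 100)   # 4000 (rupees)
```

The flat allocation nets 10 percent on the Rs.1,000 budget, while the reallocated budget brings back Rs.4,000, matching the 500 x Rs.8 figure in the text.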

EXPLORE THE VALUE OF CUSTOMER RETENTION

Customer retention is not only a cost-effective and profitable strategy; in today's business world it is a necessity. This is especially true when companies remember that 80 percent of their sales come from 20 percent of their customers. Given these statistics, it is surprising that most marketing and sales campaigns are designed for the new customer. Perhaps companies need to rethink their marketing and sales strategies; after all, many experts will tell you that it is five times more profitable to spend on marketing and advertising to retain current customers than to acquire new ones. There is a solution, however: sophisticated technology and database equipment have made it possible for specialized firms to pursue customer retention through database marketing programs. Establishing a detailed client database allows these companies to keep track of the personal information and individual preferences of all their customers, which enables them to provide better service and value. With effective implementation of customer databases, companies will be able to re-establish contact with customers and work successfully towards increasing customer retention, repeat sales, and customer referrals. To achieve the objectives of the database and customer retention programs, the entire campaign should be designed and carried out with the customer in mind. The exercise will only be effective if the customer recognizes and associates some value with being part of the company's database; if customers do not perceive value in the program, all of its communications, coupons, special offers, and newsletters will be discarded.

RELATIONSHIP MARKETING: A KEY TO CUSTOMER RETENTION

According to an old saying, “Any customer walking into your showroom is like the proverbial goose. Extract one egg per visit and go for repeat business. Don’t try to make a fast buck and vanish overnight.” Winning new customers is important, but retaining them is critical to the financial health of a business. And as it costs considerably less to retain a customer than it does to win a new one, focusing on a retention strategy makes perfect business sense. Customers who stay with a company tend to be satisfied with the relationship and are less likely to switch to competitors, making it difficult for competitors to enter the market or gain market share.

Relationship Marketing: A Key to Customer Retention


Regular customers tend to be less expensive to service because they are familiar with the process, require less “education”, and are consistent in their order placement. Increased customer retention and loyalty also makes employees’ jobs easier and more satisfying; in turn, happy employees feed back into better customer satisfaction in a virtuous circle. The dynamics of relationship marketing and its relationship with customer retention can be summarized in the following points:

• Redefining customer loyalty and retention in today’s marketplace
• Understanding the challenges, and why satisfied customers are not enough
• Examining the “R” in CRM and looking at the consumer view
• Understanding the nature of loyalty in order to drive behavioral change
• Key considerations when planning a retention strategy
• The importance of generating real understanding and consumer insight
• Understanding the need for a holistic view
• Directing company efforts to where they will have the most impact
• Ensuring that the needs of all stakeholders are met
• Integrating customer retention into long-term business planning
• Changing the business focus from share of market to share of wallet
• What a company can expect a loyalty program to achieve, and what it cannot

Customer relationships are the lifeblood of every good company. Relationships between a company and its customers, distributors, employees and referral sources are vital to sustained growth and stability, and loyal relationships with these valued individuals make for a strong bottom line. So why do so few companies focus on customer relationship marketing? Probably the most frequent answer is a lack of understanding of the potential profits in keeping existing customers happy versus constantly acquiring new ones. When a company considers that two-thirds of customers switch from one company to another because of a perceived attitude of indifference from the former company, it makes sense to focus as much attention on customer retention as on customer acquisition.

A company should not be a victim of indifference. It should develop a good relationship marketing program that takes into consideration both customer retention and customer acquisition through relationship marketing. With well-planned relationship marketing efforts, a company can improve retention, and that will improve the bottom line. A few value-add strategies that a company can use for retaining customers include marketing tools such as:

• Membership cards and programs that entitle customers to special offers, discounts, or preferential treatment;
• Welcome, acknowledgement, sales recognition, and thank-you statements;
• After-sales satisfaction and complaint inquiries and surveys;
• Event-oriented communications in which the customer is genuinely interested;
• Enhanced and empowered customer, after-sales, and technical support.

Business owners tend to be driven, both financially and philosophically, to make cold calls, pursue new contacts, and acquire new customers.


Key Drivers of Organizational Excellence

But often, little thought is given to nurturing relationships with the customers they already have. Given that acquiring a new customer can cost five times more than retaining an existing one, this can be a costly approach. Customers who are continuously courted, interacted with, and reminded of the company’s presence are less likely to go racing off when competitors come calling. Making those customers feel recognized, known and appreciated can go a long way toward locking up their loyalty, and it is also a great way to get them referring others.

Regardless of how effective customer retention efforts are, some relationships will inevitably break down. The manifold reasons for this can include: the customer felt that pricing was too high or unfair; the customer has an unresolved complaint; the customer has taken up a competitor’s offer; or the customer has left feeling that the company does not care.

Certain customers will simply suspend their relationship with the company and its business. As companies across the world race to develop more loyal and profitable customer relationships, the competition is fueling a spectacular burst of technological growth. At the heart of this rapid expansion is relationship marketing, a strategy to increase customer retention by making loyalty more convenient for the customer than disloyalty. This concept has been supercharged in recent years, thanks to the emergence of powerful, reasonably priced new technologies that speed the flow of mission-critical information across the enterprise. Any enterprise can now use currently available technology to interact with customers individually, online, on the phone, at the point of purchase or through an automated sales force. And any enterprise can use currently available technology to customize its products or services in order to offer goods and services that meet the real individual needs of its customers.

Relationship Marketing Strategies for Customer Retention

These include:

• Developing a core service around which to build a customer relationship.
• Customizing the relationship to the individual customer.
• Augmenting the core service with extra benefits.
• Pricing services to encourage customer loyalty.
• Marketing to employees so that they perform well for customers.

The 30Rs listed in the Annexure do not form a simple sequence, as the Rs are not sequential by nature. Nor are they, with the exception of R1, in ranking order. In reality, the Rs appear concurrently in different constellations. As they are composed of many qualities, they can partly overlap. Numbering them is a practical matter: there are many of them, and the numbers make it easier to keep track of them.

CONCLUSION

Relationship marketing has affected the way in which products and services are marketed to their target customers. It has created a need for increased customer retention, and it provides marketers with unique opportunities to reach potential and existing customers and build strong retention with them. A long-term approach towards creating profitable retention with end customers as well as business partners has allowed many businesses to maximize lifetime value and profitability. “Start treating the customer like your ‘sons-in-law’ and you can’t go wrong. Your products and services (daughters) will be happier. And they will, in turn, make you happy and prosperous”.



Annexure

The following 30 Relationships (30Rs) for customer retention can be considered for successful retention of customers:

R1: The Classic Dyad: the relationship between the supplier and the customer.
R2: The many-headed customer and the many-headed supplier.
R3: Mega Marketing: the real “customer” is not always found in the marketplace.
R4: The Classic Triad: the customer-supplier-competitor relationship.
R5: Alliances change the market mechanisms.
R6: Market mechanisms are brought inside the company.
R7: The Service Encounter: interaction between the customer and front-line personnel.
R8: Inter-Functional and Inter-Hierarchical Dependency: the relationship between internal and external customers.
R9: Relationships via Full Time Marketers (FTMs) and Part Time Marketers (PTMs).
R10: Internal Marketing: relationship with the “employee market”.
R11: The non-commercial relationship.
R12: Physical Distribution: the classic marketing networks.
R13: The electronic relationship.
R14: Mega alliances.
R15: Quality providing a relationship between production and marketing.
R16: Personal and social networks.
R17: The two-dimensional matrix relationship.
R18: The relationship to external providers of marketing services.
R19: The relationship to the customer’s customer.
R20: The owner and financier relationship.
R21: Para-social relationships via symbols and objects.
R22: The law-based relationship.
R23: The criminal network.
R24: The mental and physical proximity to customers vs. the relationship via market research.
R25: The customer as member.
R26: The relationship to the dissatisfied customer.
R27: The green relationship.
R28: The knowledge relationship.
R29: The mass media relationship.
R30: The Monopoly Relationship: the customer or supplier as prisoners.


21

Retail Transformation - Competition or Conflict!

Kanwal Thakkar
Swati Tomar

A new idea whose time has come cannot be stopped by anyone. Especially when it proposes an essentially finer monetary and behavioral value scheme to its clientele, it gradually takes over the old way, and other stakeholders have no option but to acknowledge it and transform accordingly. Modern retailing is one such inevitable reality which has started a spin in the traditional retail scenario and is soon liable to capture the retail sector and further expand its scope. All elements in the delivery chain would do better to accept it and prepare, rather than trying to rob the customers of a superior way of life by promulgating fallacy and protecting vested interests. The question which then arises, in the face of this foreseeable change, is the future of the traditional outlets (kiranas), with a network so dense that most of us have a kirana store within five minutes of our residence. The kiranas also operate on a low-cost model with family-owned properties (an extension of the house), with most of the family working in the store itself. They cater to impulse needs at short notice, with early opening and late closing times which suit many families. The supermarkets, on the other hand, propose an elite ambience with economy for all sections of society. This paper deals with the dilemma of whether to call this confrontation competition or conflict. Both rivals have competitive advantages: the kirana has a low cost structure, convenient location and customer intimacy, while modern trade’s large outlets have product width and depth, disintermediation and technology. As in any competitive market, the smartest survives and the consumers win.

INTRODUCTION

India as a country is dominated by the unorganized retail market. “Kirana stores”, the traditional retail outlets, work with an age-old set-up of a shop in the front and house at the back. More than 99% of retailers function in less than 500 sq. ft. of area. The producers distribute goods through C & F agents to distributors and wholesalers; retailers source the merchandise from wholesalers and reach the end-users. The merchandise price gets inflated to a great extent by the time it reaches from manufacturer to end-user, and selling prices are largely not controlled by manufacturers. The vibrant change in the market over the past decade has made retailing probably the hottest area to venture into.


There is a fundamental shift stirring in the market. There are several reasons why organized retailing has now come into the picture: increased purchasing power of customers, a young Indian consumer population, a shift in consumption patterns, more Tier-2 cities emerging across the country, and better infrastructure facilities and improved logistics. Though this transformation has been becoming more and more visible, several hindrances exist on the path to organized retailing. These barriers include the taxation and legislative systems, barriers to FDI, supply chain bottlenecks, customers’ preferences as per their social class, and lack of industry status.

With considerable transformations happening in the Indian retail sector, it is apt to consider how these transformations will affect the Indian consumer. An array of budding retail formats enables consumers to have numerous alternatives depending on their standard of living. An additional appealing aspect is that the retail density (the number of retail outlets per given number of consumers) in the Indian market is much higher than in numerous developed markets, where retail density is much lower due to the diffusion of modern retailing practices. At a very fundamental level, the major consideration for a retail player is its target customer; the demographic and psychographic factors of the consumer are significant criteria for the planners. In comparison with the US, the big malls cannot have huge stores all over the cities and hence might not be able to cater to the instant needs which the local kiranas satisfy. Regardless of consumers’ lifestyles, the kirana stores would persist in serving their regulars as before, possibly with a noteworthy modification in the variety of products stocked.

It is noteworthy that a consumer in the upper socio-economic section may procure clothes or some fast moving consumer goods through contemporary retailing but would still depend on the kirana stores for categories like vegetables, fruits and groceries, perhaps making an occasional premium purchase at a sophisticated mall. While in the developed markets daily essentials might be stocked for a week to avoid travelling some distance, in India, due to the widespread presence of kiranas, daily needs would be purchased from the unorganized retail shops present at every nook and corner. However, discount stores like Subhiksha, with their everyday low pricing, take the battle of the organized to the unorganized to some extent. Using their supply chain provisions, these utilitarian stores source products at an attractive price and pass the benefits on to the consumers. These kinds of stores may command strong loyalty from customers, as we have a cultural preference for fresh fruits and vegetables, and such products are available even in urban markets which are quite far from their source; the attractive prices add to the appeal of such utility stores. The presence of these stores in the neighbourhood might pose stiff competition for the kirana stores rather than for modern retail hypermarkets. But even in this set-up, discount retailing is not sufficient to cover the complete consuming population, and in most urban and rural areas the conventional grocer will remain imperative for the consumer. However, for products like electronic appliances and apparel, modern retail formats like Big Bazaar or Shoppers’ Stop will attract consumers from several parts of a city, as they may not mind travelling to save on durable products and apparel.


THE IMPACT OF LOCATION

The contemporary retail stores which deal in FMCGs have to bring in some form of differentiation, and uphold that differentiation, to capture the unorganized stores’ clientele. A location-based advantage can be rewarding: a store that wishes to use the rewards of a supply chain allied with well-known brands should make sure it is not within the proximity of a discount store or a supermarket. In most of the metros there is a cluster of malls in a single location, making it difficult for consumers to associate any specific proposition with any of these stores. Consequently, if a supermarket is to proffer sustained discounts on all its commodities, it has to depend on its supply-chain competencies and also have a massive quantity of sales. Location is an imperative means to draw the consumer traffic needed for such huge volumes in such stores. However, this might not hold much relevance for a small neighbourhood store where volumes are not very high.

CREATING PRIVATE BRANDS

A significant opportunity for the large retailers is to develop private brands that can create a proposition of value from the consumer’s perspective. A proven example of this is the Aldi chain in Germany: almost 95 per cent of the commodities in this chain consist of local brands. Wal-Mart could not budge the strength of this chain, due to its strong positioning amongst buyers who enjoyed the value offered to them through the various brands. This model is often referred to as one of the major causes of Wal-Mart’s failure in Germany (Berman and Evans, 2002).

If we look at the Indian market, there is colossal scope for retailers to explore in this area of private branding, due to the diverse spread of spending power and generic competition at the lower end of the market. In product categories like FMCG, fruits and vegetables, which may be bought on a daily basis, good quality can draw a premium price from consumers at the higher end who are not very sensitive to price as compared to quality. This section of consumers wants the best of products and would not hesitate to spend on a private brand which is not well established in the market, willingly paying a premium price for such brands provided they get better quality (Varley and Rafiq, 2004). Thus the option of private branding may be suitable for an organized retailer trying to compete with the local seller while catering to the high-end consumer.

However, if we look at the bottom of the pyramid, a massive consumer section of the Indian socio-economic continuum, we find the presence of generic competition. The reality of minimum wages and high price sensitivity makes the consumer balance his desire to use branded products from well-known retailers against his income. For these consumers, quality means an acceptable level of performance across categories. Hence these consumers may balance their budget by trading off across product categories: buying a few branded products and compromising with unbranded offerings in the other categories (Newman and Cullen, 2002). There is therefore massive potential for developing private labels suitable to the various segments, and Big Bazaar has already started developing such private labels in durable categories. However, even with these strategies, the organized retailers might never be able to have the whole cake, as a very significant share of it will remain with the


unorganized stores, given their location and accessibility advantage.

Coming to the kirana scenario: with the materialization of modern trade, anxiety is building on neighbourhood kirana stores. But it is not the end of the world for them, as they are swiftly acclimatizing themselves to the requirements of the new consumer. They are making positive changes in their stores: buying swipe-card machines, recruiting salespeople to cover long hours, and providing home delivery even for small-ticket items. The situation is cut-throat. Traditional (kirana) stores need to advance to endure the competition, and long-term survivors will need to implement diverse strategies to differentiate themselves from their organized competitors.

The importance of kirana stores is underlined by the fact that consumers need convenience in retail. The neighbourhood kirana store will always maintain the favourable factor of convenience: unlike a developed market where consumers travel some distance for shopping, Indian consumers have the kirana stores to serve them on all days, all through the year. Small shops can adopt several strategies to compete in the given scenario, including extending credit for monthly purchases, quick home or phone delivery of small-ticket items, opening shops in areas which are beyond the reach of big retailers, and diversifying into the sale of SIM cards and mobile phones. The biggest barrier to the big retailers eating up the small kiranas is the reach of kirana stores to the customer for his daily individual needs. Who would travel to a mall for a loaf of bread or a bar of soap?

BENEFITS OF DEVELOPMENT OF ORGANIZED RETAILING IN INDIA

• Employment Creation: The organized retail boom is anticipated to generate much-needed mass employment in India, and would help develop India’s second- and third-tier cities to international standards. Many professionals like real estate dealers, builders, architects, display designers and retail shop managers, and workers like sales persons and security staff, are likely to get employment due to the development of world-class retail shops (Levy and Weitz, 2004).
• Small business development: Mega malls mushrooming in and around the cities give rise to small businesses near such malls providing services to the large number of shoppers visiting the malls.
• Economic growth: The organized sector will contribute to the economy in terms of GDP and employment generation.
• New Opportunities: Rural and small units can find a ready market for their products and thus generate income and gain private brand equity for their products. Large retailers would depend on the unorganized manufacturing sector and small units for successfully sourcing cost-effective goods for supply through their retail outlets.
• Income generation: Many women, household workers, artisans and small-scale business operators can become a part of the supply chain by enhancing their skills to bring about quality consciousness and increase their real incomes. By eliminating the middle operators, these working groups will get better remuneration and there will be a significant improvement in their standard of living.
• Development in infrastructural facilities: There will be augmentation in the transport facilities required for providing goods and services to the retail outlets. It will help our farmers get their produce to the marketplace in time. The supply chain can in turn offer prospects for a host of manufacturing, trading and services; air, road and rail transport are going to profit from this, and quite a few domestic airlines are already increasing their cargo services considerably to meet the requirements of this sector.
• Improved Inventory Management Systems: Big retailing requires a well-managed inventory system. The finest practices available in the world in this field would come into the country with the entry of players like Wal-Mart into the logistics of retail marketing.
• Wider market for consumer durables: Products like mobiles, washing machines, refrigerators, televisions and air conditioners would find a wider platform.
• Direct benefits to the consumers: Consumers would get wider choices of products and cheaper prices. This will augment the consumption rate and ultimately create more employment and wealth. Local retailers will start offering enhanced discounts to capture maximum consumer segments. Customer satisfaction would be enhanced through value-added services like online shopping, home delivery through web portals, and the ability to compare products better.
• Education scenario: Growth in organized retailing has enhanced the education set-up in the country; a new area of education has emerged in the name of retail management. With retail giants entering, many prospects are opening up in the educational sector, and to face the expected talent shortage higher educational institutions have started introducing new courses. Retail career areas include store operations, supply chain management, human resource management, entrepreneurship, IT, sales, etc.
• Elimination of middlemen: Budding organized food retailing in India will bring momentous change in agribusiness management. Supermarkets and fresh food outlets will procure directly from the farmers, who are likely to get better prices for their products as these mega retailers acquire farm products directly; many of the middlemen would be eliminated.
• Shopping experience: The emergence of malls and supermarkets has given fashion and festivals a new face in the country. They have become an entertainment hub for families and friends to celebrate special occasions.
• E-tailing: Retailing is now moving towards e-tailing, with retail showrooms starting to offer multi-channel online retailing facilities. Indian society will soon shift to purchasing products and services online in a large way. Online shopping destinations like eBay and Amazon and comparison shopping portals like Froogle and shopping.com will become more popular in the Indian community, and new domestic e-commerce retailers will be born.

THE OTHER SIDE OF THE COIN

With almost 25% of India living below the poverty line, the fruits of development and organized retailing might result in social dissatisfaction amongst the lower side of the population. With almost 40 million people in India depending on the traditional retail sector, the trade unions and traders fear these people’s employment will suffer if retail giants are permitted to enter India’s retail market. Most of the employment opportunities that organized retail promises to create are for semi-skilled and unskilled labour; this is not useful for the majority of highly educated Indian youth. Due to delays in processing and corrupt intermediaries among government employees, the benefits of the elimination of middlemen are not reaching the real farmers.

The coming of the big players into the retail market would be a threat to the friendly neighbourhood kirana stores, and the personal touch one used to get from the service of kirana stores would be missed in these sophisticated shopping giants. The lifestyle of the community would change. Food consumption patterns are already seeing a major change, as fast foods and junk foods replace the more nutritional conventional foods. Wholesale commodity markets would disappear and there would be considerable job loss. The inevitable journey towards plain cultural homogeneity that started with globalization would be speeded up: local products would go off the shelf and more popular products would take their place.

COMPETITION OR CONFLICT!

The question to be deliberated upon now is whether we take this confrontation as a competition or a conflict. India is a price-sensitive market, and in such a scenario the question which arises is: can the ambience and shopping experience of shopping malls beat the ‘kirana’ stores? Does this competition move beyond just a price war? Looking at the situation from a different perspective, ‘kirana’ stores target the masses whereas organized retail services cater to a specific class of people. It would be unfair to apply the same rules at both places when they cater to two different sets of people.

The mall-going consumers are more or less global; that is, they have moved from price to value. This section is more value conscious: they will buy the most exclusive item and concurrently want to acquire the finest price for it. A price-conscious consumer, alternatively, will look at price alone and choose the product which is the cheapest. The consumer’s tastes and choices are becoming global; even the food cooked in their households needs ingredients which are available only in the supermarkets. On the other hand, as the options increase, consumers will demand more, and the constraint of space will become a problem for the ‘kiranas’. Possibly, what will come about is that the generalized retailer will convert into a specialized one. We can presume that the ‘kirana’ store will be around for the next 3-4 decades easily, as it is the neighbourhood store and convenience works in its favour. It also has two channels to buy from: traditional suppliers and hypermarkets. So we can say that the ‘kiranas’ might face a layout and product threat, but they are not under any business threat.

Where prices are concerned, a consumer is motivated to buy larger quantities of goods, and there is immense scope for impulse shopping in the organized retail shops. Eliminating the impulse-shopping factor, a consumer should save money in the supermarkets, as they offer the best prices; however, that is not what happens. Consumers more or less end up making impulse purchases in the alluring set-up of these stores. Competition between these two set-ups will in fact help expand the organized retail market. And of course the consumers will take the final call.


CONCLUSION

This paper presents the fast-approaching retail boom scenario, likely to happen sooner rather than later, and discusses its impact on Indian traditional retail outlets, with the likely positive and negative effects of this revolution. While the organized sector poses cut-throat competition for the kiranas, the fact remains that, India being a country with diversified social classes, there is scope for both to survive. The emergence of a developed retail sector will pose a competition rather than a threat to the traditional stores, which will help these stores change their outlook and ways of working. As it is said, the best one survives the war; and this is not a conflict but a golden opportunity for co-survival, where the only winner is the consumer.

References

Berman, B. and Evans, J.R. (2002), Retail Management, Pearson Education: New Delhi.
Levy, M. and Weitz, B.W. (2004), Retailing Management, Tata McGraw-Hill: New Delhi.
Newman, A.J. and Cullen, P. (2002), Retailing: Environment and Operations, Vikas: New Delhi.
Varley, R. and Rafiq, M. (2004), Principles of Retail Management, Palgrave Macmillan: Houndmills, Basingstoke, Hampshire.


22

Creating Customer Value through Ecotourism for Development of Sustainability in the Rural Regions

Moumita Mitra
Sanjoy Kumar Pal

An Effective Tool to Promote Sustainability in Rural Regions

Ecotourism has become more than a catch phrase for nature-loving travel and recreational activities. Ecotourism is concerned with preservation of the world’s natural and cultural environments and with sustaining their diversity. It accommodates and entertains visitors in a manner that is minimally intrusive or destructive to the environment, and sustains and supports the native cultures in the locations where it operates. The genuine meaning of ecotourism lies in the responsibility shared by both the visitors and the service providers: saving the environment around oneself and preserving its natural luxuries and forest life is what ecotourism is all about. This article attempts to uncover, through a field survey, the various ways a rural area can be promoted, using sustainability and carrying-capacity methods, to develop it into an international ecotourism destination.

INTRODUCTION

India is a country of diverse interests and offers a number of tourist attractions throughout its vast land: a great diversity of cultures, traditions, landscapes and ways of life, and a wide variety of tourism options for the discerning tourist (Shepherd et al., 1997). Tourism, India's second largest export industry, is the quickest and simplest way to increase GDP and employment in India (Paul, 2001). It is an economic activity where India has a unique advantage, with its natural beauty, its wildlife, and its warm, friendly people. Tourism has emerged as the country's third largest net earner of foreign exchange, exceeding $5,731 million in 2006-07. Tourism provides employment in both the organized and unorganized sectors for high-skilled, semi-skilled and unskilled manpower, and gradually bridges the gap between the rich and the poor.

WHAT IS ECOTOURISM?

The relationship between the environment and tourism is not just fundamental but highly complex. There is a mutual dependence between the two that has often been described as


symbiotic (Green, 2002). In simple terms, since tourism benefits from being located in good-quality environments, those same environments ought to benefit from protection measures aimed at maintaining their value as tourist resources. Under this concept, tourists are taken to national parks, bird sanctuaries, wildlife sanctuaries, natural habitats of local tribes, and backwater areas of a particular region (Green, 2002). People enjoy being in the lap of nature; they spend money in such regions, and a part of these funds is spent on the upkeep and development of these natural areas. Visitors, however, are not allowed to pollute the waters of rivers and lakes, to kill the wild animals of these places, or even to ignite fires in the jungles, lest those fires engulf them.

Objectives of the Study

1. To understand the result of the demonstration effect.
2. To examine how sustainable development can secure economic benefits while preventing environmental degradation.

RESEARCH METHODOLOGY

The study was exploratory in nature. The data were collected using a purposive sampling technique; individual visitors to the rural regions of the Dooars were the sample elements. The total sample size was 1000 respondents, divided into four age groups. The data were collected on a categorical scale using a self-developed questionnaire consisting of 10 statements.

RESULTS AND DISCUSSION

The data collected through the field survey are given below:

Influence of Demonstration Effect

Response                              Age 16-19   Age 20-23   Age 24-27   Age 28-31   % of Influence
Visitors strongly saying Yes               50          92         130          63            33.5
Visitors strongly saying No               160         181          90          23            45.4
Visitors saying neither Yes nor No         13          28          18          11             7.0
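The percentage column can be reproduced from the age-group counts. A minimal sketch (not the authors' code), using the counts above and the reported sample of 1000 respondents:

```python
# Row counts by age group (16-19, 20-23, 24-27, 28-31),
# transcribed from the table above.
responses = {
    "strongly yes": [50, 92, 130, 63],
    "strongly no":  [160, 181, 90, 23],
    "neither":      [13, 28, 18, 11],
}
SAMPLE_SIZE = 1000  # total respondents reported in the methodology

for label, counts in responses.items():
    total = sum(counts)
    pct = 100 * total / SAMPLE_SIZE
    print(f"{label}: n={total}, {pct:.1f}% of sample")
```

The row totals (335, 454 and 70) reproduce the reported percentages over the sample of 1000; they sum to 859, which suggests the remaining 141 respondents did not answer this item.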

The study showed that the locals of the Dooars were losing their ethnic culture and were being influenced by the visitors' culture. Even Dokra art, a unique style of bronze metalwork, is losing its ethnicity and originality. Artwork that depicted models of Nataraja in the vibrant dancing pose, Rama, Sita, Narayana, etc., is now being replaced by models of Hollywood and Bollywood actresses, models and macho men. The demonstration effect has also penetrated the 4T (Tea, Teak, Tribal, and Tobacco) concept of tourism in the Dooars region. Tribal culture, traditions, food habits, dress - everything has been influenced by the demonstration effect. Bamboo-leaf clothing has been replaced by Levi's jeans and capris; brew, which was once served in bamboo leaves, has been replaced by whisky served in goblets (Green, 2002). Again, as more and more visitors flock to these places, the resulting development (construction of hotels and apartments, new roads, new attractions, etc.) can cause a direct loss of habitats.


Due to the development of ski-fields, the ecological balance has been significantly altered and, where deforestation has occurred, the risks associated with landslides and snow avalanches have greatly increased. At a more localized scale, other impacts become apparent (Green, 2002). Destruction of vegetation at popular visitor locations, through trampling or the passage of wheeled vehicles, is a common problem (Marques, 2000). Trampling causes the more fragile species to disappear and, where regeneration is possible, favours the more resilient species. The result is an overall reduction in species diversity and in the incidence of rare plants, which in turn may affect the biomass of the region. It can also degrade the location and may sometimes create localized eutrophication of water.

TOWARDS A SUSTAINABLE RELATIONSHIP BETWEEN TOURISM AND ENVIRONMENT

The evident problems surrounding tourists and the environment have led to the formulation of a range of management responses to the perceived difficulties. This is mirrored both in the development of site-specific management techniques and, more fundamentally, in strategies and approaches aimed at developing sustainable forms of tourism (Green, 2002). A number of tourism management techniques have been widely applied in areas where protection of environments is a key consideration (Shepherd et al., 1997):

1. Spatial zoning.
2. Spatial concentration or dispersal of tourists.
3. Restrictive entry or carrying capacity.

Spatial zoning is an established land management strategy that aims to integrate tourism into environments by defining areas of land with differing suitability or capacity for tourism (Wright, 1998). Zoning may thus be used to exclude tourists from primary conservation areas, or to focus environmentally abrasive activities into locations that have been specially prepared for such events (Wight, 1997). Zoning policies are often complemented by strategies for concentrating tourists into preferred sites or, where sites are under pressure, deflecting visitors to alternative destinations (Paul, 2001). In contrast, where conditions require a wider distribution of tourist activity, devices such as planned scenic drives may draw pressure away from environmental hotspots. Carrying capacity varies with the fragility of the area concerned and the nature of the tourist activity contemplated (Marques, 2000). Carrying capacity is the maximum number of people who can use a site without an unacceptable alteration in the physical environment; it thus prevents environmental, socio-cultural and economic deterioration of the destination (Green, 2008).
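The carrying-capacity idea can be illustrated as a simple admission rule. A hypothetical sketch, not a technique described by the authors; the capacity figure and booking numbers are invented for illustration:

```python
# Illustrative carrying-capacity check for daily visitor admissions
# at an ecotourism site. DAILY_CAPACITY is a made-up figure.
DAILY_CAPACITY = 120  # max visitors absorbed without unacceptable alteration

def admit(requested: int, already_admitted: int,
          capacity: int = DAILY_CAPACITY) -> int:
    """Return how many of the requested visitors can still be admitted."""
    remaining = max(capacity - already_admitted, 0)
    return min(requested, remaining)

print(admit(50, 90))   # only 30 slots remain
print(admit(20, 120))  # site full: 0 admitted
```

In practice the capacity figure itself would come from ecological assessment of the site's fragility, as the text notes.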

CONCLUSION

The results of this field survey suggest the following measures: creating a popular resort in one area in order to relieve pressure on another area of a more sensitive nature; applying dispersion policies at the destination; encouraging green policies; providing urban tourist facilities at the destination to fulfil the needs of tourists; creating environmental awareness by providing training to the host community; and developing private and public sector co-operation, as well as co-operation between hosts and guests at the national and international levels, aimed at developing environmental guidelines and standards for tourism.

References

Green, R. (2002), "The Tour Operator's Dilemma: Keeping the Customer Happy While Not Disturbing the Wildlife", EAA International Ecotourism Conference, Cairns.
Green, R. J. (2008), "Ecological Interactions and Climate Change", National Ecotourism Australia Conference.
Marques, L. C. (2000), "An Evaluation of Ecolodges in the Brazilian Amazon", Cuarta Feria Ecoturistica y de Produccion, 15-23 July 2000, Santo Domingo, Dominican Republic.
Paul, P. (2001), "Getting Inside Gen Y", American Demographics, 23(9), p. 42.
Shepherd, A. and Bowler, C. (1997), "Beyond the Requirements: Improving Public Participation in EIA", Journal of Environmental Planning and Management, 40, pp. 725-738.
Wight, P. (1997), "Sustainability, Profitability and Ecotourism Markets: What Are They and How Do They Relate?", International Conference "Ecotourism: Balancing Sustainability and Profitability", 22-23 September 1997, Pärnu, Estonia.
Wright, P. (1998), "Tools for Sustainability Analysis in Planning and Managing Tourism and Recreation in a Destination", in C. M. Hall and A. A. Lew (eds), Sustainable Tourism: A Geographical Perspective, pp. 75-91, Harlow: Longman.


23

Retail Management in the Growth of Indian Economy

Maulik C. Prajapati, Vipul B. Patel

Retailing can be defined as the buying and selling of goods and services, or as the timely delivery of goods and services demanded by consumers at prices that are competitive and affordable. Retailing, one of the largest sectors in the global economy, is going through a transition phase, not only in India but the world over. With the liberalization and growth of the Indian economy since the early 1990s, the Indian customer has witnessed increasing exposure to new domestic and foreign products through different media, such as television and the Internet. Apart from this, social changes have also had a positive impact, leading to rapid growth in the retailing industry. Increased availability of retail space, rapid urbanization, and qualified manpower have also boosted the growth of the organized retailing sector. In this paper, we introduce the Indian retail industry and the factors essential for its growth. We cover significant types of retail operations - department stores, specialty stores, discount/mass merchandisers, warehouse/wholesale clubs and factory outlets - and discuss the impact of retail management on the growth of the Indian economy.

INTRODUCTION

Retailing can be defined as the buying and selling of goods and services, or as the timely delivery of goods and services demanded by consumers at prices that are competitive and affordable. Retailing involves a direct interface with the customer and the coordination of business activities from end to end - right from the concept or design stage of a product or offering to its delivery and post-delivery service to the customer. The industry has contributed to the economic growth of many countries and is undoubtedly one of the fastest changing and most dynamic industries in the world today. Retail is India's largest industry, accounting for over 10 percent of the country's GDP and around eight percent of employment (Dominic, 2007). Retail in India is at a crossroads. It has emerged as one of the most dynamic and fast-paced industries, with several players entering the market. That said, the heavy initial investments required make break-even hard


to achieve, and many players have not tasted success to date. However, the future is promising: the market is growing, government policies are becoming more favourable, and emerging technologies are facilitating operations. Retailing in India is gradually inching its way to becoming the next boom industry.

The whole concept of shopping has altered in terms of format and consumer buying behaviour, ushering in a revolution in shopping. Modern retail has entered India, as seen in sprawling shopping centres, multi-storied malls and huge complexes that offer shopping, entertainment and food all under one roof. The Indian population is witnessing a significant change in its demographics. A large young working population with a median age of 24 years, nuclear families in urban areas, an increasing working-women population and emerging opportunities in the services sector are going to be the key growth drivers of the organized retail sector.

In India, the retail industry is broadly divided into the organized and unorganized sectors. The total market in 2005 stood at Rs. 10,000 billion, accounting for about 9-10% of the country's gross domestic product (GDP). Of this, the organized sector accounted for Rs. 350 billion (about 3.5%) of total revenues. Traditionally, the retail industry in India comprised large, medium and small grocery stores and drug stores, which could be categorized as unorganized retailing. Most organized retailing in India started only recently and was mainly concentrated in metropolitan cities.

The retailing sector of India can also be split into the informal and the formal segments, the informal sector comprising small retailers. If the industry is divided on the basis of retail formats, it splits into modern-format and traditional-format retailers. The modern-format retailers comprise supermarkets, hypermarkets, departmental stores, specialty chains and company-owned and operated retail stores; the traditional-format retailers comprise kiranas, kiosks, street markets and multiple-brand outlets. The organized retail sector occupies about 3% of the aggregate retail industry in India.

SIZE AND CONTRIBUTION OF THE RETAIL INDUSTRY IN INDIA

In terms of value, the Indian retail industry is worth $300 billion. Its contribution to Gross Domestic Product is about 10%, the highest among all Indian industries. The retail sector also contributes 8% of the country's employment. The organized retail sector is expected to triple in size by 2010, and the food and grocery retail sector to multiply five times in the same time frame. The major reason behind low participation in the Indian retail sector is the lumpy investment required and the difficulty of reaching break-even. Government policies are being revised from time to time to attract investment in this sector. The retail industry in India continued in the form of kiranas till 1980. Soon after, following the modernization of the retail sector, many companies such as Bombay Dyeing and Grasim entered the retail industry in India. As mentioned earlier, the retail sector in India can be broadly split into the organized and the unorganized sector, with the unorganized sector predominant. The unorganized retail sector basically includes the local kiranas, hand carts, pavement vendors, etc.


This sector constitutes about 98% of total retail trade. But Foreign Direct Investment in the retail sector is expected to shrink employment in the unorganized sector and expand it in the organized one. In the organized sector, trading is undertaken by licensed retailers who have registered for sales tax as well as income tax. The organized retail sector encompasses corporate-backed hypermarkets and retail chains; large private business enterprises are also included in this category.

GROWTH OF RETAILING IN INDIA

The Indian retailing industry has seen phenomenal growth in the last five years (2001-2006). Organized retailing has finally emerged from the shadows of unorganized retailing and is contributing significantly to the growth of the Indian retail sector. RNCOS' "India Retail Sector Analysis (2006-2007)" report helps clients analyze the opportunities and the factors critical to the success of the retail industry in India. A sample of this evolving opportunity can be grasped from the following statistics about retailing in India:

- Organized retail will form 10% of total retailing by the end of this decade (2010).
- From 2006 to 2010, the organized sector will grow at a CAGR of around 49.53% per annum.
- Cultural and regional differences in India are the biggest challenges facing retailers; this factor deters retailers in India from adopting a single retail format.
- The hypermarket is emerging as the most favourable format for the time being in India.
- The arrival of multinationals will further push the growth of the hypermarket format, as it is the best way to compete with unorganized retailing in India.

India's top retailers are largely lifestyle, clothing and apparel stores, followed by grocery stores. Following past trends and business models in the West, retail giants such as Pantaloon, Shoppers' Stop and Lifestyle are likely to target metros and small cities, almost doubling their current number of stores. These Wal-Mart wannabes have the economies of scale to be low-to-medium cost retailers pocketing narrow margins.
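A CAGR of 49.53% over 2006-2010 implies roughly a five-fold expansion of the organized sector. A hypothetical back-of-envelope projection (the Rs 350 billion base is the 2005 organized-retail figure cited earlier in this paper, used here only for illustration):

```python
# Hypothetical projection illustrating compound annual growth (CAGR).
base = 350.0   # Rs billion, organized retail in 2005 (cited earlier)
cagr = 0.4953  # 49.53% per annum, from the RNCOS report
years = 4      # 2006 to 2010

projected = base * (1 + cagr) ** years
print(f"Projected size after {years} years: Rs {projected:.0f} billion "
      f"({projected / base:.1f}x the base)")
```

The multiplier of about five is consistent with the expectation, noted in the previous section, that food and grocery retail will multiply five times in the same time frame.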

RETAILING SCENARIO IN INDIA

The retail scenario in India is unique. Much of it is in the unorganized sector, with over 12 million retail outlets of various sizes and formats, almost 96% of them less than 500 sq. ft. in size. Per capita retail space in India is about 2 sq. ft., compared to the US figure of 16 sq. ft.; India's per capita retailing space is thus the lowest in the world. With more than 9 outlets per 1,000 people, India has the largest number of outlets in the world; most of them are independent and contribute as much as 96% to total retail sales. Because of the increasing number of nuclear families, working women, greater work pressure and increased commuting time, convenience has become a priority for Indian consumers. They want everything under one roof for easy access and multiplicity of choice. This offers an excellent opportunity for organized retailers in the country, who account for just 2% (and modern stores 0.5%) of the estimated US$ 180 billion worth of goods retailed in India every year. The growth and development of organized retailing in India is driven by two main factors - lower prices and benefits the consumers can't resist. According to experts,


economies of scale drive down the cost of the supply chain, allowing retailers to offer more benefits to the customer. The retail business in India was worth Rs. 400,000 crore in the year 2000 and was estimated to reach Rs. 800,000 crore by 2005, an annual increase of 20%. The contribution of the organized retail industry was Rs. 20,000 crore in 2000 and was likely to increase to Rs. 160,000 crore by 2005.

Figure 1: Retail Growth Potential

GROWTH OF RETAIL OUTLETS IN INDIA

India is rapidly evolving into a competitive marketplace with potential target consumers in the niche and middle-class segments. Market trends indicate tremendous growth opportunities, and global majors are showing a keen interest in the Indian retail market. Over the years, international brands like Marks & Spencer, Samsonite, Lacoste, McDonald's, Swarovski and Domino's, among a host of others, have come into India through the franchise route following the relaxation of FDI (foreign direct investment) restrictions. Large Indian companies - among them the Tata, Goenka and Piramal groups - are investing heavily in this industry. Organizations ready to take on this challenge can leverage the opportunities offered by a population of more than a billion, and the prospects are very encouraging. Buying behaviour and lifestyles in India are also changing, and the concept of "value for money" is fast catching on in Indian retailing. This is evident from the expansion of the Pantaloon chain into a large value format, Big Bazaar, and the entry of new discount stores in food retailing in the South, namely Subhiksha and Margin Free.

TRENDS IN RETAILING

The single most important evolution that took place alongside the retailing revolution was the rise and fall of the dotcom companies. A sudden concept of 'non-store' shopping emerged, which threatened to take away the potential of the store. More importantly, the very nature of the customer segment being addressed was almost the same: the computer-savvy individual was also a sub-segment of the 'store'-frequenting traffic. Internationally, the concept of net shopping is yet to be proven, and the poor financial performance of most of the companies offering virtual shopping has resulted in store-based


retailing regaining the upper hand. Other forms of non-store shopping, including formats such as catalogue/mail-order shopping, direct selling, and so on, are growing rapidly. However, the size of the direct-marketing industry is too limited to deter retailers. For all the convenience that it offers, electronic retailing does not suit products where 'look and see' attributes are important, as in apparel; where the value is very high, as in jewellery; or where performance has to be tested, as with consumer durables. The most critical issue in electronic retailing, especially in a country such as ours, relates to payments and the various security issues involved.

- Retailing in India is witnessing a huge revamping exercise.
- India is rated the fifth most attractive emerging retail market: a potential goldmine.
- The market is estimated at US$ 200 billion, of which organized retailing (i.e., modern trade) makes up 3 percent, or US$ 6.4 billion.
- As per a report by KPMG, the annual growth of department stores is estimated at 24%.
- India is ranked second in a Global Retail Development Index of 30 developing countries drawn up by AT Kearney.

Figure 2: Retail Sales in India

RETAILING IN INDIA: THE PRESENT SCENARIO

The present value of the Indian retail market is estimated by the India Retail Report to be around Rs. 12,00,000 crore ($270 billion), with an annual growth rate of 5.7 percent. The retail market for food and grocery, worth Rs. 7,43,900 crore, is the largest of the different types of retail industries present in India. Furthermore, around 15 million retail outlets help India win the crown of the highest retail-outlet density in the world. The contribution of the retail sector to GDP is shown below:

Country     Retail Sector's Share in GDP (in %)
India       10
USA         10
China       8
Brazil      6

(Source: CII-AT Kearney Retail Study)


Availability of Retail Stores

Country     Number of Stores per 1,000 People
India       22
Japan       10
USA         3.8
The above table underlines India's exceptionally high density of retail stores.

INDIAN RETAIL: PAST VS PRESENT

It is widely accepted that the retail industry has undergone a drastic change in the last five years, and there is yet more to come. The following table compares the state of Indian retailing in 2004-05 with its status in 2007-08:

Magnification of the Indian Retail Industry

Yardstick                                                     Situation in 04-05        Situation in 07-08
Value of retail sales                                         Rs. 10,20,000 crore       Rs. 12,00,000 crore
Annual growth rate                                            5%                        5.7%
Value of organized market                                     Rs. 35,000 crore          Rs. 55,000 crore
Share of organized market in the sector                       3.4%                      4.6%
Forecast (after 5 years) of organized retail market size      Over Rs. 1,00,000 crore   Rs. 2,00,000 crore
Forecast growth rate of organized retail market               Around 30%                Around 40%
Source: Economy Watch Team, www.economywatch.com, 2008
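The "share of organized market" rows are consistent with the value rows. A quick arithmetic check (ours, not from the source):

```python
# Consistency check of the table above: organized-market value divided by
# total retail sales should reproduce the reported shares.
data = {
    "2004-05": (35_000, 10_20_000),   # (organized, total) in Rs crore
    "2007-08": (55_000, 12_00_000),
}
for year, (organized, total) in data.items():
    share = 100 * organized / total
    print(f"{year}: organized share = {share:.1f}%")
```

Both computed shares (3.4% and 4.6%) match the figures quoted in the table.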

The above table clearly shows that the retail market, as well as the mindset required for it, has experienced a thorough revision in the last three years. This is just the beginning, and Indians are sanguine that the sector will see rosy days in the future. This confidence has helped India acquire the No. 1 position among the 30 most attractive retailing destinations in the world according to the Global Retail Development Index of 2008 (AT Kearney, India, 2008). Among emerging markets, India holds the second position after China in the list of most favoured retail destinations. The retail industry employs a huge share of the total workforce in India; it is the second largest employer after agriculture. Presently 7 percent of the total labour force is employed in the retail sector. According to available data it is also the largest employer in the services sector, and maximum growth in the non-agricultural sector has been witnessed by retail trade. According to market analysts, 300 new malls, 1,500 supermarkets and 325 departmental stores are going to come up in India in the next few years. The shopping revolution that has led to this retail boom is going to continue, and this is good news for the government as well as for those who wish to work in the organized sector.

CONTRIBUTION OF FDI IN RETAILING

Permitting Foreign Direct Investment in the retailing sector can have immense benefits. It can generate huge employment for the semi-skilled as well as illiterate population, who otherwise cannot be employed in the already confined rural and organized sectors. The retail sector is highly dependent on the rural sector; it can thus facilitate improvement of the standard


of living of farmers by purchasing commodities at a reasonable cost. It also opens an indirect employment-generation channel by training and employing people in the transportation and distribution sectors, such as drivers and mechanics. It is also evident that real estate is a genuine challenge for organized retailing; traditional retailers can use this situation in their favour by taking franchises of the mega players of this industry. On the other hand, the consumer gains from the wide variety of choices and a more diversified basket of prices available under one roof. Secondly, indirect benefits like better roads, online marketing and expansion of the telecom sector will give a 'big push' to other sectors, including the rural one itself. Last but not least, the huge tax revenue generated from these large retailers and collected in government coffers will gradually wipe out the fiscal and revenue deficits. Besides, transactions in foreign currency by these MNCs will create a balance in the exchange rate and bring stable funds into the economy, as opposed to FIIs' hot money. This will in turn act as a boost to the developing (or 'transforming', as suggested by USAID) economy of India. The phobias relating to FDI in the retail sector are unfounded, as the retailing sector in India is neither an infant industry nor one in which foreign entrants can outweigh paramount local tastes and preferences.

SNAPSHOT OF INDIAN RETAILING

- The retail sector in India can be split into two parts, the organized and the unorganized. The organized sector, whose size is expected to triple by 2010, can be further split into departmental stores, supermarkets, shopping malls, etc.
- In terms of value, the size of the retail sector in India is $300 billion. The organized sector contributes about 4.6% of the total trade.
- The retail sector in India contributes 10% to the Gross Domestic Product and 8% to the employment of the country.
- In terms of growth, FMCG retail is the fastest growing unit, while retail relating to household care, confectionery, etc. has lagged behind.
- Foreign retail giants were initially restricted from making investments in India.

CONCLUSION

Indian retailing is still mostly carried out in the unorganized sector, though this is changing fast. This paper has traced the evolution and development of retailing in India and touched upon issues such as FDI. In a nutshell, the retail industry in India appears to have bright future prospects and is expected to enrich the Indian economy in terms of income and employment generation.

References

AT Kearney, India (2008), "Emerging Opportunity for Global Retailers: Global Retail Development Index", www.atkearney.pt, downloaded February 2008.
Dominic, K. (2007), "Indian Retail: An Overview", Networkmegazineindia.com, downloaded 12 February 2008.
Economy Watch Team (2008), "Indian Retail Industry and Indian Economy", www.economywatch.com, downloaded February 2008.

24

CRM to e-CRM: Promises and Pitfalls

Seema Mehta, Tarika Singh

Electronic Customer Relationship Management (e-CRM) has become the latest paradigm in the world of Customer Relationship Management. e-CRM is becoming more and more necessary as businesses take to the web. No longer can companies rely on the traditional brick-and-mortar strategies that have taken them to where they are today; organizations have to evolve with the market instead of behind it. This paper explores CRM and the marketing opportunities e-CRM provides, i.e., enhanced customer-interaction opportunities to develop stronger relationships with customers through personalized options, which in turn lead to customer satisfaction and customer loyalty, potential sources of competitive advantage for companies. The discussion outlines the focal points to address prior to implementation, the opportunities of e-CRM, and potential pitfalls during implementation, and includes insights on recent trends. It also acknowledges the formidable challenges which e-CRM adoption and implementation pose for companies in the areas of customer relationships and data integration. The paper closes with directions for future research.

CUSTOMER RELATIONSHIP MANAGEMENT

Customer Relationship Management (CRM) has attracted expanded attention from practitioners and scholars. More and more companies are adopting customer-centric strategies, programs, tools, and technology for efficient and effective customer relationship management. They are realizing the need for in-depth and integrated customer knowledge in order to build close cooperative and partnering relationships with their customers. The emergence of new channels and technologies is significantly altering how companies interface with their customers, a development bringing about a greater degree of integration between the marketing, sales, and customer service functions in organizations. For practitioners, CRM represents an enterprise approach to developing full knowledge of customer behavior. In the marketing literature the terms customer relationship management and relationship marketing are used interchangeably. As Nevin (1995) points out, these terms have been used to reflect a variety of themes and perspectives. Some of these themes offer a narrow functional marketing perspective while others offer a perspective that is broad and somewhat


paradigmatic in approach and orientation. A large number of organisations consider CRM as a tool for seeking customer retention by using a variety of after marketing tactics that lead to customer bonding or staying in touch with the customer after a sale is made (Vavra, 1992). A more popular approach with the recent application of information technology is to focus on individual or one-to-one relationships with customers that integrate database knowledge with a long-term customer retention and growth strategy (Peppers & Rogers, 1993). Shani and Chalasani (1992) have defined relationship marketing as “an integrated effort to identify, maintain, and build up a network with individual consumers and to continuously strengthen the network for the mutual benefit of both sides, through interactive, individualized and value added contacts over a long period of time”. Jackson (1985) applies the individual account concept in industrial markets to suggest CRM to mean, “individual accounts” In other business contexts, Doyle and Roth (1992), O’Neal (1989), and Paul (1988) have proposed similar views of customer relationship management. A narrow perspective of customer relationship management is database marketing emphasizing the promotional aspects of marketing linked to database efforts (Bickert, 1992). CRM and relationship marketing are not distinguished from each other in the marketing literature (Parvatiyar & Sheth, 2000). For example, Gronroos (1990) states: “Marketing is to establish, maintain, and enhance relationships with customers and other partners, at a profit, so that the objectives of the parties involved are met. This is achieved by a mutual exchange and fulfillment of promises”. The implication of Gronroos’ definition is that forming relationships with customers is the “raison de etre” of the firm and marketing should be devoted to building and enhancing such relationships. 
Similarly, Morgan and Hunt (1994) draw upon the distinction made between transactional exchanges and relational exchanges by Dwyer, Schurr, and Oh (1987) to suggest that relationship marketing "refers to all marketing activities directed toward establishing, developing, and maintaining successful relationships." The core theme of all CRM and relationship marketing perspectives is their focus on a cooperative and collaborative relationship between the firm, its customers, and other marketing actors. This paper defines CRM operationally as a comprehensive strategy and process of acquiring, retaining, and partnering with selective customers to create superior value for the company and the customer. The de-intermediation process, and the consequent prevalence of CRM, is also due to the growth of the service economy. Since services are typically produced and delivered by the same institution, the role of middlemen is minimized. An emotional bond also develops between the service provider and the service user, creating the need for maintaining and enhancing the relationship. It is therefore not difficult to see that CRM is important for scholars and practitioners of services marketing (Berry & Parasuraman, 1991; Bitner, 1995; Crosby & Stephens, 1987; Crosby, Evans, & Cowles, 1990; Gronroos, 1995). For successful implementation of CRM, full integration of people, process, and technology is required.

INFORMATION TECHNOLOGY AND CRM

In recent years, however, several factors have contributed to the rapid development and evolution of CRM. These include the growing de-intermediation process in many industries, due to the advent of sophisticated computer and telecommunication technologies that allow

CRM to e-CRM: Promises and Pitfalls


producers to interact directly with end-customers. CRM is carried out across the customer lifecycle with the help of information technology (Day and Van den Bulte 2002; Reinartz, Krafft and Hoyer 2004). Firms that build relationships over time should interact with customers and manage relationships differently at each phase (Srivastava, Shervani and Fahey 1998). CRM thus focuses on the systematic, lifecycle-congruent management of specific activities to develop customer relationships, and it has to deal with the identification and management of hundreds of thousands of profitable customers. An important aspect of CRM is its association with technology to handle such large customer bases properly. A major part of CRM implementation is the development of a set of capabilities related to CRM technology (Croteau and Li 2003) for gaining customer understanding (Bose and Sugumaran 2003; Zahay and Griffin 2004). This incorporates the acquisition, storage, dissemination, and usage of customer information (Sinkula, Baker and Noordewier 1997; Slater and Narver 1995) for the purpose of initiating, maintaining, and retaining better customer relationships. Empirical results regarding the performance impact of technology at the firm level are mostly positive (e.g. Hitt and Brynjolfsson 1996; Menon, Lee and Eldenburg 2000). Less attention has been paid to the degree of usage of information technology in the context of CRM (Jayachandran et al. 2005), even though technology usage is regarded as a key driver of organizational success (Devaraj and Kohli 2003; Mahmood, Hall and Swanberg 2001). Studies have mostly focused on the acceptance of technology in CRM, as given by the intention to use it (Avlonitis and Panagopoulos 2005). In many industries, such as airlines, banking, insurance, computer software, household appliances, and even consumables, the de-intermediation process is fast changing the nature of marketing, thereby making relationship marketing more important.
The recent success of on-line banking, on-line investment programs, and the direct selling of books, automobiles, insurance, etc., on the Internet all attest to growing consumer interest in maintaining a direct relationship with marketers. The Internet plays a key role in building up and sustaining customer relations. Thus a new term, electronic customer relationship management (eCRM), was born. eCRM extends existing CRM through an electronic component and uses the possibilities of the new information and communication technologies. In this context the task of marketing and IT specialists is to search for new eCRM applications: whereas marketing experts know the customer needs, IT experts understand the potential IT provides. Several software tools and technologies claiming solutions for various aspects of CRM have recently been introduced for commercial applications. The majority of these tools promise to individualize and personalize relationships with customers by providing vital information at every point in the interface with the customer. Several benefits can be derived from IT as a facilitator of CRM activities. IT plays an important role in tracking consumer behavior to gain insights into customer tastes and evolving needs. By organizing and using this information, firms can design and develop better products and services (Davenport, Harris, and Kohli 2001; Nambisan 2002). For example, Davenport and Klahr (1998) argue that customer knowledge may be derived from multiple electronic sources and media. Glazer (1991) provides examples of how FedEx and American Airlines used their investments in IT systems at the customer interface to gain valuable customer knowledge. In addition, firms with a greater deployment of CRM applications are likely to be more familiar with the data management issues involved in initiating, maintaining, and terminating a customer relationship.


This familiarity gives firms a competitive advantage in leveraging their collection of customer data to customize offerings and respond to customer needs. For example, Piccoli and Applegate (2003) discuss how Wyndham uses IT tools to deliver a consistent service experience to a customer across its various properties. Recently, however, skepticism has replaced the initial enthusiasm about CRM, because the success rate of CRM projects is low, varying from 30% to 70%. Not surprisingly, organizations are often disappointed with the implementation results of their CRM programs. The failure of many CRM projects can, however, also be blamed on the difficulties that managers encounter in embedding CRM in their strategy and organization. To date, the body of research on CRM has ignored these strategic implementation issues. The thinking reflected in CRM is based on three aspects of marketing management, viz. customer orientation, relationship marketing, and database marketing, which join in CRM due to developments in information and communication technology. CRM practices may be divided into two broad categories for implementation purposes: (1) programs that aim to build intimate relationships with customers, and (2) practices that use data-mining techniques to improve targeting, cross-selling, and market research. The first CRM practice focuses on satisfying customers and fulfilling their needs, with the expectation that the resulting increase in customer loyalty will enhance profits. The second focuses on a more efficient and effective use of marketing tools and is sometimes referred to as 'cost reduction management'. An example is more efficient targeting of direct mailings, which can lead to substantial reductions in marketing costs. The distinction between these two CRM practices can be very useful in explaining the success or failure of CRM projects.
However, for a better understanding of the implementation of CRM one also needs to consider how CRM is strategically embedded within the organization.

CRM AND ECRM

CRM has changed over the years from being a customer service business unit loosely linked to marketing to an electronic dynamo that attempts to maximize the value of existing customer relationships. Dyche (2002) in her CRM Handbook suggests that CRM is the infrastructure that enables the delineation of, and increase in, customer value, and the correct means by which to motivate valuable customers to remain loyal and to buy again. The key words are infrastructure and enables. The infrastructure is the people and processes that an organization has at its disposal to understand, motivate, and attract its customers. It is the technology that enables the organization to improve customer service, differentiate customers, and deliver unique customer interactions. For one company, Wal-Mart (Swift, 2001), the infrastructure is an enterprise data warehouse. This e-CRM system enables the company to collect massive amounts of data to manage the ever-changing needs of customers and the marketplace. Coupled with people and processes, it permits the integration of operational data with analytics, modelling, historical data, and predictive knowledge management to provide customers with what they need at the right time. CRM and e-CRM are about capturing and keeping customers through the Internet in real time (Greenberg, 2002). CRM is about customers interacting with employees, employees collaborating with suppliers, and every interaction being an opportunity to maintain and improve a relationship.


In the following table, traditional CRM and e-CRM are compared in the fields of media, market, time, information type, customer behavior, focus, service time, and feature.

Comparison between Traditional CRM and e-CRM

|                   | Traditional CRM                           | e-CRM                                    |
|-------------------|-------------------------------------------|------------------------------------------|
| Media             | Telephone, fax, mail                      | The Internet                             |
| Market            | General/targeted individuals (high costs) | General/targeted individuals (low costs) |
| Time              | Limited service hours (company hours)     | Unlimited service hours (customer hours) |
| Information type  | Information giving                        | Information seeking                      |
| Customer behavior | Passive, less interaction                 | Active, more interaction                 |
| Focus             | Product-centric                           | Customer-centric                         |
| Service time      | Slow (usually by sales representative)    | Almost immediate                         |
| Feature           | Mass customization                        | Personalization                          |

ECRM

Customer relationship management (CRM) is about identifying a company's best customers and maximizing the value from them by satisfying and retaining them. As a business philosophy, CRM is firmly rooted in the concept of relationship marketing, which aims to improve long-run profitability by shifting from transaction-based marketing to customer retention through effective management of customer relationships (Christopher et al., 1991). Recently it has been acknowledged that company relationships with customers can be greatly improved by employing information technology (Karimi et al., 2001; Ryals and Payne, 2001), which can facilitate and enhance customer relationships in various ways but mainly enables companies to attain customization, the essence of a customer-centric organization (Stefanou et al., 2003). In this context CRM has emerged as the ideal vehicle for implementing relationship marketing within companies, with some practitioners suggesting that CRM provides a platform for the operational manifestation of relationship marketing (Plakoyiannaki and Tzokas, 2002). For many organizations the most obvious way to implement CRM is through software applications in the form of electronic customer relationship management (eCRM) technology. This type of CRM software provides the functionality that enables a firm to make the customer the focal point of all organizational decisions (Nemati et al., 2003). Innovations in such technology and the Internet are just some of several factors that now make relationships through one-to-one initiatives a reality (Chen and Popovich). The Internet has allowed new patterns of intermediation to emerge, allowing firms to adopt CRM to focus on effective customer relationship management as well as harnessing on-line technologies to facilitate customer-supplier relationships (Wright et al., 2002).
This promotes the value of eCRM by highlighting the opportunities created for companies and the net benefits they have realized in practice, such as enhanced customer interactions and relationships, possibilities for personalization, and the creation of a competitive advantage in the marketplace. eCRM describes the broad range of technologies


used to support a company's CRM strategy. It can be seen to arise from the consolidation of traditional CRM with e-business applications (Bradway and Purchia, 2000) and has created a flurry of activity among companies. eCRM is the proverbial double-edged sword, presenting both opportunities and challenges for companies considering its adoption and implementation. eCRM is sometimes referred to as web-enabled or web-based CRM, and from this view eCRM has been defined by Forrester Research as 'a web-centric approach to synchronizing customer relationships across communication channels, business functions and audiences'. Lee-Kelley et al. (2003) highlight the relative lack of literature in this domain and suggest as a working definition that eCRM refers to 'the marketing activities, tools and techniques delivered via the Internet (which includes email, the world wide web, chat rooms, e-forums, etc.) with a specific aim to locate, build and improve long-term customer relationships to enhance their individual potential'. Typically, electronic and interactive media such as the Internet and email are seen as playing the most significant role in operationalising CRM, as they support the exchange of effective, customized information between the organization and customers. However, eCRM can also include other e-technologies and new e-channels, including mobile telephony, customer call and contact centers, and voice response systems. The use of these technologies and channels means that companies are managing customer interactions with either no human contact at all, or with reduced levels of human intermediation on the supplier side. The emergence of mobile commerce has led to the introduction of new products, new ways of selling products to customers, and new learning curves for companies in terms of how to manage interactions with customers (Wright et al., 2002).
For example, financial organizations are now beginning to take advantage of mobile marketing services, and in particular mobile banking based on wireless application protocol (WAP) technology, as a powerful new marketing tool to build long-lasting and mutually rewarding relationships with new and existing customers (Rilvari, 2005). Most major banks are using mobile CRM in some form, as a new channel for customer acquisition and to project a new image for the company. Mobile operators such as Vodafone and health care providers such as VHI have also used SMS text messaging to enhance customer relationships. Mobile channels, especially SMS, are seen as immediate, automated, reliable, personal, and customized options providing an efficient way to reach customers directly and to manage customer relationships (Sinisalo et al., 2005). Alongside SMS and WAP functions, multimedia messaging is also available for banking transactions for the first time in the world; the service is initially available to the bank's on-line banking customers (Rilvari, 2005). Other sectors exploring mobile eCRM include retailing. This implies that eCRM using mobile marketing may indeed offer an effective way to reach, and build relationships with, demanding customers in rapidly changing markets (Sinisalo et al., 2005). Another e-technology offering companies opportunities for managing customer interactions is voice response systems. Since the on-line world and e-technologies have become such an integral part of day-to-day business, and as they appeal to such a mass global universe of consumers, businesses are constantly searching for innovative yet cost-effective ways to reach remote customers, moving eCRM from a 'nice to have' to a 'must have' methodology. Recent developments in the field of eCRM include CRM package evaluation/procurement services, hosting of CRM component applications, and the use of On-line Analytical Processing (OLAP) tools to develop customer intelligence in order


to enhance the effectiveness of eCRM. Full integration of people, process, and technology is essential for eCRM.

[Figure: eCRM as the integration of People, Process, and Technology]

THE SIX E'S

The "e" in e-CRM stands not only for "electronic" but can also be perceived to have many other connotations. Though the core of e-CRM remains cross-channel integration and organization, the six "e"s can be used to frame alternative views of e-CRM based upon the channels it utilizes. The six "e"s of e-CRM are briefly explained as follows:

1. Electronic channels: New electronic channels such as the web and personalized e-messaging have become the medium for fast, interactive, and economical communication, challenging companies to keep pace with this increased velocity. e-CRM thrives on these electronic channels.

2. Enterprise: Through e-CRM a company gains the means to touch and shape a customer's experience through sales, service, and the corner offices, whose occupants need to understand and assess customer behavior.

3. Empowerment: e-CRM strategies must be structured to accommodate consumers who now have the power to decide when and how to communicate with the company, through which channel, and at what frequency. An e-CRM solution must be structured to deliver timely, pertinent, valuable information that a consumer accepts in exchange for his or her attention.

4. Economics: An e-CRM strategy should ideally concentrate on customer economics, which drives smart asset-allocation decisions, directing efforts at the individuals likely to provide the greatest return on customer-communication initiatives.

5. Evaluation: Understanding customer economics relies on a company's ability to attribute customer behavior to marketing programs, evaluate customer interactions along the various customer touch-point channels, and compare anticipated ROI against actual returns through customer analytic reporting.
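The "Evaluation" point above can be made concrete with a small sketch. The program names, costs, and returns below are invented for illustration; the point is only the mechanics of comparing anticipated ROI against actual returns per customer-communication program.

```python
# Illustrative sketch: comparing anticipated ROI against actual returns
# for customer-communication programs. All names and figures are hypothetical.

def roi(gain: float, cost: float) -> float:
    """Return on investment as a fraction of the program cost."""
    return (gain - cost) / cost

programs = [
    # (program, cost, anticipated gain, actual gain)
    ("email_campaign",  10_000, 14_000, 12_500),
    ("loyalty_mailing", 25_000, 30_000, 33_000),
]

for name, cost, anticipated, actual in programs:
    gap = roi(actual, cost) - roi(anticipated, cost)
    print(f"{name}: anticipated ROI {roi(anticipated, cost):+.0%}, "
          f"actual ROI {roi(actual, cost):+.0%}, gap {gap:+.0%}")
```

Reporting the gap per program, rather than a single aggregate figure, is what lets the company redirect spend toward the initiatives that actually outperform their forecasts.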


IMPLEMENTING ECRM

The times are gone when organizations could expect positive competitive and differentiation effects from a simple company presentation on the internet. Web presences are measured more and more by their realized degree of personalization and the efficiency of their information and services. Appropriate eCRM systems are available, but the task of completely implementing and integrating them is a technical and organizational challenge. From a technology perspective, an eCRM system represents a mass of seams that need to be tightly stitched together: in essence, a mass of integration. No single software application is able to fill the gap, nor is the gap likely to be filled internally. To implement eCRM, companies will need a variety of hardware/software applications and tools (Anon, 2001). This suggests significant resource and cost implications, which companies must incorporate into their overall strategic planning. An eCRM system is also highly dependent on neighboring systems to be effective; for example, traditional 'front office' CRM has to be consistent with an eCRM system at both the data and process levels. This reinforces the need for companies to have well-developed business processes and information and technology infrastructures on which to build and sustain eCRM competences. Integrating data from multiple sources, both on-line and off-line channels, is a critical issue in facilitating successful and valuable eCRM analytics (Nemati et al., 2003), but it will initially represent a challenge for even the most progressive of companies. Nemati et al. (2003) suggest that although on-line, off-line, and external data integration has its complexities, the value added is significant. Their research suggests that companies integrating data from various customer touch points achieve significant benefits, with higher user satisfaction and ROI rates than companies that do not deal with the data integration challenge.
Bradway and Purchia (2000) see the ability to effectively integrate eCRM with other CRM initiatives, which requires a great deal of technology integration in addition to process integration, as a core capability that will distinguish best eCRM practice among companies. Such integration represents a formidable challenge, but if successfully managed it can generate a source of competitive advantage in the marketplace. Chen and Chen (2004) have explored the key success factors of eCRM strategies in practice and suggest that system integration dimensions, consisting of sub-factors such as functional integration, marketing integration, supply chain integration, data integration, system compatibility, experience comparability to offline CRM, and integration with other CRM channels, were critical factors for companies. The IT function is an essential enabler of business development within an organization. Marketing users often focus on the front end of applications and assess the functionality of the eCRM system with limited understanding of data and web integration issues, while the IT function tends to assess its technical quality. Successful eCRM strategies necessitate improved levels of integration between business functions to successfully harness the opportunities available. Chen and Chen (2004) reinforce this in their research and highlight the need for a holistic view of business models, system architecture, and the integration of business and IT strategies. An organization's success in eCRM will involve creatively using appropriate analytical techniques to exploit the data (Gurau et al., 2003).
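The data-integration step discussed above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the customer_id key, the channel sources, and the field names are all assumptions made for the example.

```python
# Minimal sketch of touch-point data integration: folding on-line and
# off-line channel records into a single view per customer.
# All records, keys, and field names are hypothetical.

from collections import defaultdict

online = [
    {"customer_id": "C001", "channel": "web",   "purchases": 2},
    {"customer_id": "C002", "channel": "email", "purchases": 1},
]
offline = [
    {"customer_id": "C001", "channel": "branch", "purchases": 1},
]

def integrate(*sources):
    """Build one profile per customer across all channel sources."""
    profiles = defaultdict(lambda: {"channels": set(), "purchases": 0})
    for source in sources:
        for record in source:
            profile = profiles[record["customer_id"]]
            profile["channels"].add(record["channel"])
            profile["purchases"] += record["purchases"]
    return dict(profiles)

profiles = integrate(online, offline)
# profiles["C001"] now combines the web and branch activity of that customer
```

In a real deployment each "source" would be a feed from a data warehouse or channel system and the merge would have to handle identity resolution, but the shape of the problem is the same: many channel records in, one customer view out.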


PRE-IMPLEMENTATION CONSIDERATIONS

Once a company has identified the need for eCRM, it can begin to plan for implementation. The following focal points should be considered at the pre-implementation phase (Greenberg, 2001).

Developing customer-focused business strategies: The objective of this step is not to mold the customer to the company's goals but to listen to the customer and try to create opportunities beneficial to both. It is important to offer customers what they are currently demanding and to anticipate what they are likely to demand in the future. This can be achieved by providing a variety of existing access channels for customers, such as e-mail, telephone, and fax, and by preparing to provide for future access channels such as wireless communication.

Retooling business functions: Starting to do business via eCRM will require disruptive organizational change in order to determine which departments/functions are truly servicing the customer and which ones are only adding to overhead. A major factor here is that some changes will always be required during an eCRM implementation.

Work process re-engineering: The departmental role and responsibility changes from retooling business functions will necessitate adopting new work processes. The choices here are to take the traditional step-wise approach or an integrated one toward improving work efficiency.

Technology choices: The focus here is to consider the company's industry, the company's position within its industry, and which eCRM implementations are good candidates for the company in particular. Criteria for technology selection include: scalability of software; tool-set flexibility for customization; stability of the existing eCRM application code; compatibility of the eCRM application with legacy and Internet systems; level of technical support available during and after implementation; upgrade support; availability of additional modules (Sims, 2001); and security.

Training and preparation: This is arguably the most important step in eCRM implementation. Depending on the number of users, training times will vary from company to company.

A Structural Process for Implementing e-CRM: Based on the review of the literature, the steps for implementing e-CRM include:

- The strategy development process: The business should define its e-CRM goals keeping the core organizational objectives in mind.

- Identifying and capturing the data: The different channels, viz. mobile, contact center, automated interaction, direct/branch interaction, mail/fax, and other touch points, enable the company to gather customer data.

- The integration process: Integration should be carried out in such a manner that the customer is presented with a consistent view of the company irrespective of the channel through which the interaction takes place.

- The personalization effect: The information obtained about the customer should enable the company to focus on providing maximum value to customers, and to concentrate on profitable customers by segmenting the customer base and ensuring need fulfillment.

- Integration with core enterprise operations: Integration with other core business activities, viz. supply chain management, human resource management, and financial management, is required to answer customer queries and to track prospects.

- Performance evaluation: The effectiveness of the e-CRM system should be measured with reference to development expenses and productivity.
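The capture, integration, and personalization steps above can be sketched as a minimal pipeline. Every function and record below is a hypothetical stub standing in for a real system component.

```python
# Hypothetical sketch of the e-CRM steps: capture channel data,
# integrate it into one view per customer, then segment for personalization.

def capture(channels):
    """Collect raw interaction records from each channel feed."""
    return [record for channel in channels for record in channel]

def integrate(records):
    """Group records by customer so each customer has one consistent view."""
    view = {}
    for record in records:
        view.setdefault(record["customer_id"], []).append(record)
    return view

def personalize(view, min_interactions=2):
    """Segment out customers with enough history to target individually."""
    return {cid: recs for cid, recs in view.items()
            if len(recs) >= min_interactions}

mobile = [{"customer_id": "C001", "event": "sms_reply"}]
branch = [{"customer_id": "C001", "event": "visit"},
          {"customer_id": "C002", "event": "enquiry"}]

segment = personalize(integrate(capture([mobile, branch])))
# Only C001, with interactions on two channels, enters the segment
```

The design point is the ordering: segmentation only becomes meaningful after the integration step, because a customer's activity is otherwise scattered across channels.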

THE CHALLENGES

[Figure omitted. Source: IBM]

To create a flexible, world-class, customer-friendly eCRM system, the organization must deliver the necessary feedback and reports for corporate management to define and adapt its resources and customer programs to maximize revenue and profit. The challenges that a business may face when deploying an eCRM solution are:

The Business Challenges: Service is perhaps the last remaining way for a business to effectively differentiate itself. Effective service involves managing each customer interaction to ensure a consistent experience and an outcome that is in line with each individual customer's wants, needs, and expectations, as well as with the economics the business desires from its relationship with that customer. Organizations are looking for customer management solutions that provide the ability to:

- Identify the unique situation of a customer;
- Prescribe rules on how to treat and influence that situation;
- Execute those rules consistently across all contact channels; and
- Measure the effectiveness of the program on that unique situation.

The Technology Challenges: Truly effective e-CRM solutions involve complex architectures, time-consuming product selection and acquisition, and integration challenges with existing and future investments. For example, today's multi-channel solution involves email routing, web chat, web collaboration, web personalization, intelligent call routing, and contact management, to name just a few. Products providing these capabilities take significant research and skill to integrate, implement, maintain, and continuously upgrade, considering the software, servers, databases, hardware, and telephone switches involved. Moreover, with new technology being developed every day, it is very difficult to remain on the leading edge while continuously researching how new technologies will complement and coexist with an existing architecture.

The Operational Challenges: Businesses typically focus most of their energy and investment on the management resources needed to develop the required marketing and customer influence programs, and on the staff resources to execute those programs. For competitive advantage, both of these investments should typically remain in-house and strategic to the business. The retention of key staff resources to communicate with the customer base is a vital issue. A business has to consider many things as it enters new markets, grows in an existing market, and attempts to retain the key personnel who interact with customers. It needs to consider how its resources can effectively access an eCRM solution, and any impact the eCRM solution may have on the retention and physical organization of those service resources. However, there is an important additional investment required: the people skills to manage, upgrade, and enhance the eCRM solution once it has been developed. A multi-channel eCRM solution involves many players, including support skills for the products involved and outside guidance and direction from integrators, product companies, and management consultants.

The Financial Challenges: Financially, the cost of "turning the lights on" for a required eCRM solution is daunting. One way to get your arms around the extent of the investment is to consider the different investment categories. The categories of investment required to create, operate, and maintain eCRM solutions are:

1. Production hardware and software;
2. Labor expense: operating staff expense to support, change, and upgrade the business use of the technology solution;
3. Test and staging hardware and software;
4. Maintenance expense: production and test environments;
5. IT research and development expense: to assess new enabling technologies in customer service management and their impact on current or planned business requirements.

A powerful way to visualize the total cost of an eCRM solution is to consider the annual cost for a company against the number of key users of the solution.
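The per-user cost view suggested here amounts to simple arithmetic over the five investment categories. All figures below are invented for the example.

```python
# Hypothetical illustration: total annual eCRM cost across the five
# investment categories, divided by the number of key users.
# Every figure is invented for the sake of the example.

annual_costs = {
    "production_hw_sw": 400_000,
    "labor":            250_000,
    "test_and_staging":  80_000,
    "maintenance":      120_000,
    "it_r_and_d":        50_000,
}

key_users = 300

total = sum(annual_costs.values())
cost_per_user = total / key_users
print(f"Annual eCRM cost per key user: ${cost_per_user:,.0f}")
```

Tracking this single per-user figure over time makes it easy to see whether the solution is scaling economically as the number of key users grows.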

The Time Challenges: Businesses expect to get results from, and measurement of, their eCRM investments. The business results are directly related to the leverage an eCRM solution gives them: the information and data about customer behavior, and about the impact their resources and customer programs are having, across all key contact channels. Multi-channel eCRM solutions can take 12 to 15 months or even longer to deploy. The market must address this and provide solutions and techniques to deploy the required eCRM architecture, and its subsequent enhancements, in a way that delivers measurement and business benefit sooner.


OPPORTUNITIES

All organizations need to educate themselves about the new phenomenon of electronic customer relationship management (e-CRM). According to Romano (2001), e-CRM is concerned with attracting and keeping economically valuable customers and eliminating less profitable ones. In addition, Keen (in Greenberg, 2001) predicts that the share of business-to-business marketplaces using e-CRM solutions will grow from 3% in 1999 to more than 50% in 2004. Investing in e-CRM solutions will give companies the tools they need to create, maintain, and extend competitive advantage in their market spaces. A recent McKinsey & Co. study revealed that a 10% gain in repeat customers could add about 10% to the company's profits (Sims, 2000). On the other hand, a 10% reduction in the total marketing expenditure needed to attract new visitors adds only 0.7% to the bottom line (Sims, 2000). In essence, keeping existing customers happy is more profitable than going after greater numbers of new customers, even when a company is able to pare down the cost of attracting those new potential customers. The best way to keep existing customers happy is to deliver value to them on their own terms (Jutla et al., 2000). In a recent study (Sims, 2000), Andersen Consulting found that a typical $1 billion high-tech company can gain as much as $130 million in profits by improving its ability to manage customer relationships. Andersen Consulting also found that as much as 64% of the difference in return on sales between average and high-performing companies is attributable to e-CRM performance. Such evidence indicates that a well-planned implementation of an e-CRM system produces a winning situation for customers and companies alike. Improvements in the overall customer experience lead to greater customer satisfaction, which in turn has a positive effect on the company's profitability.
Objectives such as increased customer loyalty, more effective marketing, improved customer service and support, and greater efficiency and cost reduction can be achieved with a proper e-CRM implementation. The customer-facing edge of the firm is the site of increasingly fierce technological and organizational innovation as firms scramble to implement electronic customer relationship management (e-CRM) systems. According to recent surveys, executives regard "improvement of customer service and support" and "gaining access to new customers" as among the primary anticipated benefits of e-business enablement (AMA, 2000), and demand for e-CRM applications is growing quickly. The establishment of an effective interactive customer-facing interface is a necessary condition for attaining "sense and respond" (Haeckel, 1999) or adaptive capability. CRM is presently transitioning from stand-alone marketing, service and sales operations supported by proprietary hardware platforms and software applications to integrated, multimedia, multi-channel operations supported by standard hardware platforms and packaged software solutions.

THE E-CRM BENEFITS

Increased customer loyalty: An effective e-CRM system lets a company communicate with its customers in a single, consistent voice, regardless of the communication channel. This is because, with e-CRM software, everyone in an organization has access to the same transaction history and information about the customer. Information captured by an e-CRM system helps a company to identify the actual costs of winning and retaining individual customers. Having this data allows the firm to focus its time and resources on its most profitable customers (epiphany.com, 2001a). Classifying one's "best" customers in this way allows an organization to manage them more efficiently as a premium group, with the understanding that it is neither necessary nor advisable to treat every customer in exactly the same way.

More effective marketing: Having detailed customer information from an e-CRM system allows a company to predict the kind of products a customer is likely to buy as well as the timing of purchases. In the short to medium term, this information helps an organization create more effective and focused marketing/sales campaigns designed to attract the desired customer audience (epiphany.com, 2001a). e-CRM allows for more targeted campaigns and tracking of campaign effectiveness. Customer data can be analyzed from multiple perspectives to discover which elements of a marketing campaign had the greatest impact on sales and profitability (Greenberg, 2001). In addition, customer segmentation can improve marketing efforts (Rong, 2001). Grouping customers according to their need similarities allows a company to effectively market specific products to members of the targeted groups.

Improved customer service and support: An e-CRM system provides a single repository of customer information. This enables a company to serve customer needs quickly and efficiently at all potential contact points, eliminating the customer's frustrating and time-consuming "hunt" for help (epiphany.com, 2001a). e-CRM-enabling technologies include search engines, live help, e-mail management, news feeds/content management and multi-language support. With an e-CRM system in place, a company can more accurately:

• Receive, update and close orders remotely
• Log materials, expenses and time associated with service orders
• View customer service agreements
• Search for proven solutions and best practices
• Subscribe to product-related information and software patches
• Access knowledge tools useful in completing service orders (peoplesoft.com, 2001)

Greater efficiency and cost reduction: Data mining, which is the analysis of data for exploring possible relationships between sets of data, can save valuable human resources (whatis.com, 2001). Integrating customer data into a single database allows marketing teams, sales forces, and other departments within a company to share information and work toward common corporate objectives using the same underlying statistics (epiphany.com, 2001a). Examples of this are identifying unproductive/underutilized resources, closer tracking of costs, better forecasting for the pipeline and setting realistic project metrics and measurements to quantify return on investment.
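The idea of a single integrated customer database can be sketched in a few lines of code. The snippet below is a minimal illustration only, not drawn from any vendor's product; the record fields (`customer_id`, `channel`, `cost`) and all figures are invented for the example.

```python
from collections import defaultdict

# Hypothetical interaction records arriving from three contact channels.
interactions = [
    {"customer_id": "C001", "channel": "call_center", "cost": 6.0},
    {"customer_id": "C001", "channel": "email", "cost": 0.5},
    {"customer_id": "C002", "channel": "web", "cost": 0.1},
    {"customer_id": "C002", "channel": "call_center", "cost": 6.0},
    {"customer_id": "C002", "channel": "call_center", "cost": 6.0},
]

def build_customer_view(records):
    """Fold per-channel records into one profile per customer, so that
    marketing, sales and service all share the same history and can see
    the true cost of serving each customer."""
    view = defaultdict(lambda: {"contacts": 0, "total_cost": 0.0, "channels": set()})
    for r in records:
        profile = view[r["customer_id"]]
        profile["contacts"] += 1
        profile["total_cost"] += r["cost"]
        profile["channels"].add(r["channel"])
    return dict(view)

view = build_customer_view(interactions)
# C002 has been served three times, mostly through the expensive channel.
print(view["C002"]["contacts"], round(view["C002"]["total_cost"], 2))
```

A consolidated view of this kind is what lets different departments work from "the same underlying statistics" and spot unproductive or over-served customers.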

MAJOR e-CRM PITFALLS

In an attempt to implement an e-CRM strategy quickly, many companies start spending money before developing a comprehensive e-CRM strategy. Research by Gartner Group (Patton, 2001a) indicates that more than half of all CRM projects are not expected to produce a measurable return on investment (Goldberg, 2001). A study by Forrester Research also indicated that 57% of companies surveyed could not justify investment in customer service programs because of the difficulty of measuring their impact on profitability (Goldberg, 2001). A Bain & Co. study (Patton, 2001a) in June showed that 19% of customer relationship management users had decided to stop funding their CRM projects. Two out of five respondents said that their CRM projects were either "experiencing difficulty" or were "potential flops," according to a Data Warehousing Institute survey released in May. These points echo a warning from experts such as Berkeley Enterprise Partners that, in spite of their popularity, most CRM projects do not result in measurable benefits (Patton, 2001b). To successfully implement a CRM system, the firm's decision-makers must identify and define their corporate strategy in order to see positive returns on their investment. The potential pitfalls of e-CRM implementation include:

• A mismatch between the company's needs and the vendor's CRM software
• A poor understanding of the company's business processes
• Implementations that take more than 90 days, which have a high failure rate
• Instability of the vendor
• The size of the project (some e-CRM implementations have failed because their initial scope was too broad)
• Rejection by end users

KEY RESEARCH ISSUES

A fair number of issues and challenges remain for researchers and practitioners alike. This section discusses some issues that have received little attention in academia and industry, in order to offer new directions for continued inquiry into e-CRM. Many topics could be discussed; based on the review of prior research, however, two stand out as conditions for successful e-CRM implementation: resistance and usability. Rosen (2001) asserts that e-CRM involves people, processes and technology, and that both the people and the process are vital to success. How, then, should we design systems that focus on people and processes? Firms have not focused on these two areas during e-CRM implementations, even though Atkinson (1993) pointed out a decade ago that resistance to change was one key factor behind the failure of 80% of Total Quality Management initiatives in the area of customer-supplier relationships to reap their full benefits, and AMR research analysts have more recently suggested that poorly designed user interfaces and employee resistance are major factors leading to e-CRM failures (McKenzie, 2001). More study is needed into whether and how resistance and usability (or the lack thereof) contribute to e-CRM implementation failures, and methods to overcome these potential barriers to success also need to be developed and tested (Fjermestad and Romano, 2003).

CONCLUSION

According to Romano (2001), e-CRM is concerned with attracting and keeping economically valuable customers and eliminating less profitable ones. Romano and Fjermestad (2002) are convinced that e-CRM will continue to develop as an important area of study in such relevant referent disciplines as Computer Science, Marketing and Psychology. When e-CRM works, it makes the whole organization customer-centric by meshing everyone together and focusing the entire enterprise on the customer. Like all strategic initiatives, e-CRM requires commitment and understanding throughout the company, not just in marketing. Business decisions based on complete and reliable information about customers are very difficult for competitors to replicate and represent a key and sustainable competitive advantage. e-CRM, then, is the combination of people, processes and technology necessary to ensure that the organization stays in touch with its customers and suppliers.

CRM to e-CRM: Promises and Pitfalls


References

Anon (2001), A short guide to e-business; getting close to customers: leapfrogging with eCRM, CRM Market Watch, Issue 8, 28 February 2001.
Lawson-Body, A. and Limayem, M., The Impact of Customer Relationship Management on Customer Loyalty: The Moderating Role of Web Site Characteristics.
Avlonitis, G. J. and Panagopoulos, N. G., Antecedents and consequences of CRM technology acceptance in the sales force, Industrial Marketing Management, vol. 34, no. 4, pp 355-368.
Bradway, B. and Purchia, R. (2000), Top strategic IT initiatives in e-CRM for the new millennium, www.FinancialInsights.co
Berry, L. L. and Parasuraman, A. (1991), Marketing Services: Competing Through Quality, New York: Free Press.
Bickert, J. (1992, May), The Database Revolution, Target Marketing, pp 14-18.
Bitner, M. J. (1995, Fall), Building Service Relationships: It's All About Promises, Journal of the Academy of Marketing Science, pp 246-251.
Bose, R. and Sugumaran, V. (2003), Application of Knowledge Management Technology in Customer Relationship Management, Knowledge and Process Management, vol. 10, no. 1, pp 3-17.
Gurau, C., Ranchhod, A. and Hackney, R. (2004), Customer-centric strategic planning: integrating CRM in online business systems, Information Technology and Management, vol. 4, nos. 2/3, April-June, pp 199-214.
Chen, Q. and Chen, H. M. (2003), Exploring the success factors of eCRM strategies in practice, Database Marketing and Customer Strategy Management, vol. 2, no. 4, pp 333-43.
Chen, I. J. and Popovich, K. (2003), Understanding customer relationship management (CRM): people, process and technology, Business Process Management Journal, vol. 9, no. 5, pp 672-88.
Christopher, M., Payne, A. and Ballantyne, D. (1991), Relationship Marketing, Butterworth-Heinemann, Oxford.
Crosby, L. A., Evans, K. R. and Cowles, D. (1990, April), Relationship Quality in Services Selling: An Interpersonal Influence Perspective, Journal of Marketing, vol. 52, pp 21-34.
Croteau, A.-M. and Li, P. (2003), Critical Success Factors of CRM Technological Initiatives, Canadian Journal of Administrative Sciences, vol. 20, no. 1, pp 21-34.
Davenport, T. H., Harris, J. G. and Kohli, A. K., How Do They Know Their Customers So Well?, MIT.
Day, G. S. and Van den Bulte, C. (2002), Superiority in Customer Relationship Management: Consequences for Competitive Advantage and Performance, Marketing Science Institute, Report No. 02, pp 123.
Devaraj, S. and Kohli, R. (2003), Performance Impacts of Information Technology: Is Actual Usage the Missing Link?, Management Science, vol. 49, no. 3, pp 273-289.
Doyle, S. X. and Roth, G. T. (1992, Winter), Selling and Sales Management in Action: The Use of Insight Coaching to Improve Relationship Selling, Journal of Personal Selling & Sales Management, pp 59-64.
Dwyer, F. R., Schurr, P. H. and Oh, S. (1987, April), Developing Buyer-Seller Relationships, Journal of Marketing, vol. 51, pp 11-27.
Dyche, J. (2002), The CRM Handbook, Addison-Wesley.
Goldberg, H. (2001), 10 Ways to Avoid CRM Pitfalls, B to B, September 17, 2001.
Buttle, F., The CRM Value Chain.
Greenberg, P. (2001), Capturing and Keeping Customers in Internet Real Time, McGraw-Hill.
Glazer, R. (1991), Marketing in an Information Intensive Environment: Strategic Implications of Knowledge as an Asset, Journal of Marketing, vol. 55, no. 4, pp 1-19.



Greenberg, P. (2002), CRM at the Speed of Light, McGraw-Hill.
Gronroos, C. (1990, January), Relationship Approach to Marketing in Service Contexts: The Marketing and Organizational Behavior Interface, Journal of Business Research, vol. 20, pp 3-11.
Hitt, L. M. and Brynjolfsson, E. (1996), Productivity, Business Profitability, and Consumer Surplus: Three Different Measures of Information Technology Value, MIS Quarterly, vol. 20, no. 2, pp 121-142.
Jackson, B. B. (1985), Winning and Keeping Industrial Customers: The Dynamics of Customer Relationships, Lexington, MA: D.C. Heath.
Jayachandran, S., Sharma, S., Kaufman, P. and Raman, P. (2004), The Role of Relational Information Processes and Technology Use in Customer Relationship Management, Moore School of Business.
Karimi, J., Somers, T. M. and Gupta, Y. P. (2001), Impact of information technology management practices on customer service, Journal of Management Information Systems, vol. 17, pp 125-58.
Lee-Kelley, L., David, G. and Robin, M. (2003), How eCRM can enhance customer loyalty, Marketing Intelligence and Planning, vol. 21, no. 4, pp 239-48.
Mahmood, M. A., Hall, L. and Swanberg, D. L. (2001), Factors Affecting Information Technology Usage: A Meta-Analysis of the Empirical Literature, Journal of Organizational Computing, vol. 11, no. 2, pp 107-130.
Menon, N. M., Lee, B. and Eldenburg, L. (2003), Productivity of Information Systems in the Healthcare Industry, Information Systems Research, vol. 11, no. 1, pp 83-92.
Morgan, R. M. and Hunt, S. D. (1994), The Commitment-Trust Theory of Relationship Marketing, Journal of Marketing, vol. 58, no. 3, pp 20-38.
Nemati, H. R., Barko, C. D. and Moosa, A. (2003), E-CRM analytics: the role of data integration, Journal of Electronic Commerce in Organisations, vol. 1, no. 3, July-Oct, pp 73-90.
Nevin, J. R. (1995, Fall), Relationship Marketing and Distribution Channels: Exploring Fundamental Issues, Journal of the Academy of Marketing Science, pp 327-334.
O'Neal, C. R. (1989, February), JIT Procurement and Relationship Marketing, Industrial Marketing Management, vol. 18, pp 55-63.
Parvatiyar, A. and Sheth, J. N. (2000), The Domain and Conceptual Foundations of Relationship Marketing, in J. N. Sheth and A. Parvatiyar (Eds.), Handbook of Relationship Marketing, pp 3-38, Thousand Oaks, CA: Sage Publications.
Patton, S. (May 1, 2001a), The Truth About CRM, CIO Magazine.
Patton, S. (September 15, 2001b), Talking to Richard Dalzell, CIO Magazine.
Paul, T. (1988), Relationship Marketing for Health Care Providers, Journal of Health Care Marketing, vol. 8, pp 20-25.
Peoplesoft.com, http://www.peoplesoft.com/en/us/products/applications/crm/product_content.html
Peppers, D. and Rogers, M. (1993), The One to One Future: Building Relationships One Customer at a Time, New York: Doubleday.
Piccoli, G. and Applegate, L. (2003), Wyndham International: Fostering High-Touch with High-Tech, Harvard Business School Case Study, pp 1-42.
Plakoyiannaki, E. and Tzokas, N. (2002), Customer relationship management: a capabilities portfolio perspective, Journal of Database Marketing, vol. 9, no. 3, pp 228-37.
Reinartz, W., Krafft, M. and Hoyer, W. (2004), The Customer Relationship Management Process: Its Measurement and Impact on Performance, Journal of Marketing Research, vol. 41, no. 3, pp 293-305.
Rilvari, J. (2005), Mobile banking: a powerful new marketing and CRM tool for financial services companies all over Europe, Journal of Financial Services Marketing, vol. 10, no. 1, pp 11-20.



Romano, N. C. and Fjermestad, J. (2002), Electronic Customer Relationship Management: An Assessment of Research, International Journal of Electronic Commerce, vol. 6, no. 2, pp 61-113.
Rong, G., Wang, M. and Liao, S. (2001), Building an ECRM Analytical System with Neural Network, Seventh Annual Conference on Information Systems.
Shani, D. and Chalasani, S. (1992), Exploiting Niches Using Relationship Marketing, Journal of Consumer Marketing, vol. 9, no. 3, pp 33-42.
Scullin, S. (2002), Electronic Customer Relationship Management: Benefits, Considerations, Pitfalls and Trends.
Sims, D. (April 2000), A New ROI for New Economy CRM: And Just Why Doesn't High-Tech Get It?
Sinisalo, J., Salo, J., Leppäniemi, M. and Karjaluoto, H. (2005), Initiation stage of mobile customer relationship management, The E-Business Review, vol. 5, pp 2005-9.
Sinkula, J. M., Baker, W. E. and Noordewier, T. (1997), A Framework for Market-Based Organizational Learning: Linking Values, Knowledge, and Behaviour, Journal of the Academy of Marketing Science, vol. 25, no. 4, pp 305-318.
Slater, S. F. and Narver, J. C. (1995), Market Orientation and the Learning Organization, Journal of Marketing, vol. 59, no. 3, pp 63-74.
Srivastava, R. K., Shervani, T. A. and Fahey, L. (1998), Market-Based Assets and Shareholder Value: A Framework for Analysis, Journal of Marketing, vol. 62, no. 1, pp 2-18.
Stefanou, C. J., Saramaniotis, C. and Stafyla, A. (2003), CRM and customer-centric knowledge: an empirical research, Business Process Management Journal, vol. 9, no. 5, pp 615-34.
Mithas, S., Krishnan, M. S. and Fornell, C., Why Do Customer Relationship Management Applications Affect Customer Satisfaction?
Verisign.com: http://www.verisign.com/products/site/secure/index.html (Datasheet)
Weiss, T. J. (1999), Cyber-relationships and brand building, Integrated Marketing Communications Research Journal, vol. 5, Spring, pp 19-22.
Whatis.com, http://whatis.techtarget.com/whatis_definition_page/0,4152,211901,00.html, 2001; last viewed February 2001.
Wright, L. T., Stone, M. and Abbott, J. (2002), The CRM imperative: practice vs. theory in the telecommunications industry, Journal of Database Marketing, vol. 2, no. 4, pp 339-49.
Zahay, D. and Griffin, A. (2004), Customer Learning Processes, Strategy Selection, and Performance in Business-to-Business Service Firms, Decision Sciences, vol. 35, no. 2, pp 169-203.



25

Demarketing: Applications in Un-selling

Shilpa Sankpal*, Praveen Sahu

Demarketing is the marketer's attempt to curtail or limit the consumption of a product or service. It may be attempted when there is a limit on how much can be used, or when there is a temporary or long-term shortage. In other instances, demarketing may be used to ensure that a product remains fresh and is preserved longer. This paper is a brief attempt to survey the areas where demarketing is used and the real-life instances that have already occurred in this regard. It underlines the fact that marketing is not always about selling but may also be concerned with un-selling.

INTRODUCTION

It is perhaps common business logic that one should sell as much as, or more than, the customer demands; an eager-to-buy customer is a boon. It is also understood that the marketer must always have enough supply to meet customer demand at all times. However, this is not always the case. In fact, sometimes organizations work towards controlling how much the customer can demand, or how much can be sold to any customer at a given point of time. This practice of ensuring that consumers use less of a product or service is called demarketing. It is not an unusual phenomenon; it can be seen in umpteen places. Why would an electricity supplier not like it if all customers regularly ran up high bills or used more units than they perhaps need? After all, the more the consumption, the higher the company's revenue. But the resources from which the electricity is generated are scarce, and hence organizations and governments keep harping on the idea of conserving energy and switching off unused electrical appliances whenever possible.

If the case above rings a bell, the reader will be able to pick up several instances from day-to-day life where consumers are asked to be careful about their usage of a product, either in terms of the quantity that may be used or the effects that product usage may imply for themselves and others. However, demarketing may also be applied selectively, i.e. asking only specific market segments not to use certain services or products. Why would an amusement park not like more and more people using its recreational facilities? Because overcrowding would put a strain on the available infrastructure and start impinging on the fun that people can have. So, a higher entry fee on weekends may do the trick and stagger the number of people entering the park at one time.

The process of demarketing a product or behavior is defined as an attempt to discourage customers in general, or a certain class of customers in particular, on either a temporary or permanent basis (Kotler and Levy, 1971). The online dictionary Answers.com describes demarketing as aiming not to destroy demand but only to lower it, bringing it level with the ability to produce the product. Demarketing is not a single step; there are several strategies the marketer can adopt. Kotler and Levy (1971) observed that the various methods of staggering or reducing demand for product offerings amount to applying the classic marketing instruments in reverse.

* Faculty, Prestige Institute of Management, Gwalior.

GENERAL DEMARKETING

The marketer may work towards general demarketing, in which he tries to reduce the total demand for the product comprehensively. This un-selling applies to all market segments, not just specific ones. In the case of the electricity company, the message of conserving energy is propounded universally, not just to chosen customers. Warning labels on products are also meant to discourage use of the products in general. For example, if cigarettes are available for the cravings of nicotine, each pack is boldly marked with the statutory warning, 'Cigarette smoking is injurious to health'. This warning is not for first-time users, or for frequent ones, or especially for those who are old or belong to some specific age category, or for one gender over another; it applies to anyone and everyone who reaches for a roll of nicotine. This kind of demarketing is evident not only in India; several other countries have similar requirements. Comm (1997) posited that demarketing tends to apply to such unsafe (hazardous) products as tobacco or alcohol. Reasons for general demarketing can be sub-categorized as temporary shortages, chronic overpopularity and product elimination. Temporary shortages can occur when the marketer underestimates demand or believes his production capacity to be adequate when it actually is not. Until he can arrange other ways of supplying the market, a temporary shortage will be experienced. In such a situation, the marketer has to work out reasonable product allocation or stagger the distribution schedule so that customers are serviced with the minimum waiting time. The marketer may even limit distribution to specific outlets. In doing so, the marketer aims at regulating demand without turning consumers away. Once the lean period is over, the demarketing strategy may stand suspended.
In the case of such rationing, only reasonable or sub-extravagant needs are sated first. Chronic overpopularity is a problem in which customer demand for some product or service is persistently high. This is particularly troublesome when the overpopularity can affect the quality of the product in the long run. According to Zeithaml and Bitner (2003), in order to manage fluctuating demand in a service business, it is essential to have a distinct comprehension of demand patterns, why they change from time to time, and the niches of the market in which demand will peak at certain times. Tourist destinations are often afflicted by this situation when, on certain days or in certain phases, the tourist inflow exceeds the carrying capacity of the place in question. Higher entry charges on peak days, lower entry charges on off-peak days and suitable promotional schemes are used to limit entry into the destination. Overpopularity may erode the charisma of tourist places and fatally affect their sustainability. Quan (2000) spoke of the broader applicability of this marketing strategy in the context of crowds and overdevelopment hurting such heavily visited national parks as the Grand Canyon in the USA, where the national parks service encourages visitors to stay away, albeit momentarily. Groff (1998), in a study on recreation, attempted to refocus demarketing as a concept and to expand, adopt and adapt it into an umbrella theory relevant to parks and recreation administration. He focused on different states of demand and their effects on natural resources and natural resource-based recreation experiences. Demarketing is used when the regenerative capacity of the recreational facility starts taking a beating.
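The peak/off-peak pricing lever described above can be expressed as a tiny rule. The sketch below is purely illustrative; the fee levels, surcharge multiplier and capacity figures are invented, not taken from any real destination.

```python
BASE_FEE = 100           # normal entry fee, in rupees (invented figure)
PEAK_MULTIPLIER = 2.5    # surcharge applied when crowding is expected
OFF_PEAK_DISCOUNT = 0.6  # incentive to shift visits to quiet days

def entry_fee(is_peak_day, expected_visitors, carrying_capacity):
    """Raise the price when projected demand exceeds what the place can
    absorb, and discount it on quiet days to stagger the inflow."""
    if is_peak_day and expected_visitors > carrying_capacity:
        return BASE_FEE * PEAK_MULTIPLIER
    if not is_peak_day:
        return BASE_FEE * OFF_PEAK_DISCOUNT
    return BASE_FEE

print(entry_fee(True, 12000, 8000))   # overcrowded weekend: 250.0
print(entry_fee(False, 3000, 8000))   # quiet weekday: 60.0
```

The same three-tier shape (surcharge, base, discount) underlies the entry-charge and promotional schemes the paragraph describes.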

General demarketing may also be done when a company is eliminating a certain product from its offerings but a clientele still exists for the product being phased out. In such cases, the company may curtail all promotion of the product or declare in general its intention to stop offering it. This may alienate the customers who remain loyal to the product, or it may encourage them to switch over. In such a case, the idea of demarketing is to steer customers towards a new offering that the marketer has provided as a substitute, or to ensure that demand reduces over time to negligible levels.



SELECTIVE DEMARKETING

In selective demarketing, companies try to reduce demand from a chosen group of customers. Here, the concept of profiling customers and identifying non-profitable ones becomes paramount. Customer analysis is the cornerstone of identifying which customers the company wants to retain and which it wants to refuse. Clements (1989) discussed ways of keeping brash, rowdy, young tourists away from Cyprus, and talked of selective demarketing as the method of choice. His concern was not with the overuse of the resource but with the way the resource was being exploited; in other words, he was concerned with reducing the influx of undesirable visitor segments. To that end, tourist market segments were derived from demographic variables such as age, income and the resultant propensity to spend on the tourism product. Clements recognised demarketing as a deliberate positive action toward an undesirable market segment, while simultaneously employing classical targeted marketing strategies for the desired market segments. According to Hoffman and Bateson (1997), creative pricing is frequently used by service industries to help level demand fluctuations. Madill (1999) has suggested that demarketing can be used by government departments that want to advise and/or persuade targeted groups not to use government programmes that have been available to them in the past. This is very interesting: today, in developed countries where consuming is the norm, governments have been trying to impart lessons on de-consuming or functional consuming. Mark and Elliott (1997), in research on reducing dysfunctional demand for the United Kingdom's National Health Service, considered persuading customers not to use a service when it is not actually needed.
Their paper spoke of the appearance of demarketing in the NHS in all four modes, general, selective, ostensible and unintentional, with an emphasis on supply-side applications. Mark and Elliott (1997) further elaborated that, as a demand-side strategy, demarketing would allow purchasers of health care on behalf of communities to discern the values, attitudes and beliefs that predicate current behaviour through the use of the Theory of Planned Behaviour, and subsequently to develop appropriate demarketing alternatives to change those behaviours where they are dysfunctional for both consumer and provider. Post and Baer (1974) outlined the prospects of demarketing infant formula in third-world countries. In the developing world, where the consumption of goods and services, and the cost incurred in acquiring them, differ from those of the wealthy world, it is crucial to find the ethical fault lines of the marketplace. Hence the authors urged marketers to control their impulse to push goods on those who could do well for their infants even without the food supplements. Post and Baer (1974) emphasized that demarketing can involve both the imposition of restraints on the manner and method of selling goods and services, and affirmative actions to undo the effects of past marketing efforts. Demarketing may appear to be an antithesis for the modern marketer, but it ensures that consumers do not end up buying products that they do not really need.

As mentioned previously, customer analysis is perhaps the key stage in identifying profitable buyers. The net result of the analysis is the generation of a customer profile. Customer profiling is one of the best prospecting tools available. It allows marketers to fully exploit data about customers' buying patterns and behavior and to gain a greater understanding of consumer motivation. Profiling can help in zeroing in on aspirations and thus increase the response rates of marketing campaigns. Reducing fraud, anticipating demand and enhancing customer acquisition are all bundled benefits of profiling. In fact, customer profiling is also an essential brick of Customer Relationship Management, because it can be used to develop lifelong relationships with customers by anticipating and fulfilling their needs. Retailers use profiling to assess the effectiveness of coupons and special events. 'Know thy customer!' thus becomes the first stage of applying selective demarketing. Gordon (2006) cites the case of a retailer that classified its customers as 'angel' and 'devil' customers and then focused on reducing its marketing investments in the devil customers. Essentially, selective demarketing helps companies weed out unprofitable customers and key into those who will develop viable relations with the marketer. But de-weeding the bad customers is a tricky business, and it takes concerted effort to apply it. A change in price structure or advertising strategy is often the route of choice for demarketing.

Recently, when Indore was about to get its first shopping mall, the city experienced a surge in traffic in the area where the mall was being constructed. Awe-struck people would often stop their vehicles to get a better look at the giant being raised. This wonder translated into very heavy footfalls as soon as the mall opened. But most of the people walking in were not so much interested in shopping or enjoying the entertainment zones as in simply absorbing the ambience, which was novel and strangely thrilling for them.
To take care of the phenomenon of mall rats and hangers-on, an entry scheme was launched. A nominal fee was charged for entering the mall. This amount was returned to the visitor if he could prove to have spent some pre-fixed amount (e.g. a tiny Rs 100) while he was inside. This immediately reduced the number of people entering casually. It also pushed the shoppers inside to spend at least as much as was enough to get back the entry fee they had initially paid. In this way, casual visitors were weeded out by some smart moves. Discotheques, bowling alleys and pubs also use selective demarketing when they charge higher entry fees on weekend visits and allow couples-only entry. They might even run 'happy hours' in the afternoon or early evening so that they have some footfalls in the off-hours as well.
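The 'angel'/'devil' sorting that Gordon (2006) describes can be sketched as a simple rule over customer records. Everything below, the field names, thresholds and figures, is hypothetical, intended only to show the shape of such a profiling step, not how any real retailer implements it.

```python
# Invented customer records: margin earned versus cost of serving each one.
customers = [
    {"id": "C001", "annual_margin": 900.0, "service_cost": 150.0, "returns": 1},
    {"id": "C002", "annual_margin": 120.0, "service_cost": 300.0, "returns": 9},
    {"id": "C003", "annual_margin": 400.0, "service_cost": 120.0, "returns": 2},
]

def classify(customer, max_returns=5):
    """Label a customer 'devil' if serving him costs more than he yields,
    or if he abuses the returns policy; marketing spend on 'devil'
    customers is then scaled back rather than cut off outright."""
    net = customer["annual_margin"] - customer["service_cost"]
    if net < 0 or customer["returns"] > max_returns:
        return "devil"
    return "angel"

labels = {c["id"]: classify(c) for c in customers}
print(labels)   # {'C001': 'angel', 'C002': 'devil', 'C003': 'angel'}
```

The point of the rule is not the threshold values but the principle: selective demarketing starts from a customer-level profit calculation, not from gut feeling.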

OSTENSIBLE DEMARKETING
The third and last type of demarketing is called ostensible demarketing. Here, the marketer appears to refuse a large number of customers in the hope that the product will seem more valuable than easy availability would suggest. Most marketers here bank on the principle that people desire what they feel is not easy to obtain, and may even derive a certain pleasure from being ignored by the seller. Essentially, this concept relates to
increasing the exclusivity, and hence the desirability quotient, of the product being marketed. If an art house says that it holds all the rare paintings of Raja Ravi Verma and will open them for exhibition or sale only if interested people dole out a sum of twice the initial bid price, it creates an atmosphere of intense rivalry among prospects. The curiosity to find out what exactly is so special, and the desire to be an owner, rise proportionately. However, it also means that the next set of interested people will be smaller in number than those who initially approached the art house, since it would be tough to arrange such funds. So what the art house has managed is simply to push the benchmark higher for those who might have wanted to acquire the paintings, and thereby multiply the prestige of owning one or more such rare works of art. Entry into elite clubs, high-end credit cards and similar exclusive offerings, where it may take several years to procure the good, are all part of ostensible demarketing. The refusal to immediately serve an eager customer creates a yearning that adds hype to product ownership.

Thus, demarketing can also become a way of product differentiation. Gerstner, Hess and Chu (1993) investigated demarketing as a differentiation strategy and posited that demarketing can be a profitable alternative when differentiation through product improvements is not cost effective. The impact of differentiating demarketing on profit, market share, consumers and total welfare was also investigated by the researchers. This kind of demarketing would probably work well with people for whom the chase is as good as the catch itself.

CONCLUSION
Kotler (1974) highlighted several items that need to be considered and issues that need to be resolved when companies seek to reverse the pull and push of customers for commercially available product offerings. What provokes marketers to think of demarketing includes the level and nature of advertising that can be justified, the salesman's role, clients who need to be dropped, modifying product prices to balance demand and supply, allocating products to distributors when supply is limited, and the bases for product substitution. Overall, there are several strategies that can be adopted, and they have been broadly classified as general, selective and ostensible demarketing.

References
Clements, M.A. (1989), Selecting Tourist Traffic by Demarketing, Tourism Management, 10(2), 89-94.
Comm, Clare L. (1997), Demarketing Products Which May Pose Health Risks: An Example of the Tobacco Industry, Health Marketing Quarterly, 15(1), 95-102.
Gerstner, Eitan, Hess, James and Wujin Chu (1993), Demarketing as a Differentiation Strategy, Marketing Letters, 4(1), 49-57.
Groff, C. (1998), Demarketing in Park and Recreation Management, Managing Leisure, 3(3), 128-135.
Hoffman, K. & Bateson, K. (1997), Essentials of Services Marketing, Fort Worth: The Dryden Press.
Kotler, Philip (July 1974), Marketing During Periods of Shortage, Journal of Marketing, 38, 20-29.
Kotler, Philip and Levy, Sidney J. (Nov/Dec 1971), Demarketing, Yes, Demarketing, Harvard Business Review, 49(6), 74-80.
Madill, J. (1999), Marketing in Government, Optimum - The Journal of Public Sector Management, 28(4), 9-18.
Mark, Annabelle and Elliott, Richard (1997), Demarketing Dysfunctional Demand in the UK National Health Service, The International Journal of Health Planning and Management, 12(4), 297-314.
Post, J.E. and Baer, J. (1978), Demarketing Infant Formula: Consumer Goods in the Developing World, Journal of Contemporary Business, 7(4), 17-35.
Quan, Holly (2000), Please Don't Visit: Crowds and Overdevelopment Are Hurting Our National Parks. But What If Parks Canada Were to Try a Little Demarketing to Encourage Potential Visitors to Stay Away? Marketing Magazine, 105(33), 14.
Zeithaml, V. & Bitner, M.-J. (2003), Services Marketing - Integrating Customer Focus Across the Firm, New York: McGraw-Hill.
Gordon, Ian (2006), Relationship Demarketing: Managing Wasteful or Worthless Customer Relationships, Available at http://www.iveybusinessjournal.com/view_article.asp?intArticle_ID=625 (Page saved in January 2008)


26

Gender Identity: A Marketer's Perspective
Shilpa Sankpal, Shaily Anand, Nishchaya Vaswani

Gender is often confused with the sex of a person, which is not correct: gender and sex are altogether different concepts that are unknowingly used interchangeably. This paper provides insight into the fact that advertisements are segregated on the basis of gender; the ads shown nowadays are gender sensitive. But it depends purely on the consumer what qualities he or she looks for in a product, which product is used by which gender category, and whether the company can position the product for the other gender category as well. Apart from showing how advertisements signal which gender they are targeting, the paper also notes that there are products which people associate with their personality even if those products are targeted at the opposite gender, and that this affects a large part of the society using the specific product.

INTRODUCTION
Gender identity refers to how one thinks of one's own gender: whether one thinks of oneself as a man (masculine) or as a woman (feminine). Society prescribes arbitrary rules, or gender roles (how one is and is not supposed to dress, act, think, feel, relate to others, think of oneself, etc.), based on one's sex (whether one has a vagina or a penis). These gender roles are called feminine and masculine. Anyone who does not abide by these arbitrary rules may be targeted for mistreatment based on his or her (perceived) gender identity, ranging from exclusion from people's circle of friends, through the cold shoulder, snide comments and verbal harassment, to assault, rape and murder.

IMPACT OF GENDER ON AD PROCESSING: A SOCIAL IDENTITY PERSPECTIVE
Advertising is typically thought of as one of many external influences on buyer behavior. Some may argue that it does not always have as much impact on behavior as other external influences such as salespeople, culture, family, reference groups, and social and situational influences. Additionally, as consumers become bombarded with more and more advertisements, many claim that ads have little or no influence on their judgments or actions.


In spite of these criticisms, advertising is considered an efficient way of reaching many consumers. Therefore, marketers continue to seek ways to increase the influence of advertising on their audience. The question becomes: how can the activation of a gender group identity result in favorable ad and brand judgments? (Maldonado, Tansuhaj and Muehling, 2003).

Figure 1: Proposed Model of Social Identity Activation of Ad Effectiveness

The model in Figure 1 depicts the process by which ads may activate a gender group identity and thereby influence ad and brand judgments. The discussion of the first box in the model addresses the question: Can an ad activate a gender group identity? The activation of a gender group identity is thought to result in that identity gaining salience over other existing identities. The second box focuses on the issue of salience and the internal consequences of a salient gender group identity. The third box indicates that salience may be influenced by how strongly one identifies with one’s gender group. Finally, the discussion of the fourth box addresses the question of whether or not the internal consequences of a salient gender identity can influence ad and brand judgments.

Gender group identity activating ads
We begin our discussion of the dynamics involved in using gender group identity to influence ad and brand judgments by providing a theoretical underpinning for the notion that ads can prime identification with a gender group and cause that identity to become salient. This requires an understanding of the concept of gender identity from a social identity perspective. Identification with a social group such as one's gender is one of the key tenets of social identity theory. A social identity approach to understanding the impact of gender on ad processing may best be understood within the gender framework provided by Risman (1998), who describes four theoretical traditions to help understand sex and gender.

Gender group identity salience and strength
Overall, the process of identification in which one self-identifies with a category, privately accepts the group norms, sees oneself as an interchangeable representative of the group and defines oneself in terms of the group is called depersonalization; it is a crucial cognitive component of social identity theory (Abrams 1994; Simon et al. 1997; Turner 1987). Activation of a social identity is thought to be sufficient to result in depersonalization (Burke and Stets 1998), and this may be true of activating a gender group identity.

p1: Ads that contain depictions of the gender group prime gender salience.

Gender Identity: A Marketer's Perspective

271

Salience
In social identity theory, context, contrast, and identity strength are all tied together in salience. Salience is the activation of a relevant self-category. "By a salient group membership we refer to one which is functioning psychologically to increase the influence of one's membership in that group on perception and behavior… The term salience is not being used to refer to some 'attention-grabbing' property of a stimulus" (Turner 1987, p. 118). Accessibility is the readiness of a perceiver to use a particular self-category, and fit is the degree to which the stimuli in the given context actually match the criteria which define the category (Turner 1987; Turner et al. 1994). Strength of identification is a factor of both past experiences with the category and the current situation. It is tied to the current emotional or value significance of a given categorization (Deaux 1996; Turner 1987). Individuals who are more highly identified with their group are believed to be more likely to experience that identity as salient, independent of situational context (Ethier and Deaux 1994). Previous work has shown that advertisements can make a social identity such as gender salient. Grier and Deshpandé (2001), for example, found that their ad primed a gender identity. Although self-identifying and defining oneself in terms of a group are thought to be indicative of high levels of group identification (Terry and Hogg 1996), several measures have been developed to gauge an individual's strength of identification with a group (e.g., Biernat, Green and Vescio 1996; Ethier and Deaux 1994; Terry and Hogg 1996).

p2: A gender identity is more likely to be salient when the individual's identification with the gender group depicted in the ad is strong than when it is weak.

A gender identity should be highly salient in women. A gender identity is thought to be highly accessible (Deaux 1992) because it is a central part of one's self-definition. It is an identity with which the individual has a great deal of past experience, it has a high emotional and value significance and, along with age and race, it represents a higher hierarchical level. According to Risman (1998), gender structure can be conceived as both cultural rules and cognitive images, as tacit knowledge or expectations attached to a sex category.

Ad and brand judgments
The effects of a salient social identity can be seen in conformity to group norms and in an in-group bias. Social identity theory depicts a fairly direct relationship between self-categorization as a group member and normative in-group attitudes and behavior based upon the social group's prototype and norms (Abrams 1994; Biernat, Green and Vescio 1996). In-group bias is represented by a favorable evaluation of and attitude toward in-group members. Studies have shown that the mere perception of belonging to a group is sufficient to trigger inter-group discrimination favoring the in-group (Tajfel and Turner 1986).

p3: Ads that activate a social identity (e.g., gender) are more likely to result in favorable judgments than ads that do not activate a social identity.

In the proposed model, the outcome is favorable ad and brand judgments, represented by the last box in the model. Favorable ad and brand judgments could include such outcomes as brand inclusion in a consideration set, likelihood of purchase, attitude toward the ad and attitude toward the brand. In addition to initiating cognitive processes, semantic-stimulating ad content can be used to prime for affect. For example, if semantic information is highly
accessible, judgments could be based on affect referral from the invoked category (Malaviya, Kisielius and Sternthal 1996). Since one of the outcomes of such a social identity is an in-group bias represented by favorable evaluation and attitude toward the in-group (Tajfel and Turner 1986), it is likely that the activated attitudes would be positive. A recent study found that when a social identity was salient, participants in the study were more apt to describe the group using the same positive traits as the salient identity (Haslam et al. 1999).

GENDER IDENTITY: A MARKETER'S PERSPECTIVE
A once-famous pop star revealed in an interview that she preferred masculine fragrances to feminine ones. What tipped her choice was not substantiated. When Shahrukh Khan was used as a model in Lux's ad, it created a flutter; the advertisement did not live a long media life, and a pretty film star, of course a female, quickly replaced him. The first case reveals that our product choices are not always dictated by who we think we are, masculine or feminine. The second case indicates that often not just products, but even brands, get gendered, and any mismatch as perceived by the customer can upset brand preferences. Gender identity, sometimes referred to as an individual's psychological sex, has been defined by Spence (1984) as the fundamental, existential sense of one's maleness or femaleness. Terminology is another issue: several similar-sounding concepts have been used, such as sex-role identity (Kahle and Homer, 1985) and sex-role orientation (Gentry and Doering, 1979). But there is still a dearth of concrete studies, not just in global marketing but even on the Indian marketing scene.

BRAND GENDERING: A NEW STRATEGY FOR BUILDING BRANDS
Today's society has undergone a tremendous change: there can now be a "Fair and Lovely Menz Active" beauty cream for males, which indicates that sex roles have broadened over a period of time. The brand manager needs to question the types of symbols and models he should associate with the brand. As masculine and feminine roles overlap, it becomes even more difficult to find exclusively masculine or feminine images. There are also changes other than sex role changes that make gendering a complex strategy. A large part of modern India still subscribes to one sex role stereotype and one model for each sex. People deviating from these stereotypes were treated as outlaws; people who did not fit either category were likely to be criticized by others of both sexes and did not enjoy social approval. The brand manager must therefore look for safe ground when presenting symbols, images and models to gender brands. Trouble creeps up only when marketing messages violate the social code of stereotyped conduct. Whenever a sex role is presented contrary to the basic ideas of society, it not only fails to improve the image of the product in the eyes of the men or women in the market but also develops negative effects. If the main sex roles have a chance of being violated by gendering techniques, the brand manager should avoid this strategy. In a society like India, traditional sex roles discourage people of either sex to cross over and get involved in activities
in the domain of the opposite sex, and take interest in the peripheral aspects of daily life. In many consumer goods marketing programs we see sex crossover behavior, but it is effective only when done subtly. The target marketing strategy should help in identifying the key category for the product or the brand, so product or brand gendering is often used in conjunction with such a target marketing strategy. The brand gendering decision is a strategic decision. Such a decision requires a substantial commitment of resources for product creation and modification to make the product appeal strongly to one of the genders; a short-term brand gendering approach will not help the brand in any way. In the case of TVS Scooty, it was not possible to attract feminine buyers in just one year or by spending some amount of money on advertising. The brand manager should evaluate masculine strength and feminine softness while planning a brand gendering strategy. A brand which suggests masculine weakness or feminine harshness is likely to flop, as consumers will not accept it. These are the two most fundamental concerns of each sex, which is why feminine brands that promise to enhance and enrich their users' attachments to others appeal strongly to most women. The brand manager should also take into account the age of the buyer; the older the buyer, the more likely he is to adhere to traditional sex roles. Since people adhere to different sex roles, many consumers find gendered products attractive, as such products subconsciously satisfy their stereotyped mindset.

Clinic All Clear, from the house of Hindustan Unilever Limited, introduced the first ever anti-dandruff shampoo range formulated exclusively for men. The two variants in the new range are Clinic All Clear ACTIVSPORT and Clinic All Clear HAIRFALL DECREASE. Commenting on the launch, N. Rajaram, GM & Category Head, Hair Care, HUL, says, "HUL's intensive research shows that a man's scalp differs from a woman's. The nourishment required by each is, thus, markedly different. To enable our male customers to cultivate a head full of healthy, good looking hair, we have created CLINIC ALL CLEAR MEN, an anti-dandruff shampoo for men only."

CONCLUSION
Sex refers to biological differences: chromosomes, hormonal profiles, internal and external sex organs. Gender describes the characteristics that a society or culture delineates as masculine or feminine. The advertising manager should keep in mind the psychological connotations of gender, i.e., masculine for strength, feminine for gentleness. The marketing manager should also take care with crossover behavior: the traditional sex role prohibition on engaging in behavior typical of the opposite sex, termed crossover behavior, is no longer a part of contemporary sex role standards. Even so, if an advertisement uses a traditional housewife managing her household as the role model for the brand, it is likely to receive a backlash from liberated, modern women and men. Similarly, while women are likely to accept masculine brands, men are most likely to reject any brand that conveys any idea of femininity. If a brand manager wants a brand for both sexes, he should avoid the technique of brand gendering; in most cases such a brand should have a masculine gender, which may also find acceptability among women. Presenting the brand or product as masculine or feminine should also have some level of credibility in the eyes of the target audience: if the claims made by the brand are not credible, consumers are less likely to buy it.


References
Abrams, Dominic (1994), Social Self-Regulation, Personality and Social Psychology Bulletin, 20(5), 473-483.
Burke, Peter J. and Jan E. Stets (1998), Identity Theory and Social Identity Theory: Two Theories or One?, Presentation, American Sociological Association, San Francisco.
Biernat, Monica, Michelle L. Green and Theresa K. Vescio (1996), Selective Self-Stereotyping, Journal of Personality and Social Psychology, 71(6), 1194-1209.
Deaux, K. (1985), Sex Differences, in Rosenzweig, M.R. and Porter, L.W. (Eds), Annual Review of Psychology, Vol. 26, Palo Alto, CA: Annual Reviews, 48-82.
Deaux, Kay (1992), Personalizing Identity and Socializing Self, in Glynis M. Breakwell (Ed.), Social Psychology of Identity and the Self-Concept, San Diego, CA: Surrey University Press.
Deaux, Kay (1996), Social Identification, in E. Tory Higgins and Arie W. Kruglanski (Eds), Social Psychology: Handbook of Basic Principles, New York: Guilford Press, 777-798.
Ethier, Kathleen A. and Kay Deaux (1994), Negotiating Social Identity When Contexts Change: Maintaining Identification and Responding to Threat, Journal of Personality and Social Psychology, 67(2), 243-251.
Fazio, Russell H. (1993), Variability in the Likelihood of Automatic Attitude Activation: Data Reanalysis and Commentary on Bargh, Chaiken, Govender, and Pratto (1992), Journal of Personality and Social Psychology, 64(5), 753-758.
Fazio, Russell H. and Bridget C. Dunton (1997), Categorization by Race: The Impact of Automatic and Controlled Components of Racial Prejudice, Journal of Experimental Social Psychology, 33, 451-470.
Fazio, Russell H., Martha C. Powell and Carol J. Williams (1989), The Role of Attitude Accessibility in the Attitude-to-Behavior Process, Journal of Consumer Research, 16 (December), 280-288.
Gottman, John M. (1979), Marital Interactions: Experimental Investigations, New York: Academic Press.
Grier, Sonya A. and Rohit Deshpandé (2001), Social Dimensions of Consumer Distinctiveness: The Influence of Social Status on Group Identity and Advertising Persuasion, Journal of Marketing Research, 38 (May), 216-224.
Haslam, S. Alexander, Penelope J. Oakes, Katherine J. Reynolds and John C. Turner (1999), Social Identity Salience and the Emergence of Stereotype Consensus, Personality and Social Psychology Bulletin, 25(7), 809-818.
Janiszewski, Chris (1990), The Influence of Print Advertisement Organization on Affect Toward a Brand Name, Journal of Consumer Research, 17 (June), 53-65.
Kahle, L.R. and Homer, P. (1985), Androgyny and Midday Mastication: Do Real Men Eat Quiche?, in Hirschman, E.C. and Holbrook, M.B. (Eds), Advances in Consumer Research, Vol. 12, Ann Arbor, MI: Association for Consumer Research, 242-246.
Maldonado, Rachel, Tansuhaj, Patriya and Muehling, Darrel D. (2003), The Impact of Gender on Ad Processing: A Social Identity Perspective, Academy of Marketing Science, Vol. 3.
Malaviya, Prashant, Jolita Kisielius and Brian Sternthal (1996), The Effect of Type of Elaboration on Advertisement Processing and Judgment, Journal of Marketing Research, 33 (November), 410-421.
Risman, Barbara J. (1998), Gender Vertigo: American Families in Transition, New Haven: Yale University Press.
Simon, Bernd, Claudia Hastedt and Birgit Aufderheide (1997), When Self-Categorization Makes Sense: The Role of Meaningful Social Categorization in Minority and Majority Members' Self-Perception, Journal of Personality and Social Psychology, 73(2), 310-320.
Tajfel, Henri and John C. Turner (1986), The Social Identity Theory of Intergroup Behavior, in Stephen Worchel and William G. Austin (Eds), Psychology of Intergroup Relations, Chicago, IL: Nelson-Hall Publishers, 7-
24.
Terry, Deborah J. and Michael A. Hogg (1996), Group Norms and the Attitude-Behavior Relationship: A Role for Group Identification, Personality and Social Psychology Bulletin, 22(8), 776-793.
Turner, John C. (1982), Towards a Cognitive Redefinition of the Social Group, in H. Tajfel (Ed.), Social Identity and Intergroup Relations, Cambridge, England: Cambridge University Press, 15-40.
Turner, John C. (1987), Rediscovering the Social Group: A Self-Categorization Theory, Oxford: Basil Blackwell.
Turner, John C., Penelope J. Oakes, S. Alexander Haslam and Craig McGarty (1994), Self and Collective: Cognition and Social Context, Personality and Social Psychology Bulletin, 20(5), 454-463.
Webster, Cynthia and Samantha Rice (1996), Equity Theory and the Power Structure in Marital Relationships, in Kim Corfman and John Lynch Jr. (Eds), Advances in Consumer Research, 23, Provo, Utah: Association for Consumer Research, 491-497.
West, Candice and Don H. Zimmerman (1987), Doing Gender, Gender & Society, 1(2), 125-151.
Yi, Youjae (1990), The Effects of Contextual Priming in Print Advertisements, Journal of Consumer Research, 17 (September), 215-222.


27

Factors Affecting Customer Satisfaction Towards Mutual Funds in Gwalior
Shruti Suri, Manu Chaturvedi

Customer satisfaction is an ambiguous and abstract concept, and the actual manifestation of the state of satisfaction varies from person to person and from product or service to product or service. The state of satisfaction depends on a number of both psychological and physical variables which correlate with satisfaction behaviors such as return and recommend rates. The level of satisfaction can also vary depending on other options the customer may have and on other products against which the customer can compare the organization's products. Customer satisfaction is essential for any organization, for no organization can succeed if its customers are not satisfied; therefore every organization makes efforts to keep its customers satisfied. The present study aims at identifying the factors affecting customer satisfaction towards mutual funds. In this descriptive study, a survey was conducted on 100 customers of mutual funds from various parts of Gwalior city, and factor analysis was applied to find out the factors affecting customer satisfaction towards mutual funds.

INTRODUCTION
Customer satisfaction is the variable studied under this research topic.

Customer satisfaction
Customer satisfaction is a profound measure of quality. Customer satisfaction, a business term, is a measure of how products and services supplied by a company meet or surpass customer expectations. There are six parts to any customer satisfaction program:
1. Who should be interviewed?
2. What should be measured?
3. How should the interview be carried out?
4. How should satisfaction be measured?
5. What do the measurements mean?
6. How can customer satisfaction surveys be used to greatest effect?

A dissatisfied customer will tell seven to 20 people about their negative experience. A satisfied customer will only tell three to five people about their positive experience (Kan 1995).

REVIEW OF LITERATURE
Engel, Blackwell et al. (1995): Business management and marketing are concerned with ways of satisfying and retaining customers for the purpose of generating profits, improving companies' competitiveness and securing market share. Some of the major themes in the business management domain include studies of customer relationship marketing, which analyze how customer satisfaction relates to competitiveness and profits, methods for measuring customer satisfaction (Thomson 1995), and approaches that can help transfer customer satisfaction data into strategies for improving customer relations and retention (Reidenbach and McClung 1998; Johnson and Gustafsson 2000; Schellhase, Hardock et al. 2000). According to the model, the customer decision-making process comprises need-satisfying behavior and a wide range of motivating and influencing factors. The figure below shows the Customer Satisfaction Process, adapted from Engel, Blackwell et al. (1995, pp. 143-154).


Kano et al. (1996) Model of Customer Satisfaction
The Kano et al. (1996) model of customer satisfaction classifies product attributes based on how they are perceived by customers and on their effect on customer satisfaction (Kano, Seraku et al. 1996). According to the model, there are three types of product attributes that fulfill customer satisfaction to different degrees: 1) basic or expected attributes, 2) performance or spoken attributes, and 3) surprise and delight attributes. A competitive product meets basic expected attributes, maximizes performance attributes, and includes as many "excitement" attributes as is financially feasible. In the model, the customer strives to move away from having unfulfilled requirements and being dissatisfied.
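The three-way classification can be sketched as a small lookup in Python. The attribute catalogue for a hypothetical mobile phone is invented for illustration and is not taken from Kano et al.

```python
# Hypothetical Kano-style catalogue for a mobile phone (illustrative only):
# basic attributes are expected, performance attributes are spoken,
# delight attributes surprise the customer.
catalogue = {
    "basic":       ["makes calls", "battery lasts a day"],
    "performance": ["camera resolution", "storage size"],
    "delight":     ["free cloud backup"],
}

def kano_category(attribute: str, catalogue: dict) -> str:
    """Return which Kano category a product attribute falls into."""
    for category, attributes in catalogue.items():
        if attribute in attributes:
            return category
    raise KeyError(f"unclassified attribute: {attribute}")

print(kano_category("makes calls", catalogue))        # → basic
print(kano_category("free cloud backup", catalogue))  # → delight
```

A competitive product, in the model's terms, would cover everything in `basic`, score well on `performance`, and add as many `delight` entries as the budget allows.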

Source: Kano, Seraku et al. (1996)

Many studies suggest that there is a fundamental difference between products and services, namely in the way they are produced and consumed (Grönroos 1990; Grönroos 1998; Edvardsson 1997; Edvardsson 2000; Bateson and Hoffman 1999). The time period between service production and consumption is considerably shorter than for products. Most services are produced "on the spot" in an interactive process in which customers and company employees meet. Satisfaction with service quality depends on a large number of dimensions, both tangible and intangible attributes of the product-service offer. The impact of intangible dimensions on consumer satisfaction is of particular interest at this point.

Many psychological studies even show that non-verbal behavior by the service provider greatly affects service evaluation (Gabbott Mark 2000). For example, the quality of interaction between customer and service provider influences customers' perception of service quality. In services, a single employee may affect service efficiency and consequent customer
satisfaction with the service (Barnard 2002). Even customers' own involvement and participation in the service delivery affect customer satisfaction (Kelly, Skinner et al. 1982). Owing to the differences in the production and provision of products and services, customers evaluate the quality and attributes of material goods and of services in different ways (Mathe and Shapiro 1993). This realization has initiated a discussion on the need for special tools for evaluating more diverse and less tangible services (de Brentani 1989). Responding to the growing demand for specific and reliable ways to measure customer satisfaction in service industries, a number of studies have suggested methodological frameworks for measuring customer satisfaction (Markovic and Horvat 1999). Other studies looked at what measures are used by service companies for measuring customer satisfaction. Studying how the financial sector measures customer satisfaction, Edgett and Snow (1997) showed that even though mostly traditional (financial) measures are being used by the sector, these do not provide a sufficient basis for innovation in services, and multidimensional approaches need to be devised. Customer satisfaction surveys are a questionnaire-based information collection tool to determine the level of satisfaction with various product or service features. Developing a good questionnaire is the key to collecting good quality information: questions must be short and concise, well formulated, easy to interpret and answer, and must facilitate unbiased responses. Survey techniques and questionnaire designs are well known to the research community, and multiple guides from different disciplines exist (see, for example, Hayes 1998; Kessler 1996; Chakrapani 1998; Gerson 1994; Hill, Brierley et al. 1999; Reidenbach and McClung 1998).
Along with the development of consumer research, the number of measurement scales used in customer satisfaction surveys is growing (Devlin, Dong et al. 1993), which complicates data analysis. Some studies, for example, list over 40 different scales (Haddrell 1994). Two broad types of scales, however, can be distinguished: single-item and multi-item scales. Single-item scales are simple; many studies have used responses ranging from "very dissatisfied" to "very satisfied". The problem is that such scales are hardly able to capture the different nuances related to products and services, which reduces their reliability, and the only possibility for assessment is a test-retest format (Yi 1989). Multi-item measures, in contrast, offer a better capture of customer satisfaction: survey respondents are asked not only to provide an overall evaluation of their satisfaction with the product or service, but also to evaluate its key components or dimensions. The reliability of the result, therefore, is higher than with single-item scales. Multi-item scales can be presented in a number of different ways: Likert, verbal, graphic, semantic differential, and inferential scales. Some authors suggest that the semantic differential scale is probably the most reliable (Westbrook and Oliver 1981).

Michael Buys and Irwin Brown (2004) note that measuring user satisfaction with information systems has attracted widespread research attention, given that it is often used as an indicator of success. The Internet has allowed applications to be extended to the customers of an organization, where interaction can take place through a web site, typically from home or office. The focus of attention with such applications is customer satisfaction.
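The difference between the two scale types can be made concrete with a small sketch. This is purely illustrative and not drawn from any study cited here; the respondents, the four dimension names, and all ratings are invented for demonstration.

```python
# Illustrative sketch (invented data): scoring a multi-item
# satisfaction scale versus a single overall rating.
# All responses are on a 1-5 Likert scale.

single_item = [4, 5, 3, 4, 2]          # overall satisfaction only
multi_item = [                          # [support, ease, speed, value]
    [4, 5, 4, 3],
    [5, 5, 4, 5],
    [3, 2, 4, 3],
    [4, 4, 4, 3],
    [2, 3, 1, 2],
]

def scale_score(responses):
    """Average the dimension items into one composite score."""
    return sum(responses) / len(responses)

composites = [scale_score(r) for r in multi_item]
print(composites)

# The composite retains dimension detail a single item cannot:
# e.g. respondent 3 is middling overall but weak on ease of use.
ease_ratings = [r[1] for r in multi_item]
print(ease_ratings)
```

The composite tracks the single-item rating, but the underlying item responses additionally show which dimension drives a low score, which is exactly the diagnostic value the text attributes to multi-item measures.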
In their research, a 21-item, 7-factor instrument developed to measure customer satisfaction with web sites that market generic digital products and services was modified slightly, and then empirically tested and validated in the context of Internet banking specifically. A 19-item, 5-factor validated instrument emerged, the factors being Customer Support, Security, Ease of Use, Transactions and Payment, and Information Content and Innovation. The difference in the number of factors as compared to the generic instrument was attributed to the unique nature of Internet banking web sites. These and other findings are discussed in the paper, and their implications examined.

Michael Buys and Irwin Brown (2005) report that consumer Internet banking has been fairly successful in South Africa, with all major retail banks providing this service to customers. Approximately one million Internet users make use of this channel, the profile tending towards those with higher incomes occupying managerial and professional jobs. Recent media attention given to security breaches with Internet banking provided an opportunity to assess what impact this has had on perceptions of security across different cultural groups. In a study examining the cultural values of managers from different ethnic groups in South Africa, it was found that the groups differed mainly on the dimension of uncertainty avoidance. The study therefore posited that groups with higher scores for uncertainty avoidance would react more strongly to perceived security threats, and would therefore be less satisfied with security than those lower in uncertainty avoidance. To investigate this proposition, data gathered from a survey of postgraduate and MBA students at two leading business schools in South Africa was analysed. Respondents were surveyed on their banking habits, cultural values and satisfaction with Internet banking. The findings confirm the proposition: groups with higher uncertainty avoidance were less satisfied with security than groups with lower uncertainty avoidance. The implications of these findings are discussed.

Blake Ives, Margrethe H. Olson and Jack J. Baroudi (1983) critically review measures of user information satisfaction and select one for replication and extension. A survey of production managers is used to provide additional support for the instrument, eliminate scales that are psychometrically unsound, and develop a standard short form for use when only an overall assessment of information satisfaction is required and survey time is limited.

Consumer satisfaction with Internet shopping has been conceptualized in a variety of ways, and studies in this area remain broad and relatively fragmented. In view of this, Cheung and Lee (2005) propose a research framework that integrates both the end-user computing satisfaction literature and the service quality literature. This framework explicitly considers information quality, system quality, and service quality as the key dimensions of consumer satisfaction with Internet shopping, and its propositions serve as salient guidelines for researchers.

Tapan K. Panda and Nalini Prava Tripathy found that a significant outcome of the government policy of liberalisation in the industrial and financial sectors has been the development of new financial instruments, which are expected to impart greater competitiveness, flexibility and efficiency to the financial sector. The growth and development of various mutual fund products in the Indian capital market has proved to be one of the most catalytic instruments in generating momentous investment growth in the capital market. There has been substantial growth in the mutual fund market due to a high level of precision in the design and marketing of a variety of mutual fund products by banks and other financial institutions providing growth, liquidity and return. In this context, prioritization, preference building and close monitoring of mutual funds are essential for fund managers to make this the strongest and most preferred instrument in the Indian capital market in the coming years. With the decline in bank interest rates, frequent fluctuations in the secondary market and the inherent attitude of Indian small investors to avoid risk, it is important for fund managers and mutual fund product designers to combine the elements of liquidity, return and security to make mutual fund products the best possible alternative for small investors in the Indian market. The researchers studied the need expectations of small investors from different types of mutual funds available in the Indian market and identified the risk-return perception associated with the purchase of mutual funds. Sophisticated multivariate techniques were applied to identify the important characteristics considered by Indian investors in the purchase decision. The paper also suggests a product design for an optimum mutual fund and tracks the positioning gap available in the Indian mutual fund market; SPSS version 10 was used for data analysis.

Nigel Hill observes that many organisations fail to apply adequate rigour to their customer satisfaction research process and consequently produce misleading results. This is detrimental even if the results are used only as a guide for service improvement strategies, but could be very damaging if they contribute to strategic decisions. In the USA, some leading companies have developed profit chain models enabling them to forecast the effect of improving customer and/or employee satisfaction on financial performance. Since customer satisfaction measures usually occupy a pivotal place in such models, the reliability of the measures becomes critical. His article highlights the main requirements of an accurate and effective customer satisfaction measurement process.

Objectives of the Study

This study is based on the following objectives:

1. To develop and standardize a measure for customer satisfaction.
2. To find out the underlying dominant factors responsible for customer satisfaction towards mutual funds.
3. To open new vistas for future research.

RESEARCH METHODOLOGY

The Study

The present research is exploratory in nature, aimed at visualizing the underlying dominant factors responsible for customer satisfaction towards mutual funds.

Sample Design

The population comprised the customers of mutual funds in Gwalior, with individual respondents as the sampling elements. The sample size was 100 individual respondents, and a non-probability sampling technique was used for collecting the data.


Tools used for Data Collection

Self-designed questionnaires were used to collect the data from individual respondents.

Tools used for Data Analysis

The tools used for data analysis included:

1. Item-to-total correlation: to check the consistency of the questionnaire.
2. Reliability test: to check the reliability.
3. Factor analysis: to find out the underlying factors responsible for customer satisfaction.

RESULTS AND DISCUSSION

Consistency Measure: First, the consistency of all the items in the questionnaire was checked through item-to-total correlation. Under this, the correlation of every item with the total is measured, and the computed value is compared with the standard value (i.e. 0.2722); if the computed value is less than the standard value, the item is dropped and termed inconsistent. After calculating the item-to-total correlations, all 20 items were accepted and none was dropped (see Table 1).

Reliability Measure: The reliability test was carried out using Cronbach's Alpha, Spearman-Brown and Guttman Split-Half coefficients. The reliability measures obtained are:

Cronbach's Alpha: 0.893
Spearman-Brown (Equal Length): 0.840
Spearman-Brown (Unequal Length): 0.840
Guttman Split-Half: 0.838

A reliability value of more than 0.7 is considered good, and the values obtained by all the methods applied here are well above this standard, so the items in the questionnaire are highly reliable.
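The two checks described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the response matrix is invented (rows are respondents, columns are questionnaire items on a 1-5 scale), and only the cut-off value of 0.2722 is taken from the study.

```python
# Sketch of item-to-total correlation screening and Cronbach's alpha.
# The response data below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def item_total_correlations(rows):
    """Correlate each item column with the respondents' total scores."""
    totals = [sum(r) for r in rows]
    k = len(rows[0])
    return [pearson([r[i] for r in rows], totals) for i in range(k)]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

rows = [  # hypothetical responses: 6 respondents x 4 items
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
]

CUTOFF = 0.2722  # standard value used in the study
for i, r in enumerate(item_total_correlations(rows)):
    status = "Consistent" if r >= CUTOFF else "Dropped"
    print(f"item {i + 1}: r = {r:.3f} -> {status}")
print(f"Cronbach's alpha = {cronbach_alpha(rows):.3f}")
```

With the cut-off applied, any item whose correlation with the total falls below 0.2722 would be flagged for removal, mirroring the screening step reported in Table 1.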

Validity The face validity was checked and found to be high.

Factor Analysis The raw scores of 20 items were subjected to factor analysis to find out the factors that contribute towards customer satisfaction. After factor analysis, 6 components were identified (See Table 2).
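The extraction figures reported in Table 2 follow the usual principal-component arithmetic: with 20 standardized items the eigenvalues sum to 20, each factor explains (eigenvalue / 20) of the total variance, and, assuming the study followed the common eigenvalue-greater-than-one retention rule (Kaiser criterion), all six factors qualify. The sketch below simply verifies this with the eigenvalues from Table 2.

```python
# Eigenvalues reported in Table 2 of the study (20 items, so the
# total variance of the standardized data equals 20).

eigenvalues = {
    "Consistency & Role of Financial Intermediaries": 7.044,
    "Service Behaviour of Financial Intermediaries": 2.412,
    "Product Features": 1.906,
    "Investors' Expectations": 1.662,
    "Performance": 1.218,
    "Investors' Confidence": 1.060,
}
N_ITEMS = 20

# Kaiser criterion: retain factors with eigenvalue > 1.
retained = {name: ev for name, ev in eigenvalues.items() if ev > 1.0}
print(f"factors retained: {len(retained)}")  # 6, as reported

for name, ev in eigenvalues.items():
    pct = 100 * ev / N_ITEMS  # percent of total variance explained
    print(f"{name}: {pct:.2f}% of variance")

total_pct = 100 * sum(eigenvalues.values()) / N_ITEMS
print(f"cumulative variance explained: {total_pct:.2f}%")
```

The computed percentages reproduce the "% of Variance" column of Table 2 (e.g. 7.044 / 20 = 35.22%), and the six factors together account for about 76.5% of the total variance.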

Suggestions

1. This research targeted the population of Gwalior city only; the population covered can be increased.
2. This research was conducted on only 100 respondents; results may vary with a larger sample.
3. This research analysed only one variable; other variables can also be included.
4. Research on customer satisfaction can also be conducted in other sectors.
5. Companies should reduce the lock-in period of mutual funds.
6. Financial intermediaries should make their customers fully aware of their services.

CONCLUSION

This study examined the factors affecting customers' satisfaction towards mutual funds in Gwalior city. Questionnaires were filled in by customers of mutual funds from various parts of the city dealing with different financial intermediaries, and tests such as item-to-total correlation, validity, reliability and factor analysis showed that the data used in this research are valid and reliable. From the findings it was concluded that the main factor affecting satisfaction is the role of intermediaries, which the customers' responses show to be satisfactory. Customers' expectations of the products are high, and product performance that matches these expectations increases their confidence in mutual fund products.

References

Blake Ives, Margrethe H. Olson and Jack J. Baroudi (1983), The Measurement of User Information Satisfaction, Communications of the ACM, 26(10), 785-793.
Broadbent, D. (1980), Giving New Life to Old Products, Marketing, 17 (Sept), 37-39.
Christy M. K. Cheung and Matthew K. O. Lee (2005), Consumer Satisfaction with Internet Shopping: A Research Framework and Propositions for Future Research, Proceedings of the 7th International Conference on Electronic Commerce, Xi'an, China.
Dowdy, W. L. and Nikolchev, J. (1986), Can Industries Demature? Applying New Technologies to Mature Industries, Long Range Planning, 19(2), 38-49.
Garvin, D. A. (1984), What Does Product Quality Really Mean?, Sloan Management Review, Fall, 25-39.
Huomo, T., Mäkelin, M. and Vuoria, A. (1995), Visio 2000: Uuden vuosituhannen mahdollisuudet, M&V Research Oy, Espoo, 227 p. (in Finnish).
Jae-on Kim and Charles W. Mueller (1982), Factor Analysis: Statistical Methods and Practical Issues, Sage, California.
James H. Myers and Mark Alpert (1968), Determinant Buying Attitudes: Meaning and Measurement, Journal of Marketing, 32 (October), 65-68.
Kothari, C. R. (2004), Research Methodology: Methods and Techniques, 2nd revised edition, New Age International Publishers.
Kotler, P. (1997), Marketing Management: Analysis, Planning, Implementation and Control, 9th ed., Prentice Hall International, New Jersey, 789 p.
Kotler, P. (2000), Marketing Management: Analysis, Planning, Implementation and Control, 10th ed., Prentice Hall, Englewood Cliffs, New Jersey.
Lillrank, P. (1997), The Quality of Information in Business Processes, HUT Industrial Management and Work and Organizational Psychology, Working Paper No. 10, Espoo, 34 p.
Meuter, M. L., Ostrom, A. L., Roundtree, R. I. and Bitner, M. J. (2000), Self-Service Technologies: Understanding Customer Satisfaction with Technology-Based Service Encounters, Journal of Marketing, 64, 50-64.
Michael Buys and Irwin Brown (2004), Customer Satisfaction with Internet Banking Web Sites: An Empirical Test and Validation of a Measuring Instrument, White River, South Africa, 44-52.
Michael Buys and Irwin Brown (2005), A Cross-Cultural Investigation into Customer Satisfaction with Internet Banking Security, White River, South Africa, 200-207.
Naumann, E. and Giel, K. (1995), Customer Satisfaction Measurement and Management, ASQ Quality Press, Wisconsin, 457 p.
Sen, S. and Morwitz, V. G. (1996), Is It Better to Have Loved and Lost than Never to Have Loved at All? The Effects of Changes in Product Features over Time, Marketing Letters, 7(3), 225-235.
Wang, Y. and Tang, T. (2001), An Instrument for Measuring Customer Satisfaction Towards Web Sites that Market Digital Products and Services, Journal of Electronic Commerce Research, 2(3), 1-28.
Zviran, M. and Erlich, Z. (2003), Measuring IS User Satisfaction, Communications of the Association for Information Systems, 12, 81-103.


Annexure

Table 1: Item to Total Correlation for Customer Satisfaction Items

Item | Computed Correlation Value | Consistency | Accepted/Dropped
Awareness of Mutual Funds Features | 0.539685 | Consistent | Accepted
Performance of MFs | 0.639275 | Consistent | Accepted
Capital Appreciation | 0.520477 | Consistent | Accepted
Safety of Funds | 0.543012 | Consistent | Accepted
Performance Guarantee | 0.607650 | Consistent | Accepted
Exclusively for Small Investors | 0.622458 | Consistent | Accepted
Liquidity | 0.581480 | Consistent | Accepted
Delivery Schedule | 0.659287 | Consistent | Accepted
Regular Income | 0.719727 | Consistent | Accepted
Tax Benefits | 0.382069 | Consistent | Accepted
Lock in Period | 0.544778 | Consistent | Accepted
Transparency | 0.471849 | Consistent | Accepted
Emergency Need Fulfillment | 0.598806 | Consistent | Accepted
Assured Return | 0.404025 | Consistent | Accepted
Awareness of All Services of Financial Intermediaries | 0.512336 | Consistent | Accepted
Recommendation through Financial Intermediaries | 0.684829 | Consistent | Accepted
Advertisement through Financial Intermediaries | 0.687793 | Consistent | Accepted
Benefit Awareness Provided by Financial Intermediaries | 0.768503 | Consistent | Accepted
Knowledge Based Investment Advisory Services | 0.545189 | Consistent | Accepted
Service Behaviour of Financial Intermediaries | 0.621772 | Consistent | Accepted

Table 2: Results of Principal Component Analysis

Factor 1: Consistency & Role of Financial Intermediaries (eigenvalue 7.044, 35.220% of variance)
    Delivery Schedule (.790)
    Advertisement through Financial Intermediaries (.779)
    Liquidity (.679)
    Benefit Awareness Provided by Financial Intermediaries (.558)

Factor 2: Service Behaviour of Financial Intermediaries (eigenvalue 2.412, 12.060% of variance)
    Knowledge Based Investment Advisory Services by Financial Intermediaries (.820)
    Awareness of All Services of Financial Intermediaries (.737)
    Recommendation through Financial Intermediaries (.733)
    Service Behaviour of Financial Intermediaries (.589)

Factor 3: Product Features (eigenvalue 1.906, 9.529% of variance)
    Performance of MFs (.765)
    Awareness of Mutual Funds Features (.721)
    Lock in Period (.629)
    Assured Return (.515)

Factor 4: Investors' Expectations (eigenvalue 1.662, 8.310% of variance)
    Emergency Need Fulfillment (.716)
    Transparency (.681)
    Exclusively for Small Investors (.663)
    Regular Income (.627)

Factor 5: Performance (eigenvalue 1.218, 6.091% of variance)
    Safety of Funds (.865)
    Performance Guarantee (.858)

Factor 6: Investors' Confidence (eigenvalue 1.060, 5.300% of variance)
    Tax Benefits (.875)
    Capital Appreciation (.737)


28

Customer Satisfaction as Key Driver of Excellence in Banking Organisations Shweta Saraswat

In the present scenario of competitive banking, excellence in customer service is the most important tool for sustained business growth. For a service organization, the task of customer satisfaction is a challenging one. Meeting the legitimate aspirations of its customers enables a bank to maintain its image, create confidence and attract funds at comparatively low cost in a competitive environment. Ensuring improvement in the customer service rendered by banks has been the constant endeavour of the RBI, which has set up various committees that, from time to time, suggested measures to improve the customer service systems of the public sector commercial banks of India. This chapter studies significant steps taken by the Reserve Bank of India and other organizations relevant to the service quality of retail banking.

INTRODUCTION

Customer complaints are part and parcel of the business life of any corporate entity. This is all the more so for banks, because banks are service organizations. The concepts of service quality and customer satisfaction acquired relevance in the context of the recommendations of various committees constituted by the GOI and the RBI. Financial services are inherently intangible and high on experience and credence qualities. In order to promote them effectively, a service provider must first identify the dimensions used by consumers to evaluate the service quality of banks before they become customers. According to the Reserve Bank of India's latest report, Trend and Progress of Banking in India, public sector banks rule the roost in customer satisfaction; one, however, needs to look at all the aspects of customer satisfaction. The first part of this chapter deals with the steps taken by the RBI in this regard, and the second part with the various dimensions of customer satisfaction with reference to private and public banks.


MEASURES TAKEN BY RBI

Most of us are aware that the RBI has a fairly diverse functional mandate, and one of the very important aspects of its operations in the banking sector has been the protection of the interests of bank depositors. This responsibility was assigned to the RBI in an era long before concepts like customer service, customer experience, customer satisfaction, customer delight and customer centricity found an entry into the lexicon of the banking or business world and became rather fashionable. The Reserve Bank's enduring and abiding concern for the quality of services extended to bank customers has been reflected in its ongoing regulatory initiatives taken, over the decades, from time to time.

The issue of services rendered by banks to the common person dates back to the 1970s, when the R.K. Talwar Committee was appointed in 1975, followed by the Goiporia Committee constituted in 1990 and the CPPAPS (S.S. Tarapore Committee) in 2003. Wide-ranging financial sector reforms were also initiated after the report of the Committee on Financial System (the first Narasimham Committee, 1991). Financial reforms were expected to spur competition in the banking sector, through deregulation and the entry of new private sector banks, which, in turn, was expected to lead to the provision of high-quality customer service to meet the long-standing aspirations of bank customers. However, there has been an increasing realization, both in India and in several other countries, that the forces of competition alone do not ensure fair treatment of the customer or adequate quality of customer service, at a justifiable price, determined in a transparent manner. The regulators felt a need to institutionalize mechanisms for securing better customer service for the public at large. Some of the initiatives of the RBI to put in place the requisite institutional mechanisms are discussed briefly hereunder:

Banking Ombudsman Scheme

The Reserve Bank first introduced the Banking Ombudsman Scheme in 1995 to provide an expeditious and inexpensive forum to bank customers for the resolution of their complaints relating to banking services. The Scheme was revised in 2002, mainly to cover Regional Rural Banks and to permit a review of the Banking Ombudsman's awards against the banks by the Reserve Bank. The RBI recently announced the revised Banking Ombudsman Scheme, 2006, effective from January 1, 2006, which has a much wider scope and includes several new areas of customer complaints. The Scheme is applicable to all commercial banks, regional rural banks and scheduled primary cooperative banks functioning in India and provides a forum for bank customers to seek redressal of their most common complaints against the banks.

Customer Service Set-up in the Banks

The RBI had appointed the Committee on Procedures and Performance Audit of Public Services (CPPAPS, the Tarapore Committee) in December 2003 to suggest measures for improving the quality of customer service rendered by banks. Based on the recommendations of the CPPAPS, banks were advised, among other things, to put in place an institutional machinery comprising: (a) a Customer Services Committee of the Board including, as invitees, experts and representatives of customers, to enable the bank to formulate policies and assess compliance internally; (b) a Standing Committee of Executives on Customer Service, in place of the earlier ad hoc committees, to periodically review the policies and procedures and the working of the bank's own grievance redressal machinery; and (c) a nodal department/official for customer service at the Head Office and each Controlling Office, whom customers with grievances could approach in the first instance, and with whom the Banking Ombudsman (BO) and RBI could liaise.

Customer Service Department in the RBI

A new department called the Customer Service Department was created in the RBI on July 1, 2006, by regrouping various customer-service-related activities handled by different departments of the RBI under a single department. The functions of the department encompass a variety of activities relating to customer service and grievance redressal in the RBI and the banking sector, including aspects relating to the Banking Ombudsman Scheme and the Banking Codes and Standards Board of India. Such an organizational dispensation has enabled more focused policy attention to the customer service dimension of the banking sector.

Banking Codes and Standards Board of India (BCSBI)

Recognizing an institutional gap in measuring the performance of banks against codes and standards based on established best practices, the RBI took the initiative of setting up the Banking Codes and Standards Board of India (BCSBI). It is an autonomous and independent body, adopting the stance of a self-regulatory organization. The dispensation of the BCSBI provides for the voluntary registration of banks with the Board as its members, committing to provide customer services as per the agreed standards and codes. The Board, in turn, monitors and assesses compliance with the codes and standards to which the banks have agreed. In July 2006 the Board released a Code of Bank's Commitment to Customers to provide a framework for a minimum standard of banking services.

Fair Practices Codes for Lenders

The RBI, apart from safeguarding the interests of bank depositors, has also been concerned to ensure that the borrowing community gets a fair deal from the bankers. The Reserve Bank accordingly formulated a Fair Practices Code for Lenders, which was communicated to the banks in 2003 to protect the rightful interests of borrowers and guard against undue harassment by lenders. The Code was revised in March 2007 to include the requirement that banks should provide borrowers with comprehensive details regarding their loans, as well as the reasons for rejection of the loan applications of prospective borrowers, regardless of the amount or type of loan involved.

Transparency and Reasonableness of Bank Charges In order to ensure fair practices in banking services, RBI has made it obligatory for the banks to display and update on an ongoing basis, in their offices/branches as also on the home page of their websites, the details of various service charges and fees, in a format approved by the Reserve Bank, to provide for better comparability.

Customer Service and Financial Inclusion

The customer, broadly defined, means not only the existing clientele of the banks but also the potential users of banking services who could enter the domain of banking some time in the future. In this context, therefore, the role of the banks does not end with serving their existing customers. They also need to ensure that the large part of the under-privileged Indian population that does not have access to a bank account and other banking services is brought within the fold of the formal banking sector, so that at least basic banking services are made available equitably to all sections of society.

Bank Customers and Financial Education

In the context of the increasing focus on financial inclusion, and the past episodes of financial distress observed in certain segments of the farming community, a need has been felt to provide a mechanism for improving financial literacy and the level of financial education among the consumers of banking services. Such education has become an imperative in the current era of financial deregulation, which has led to the availability of a variety of complex financial products in the markets. The banks have a role to play in providing financial education to their customers and also have a beneficial interest in providing such education, as timely counselling of borrowers can have positive implications for the asset quality of the banks.

It can be drawn from the above discussion that the RBI, as the banking regulator, has taken wide-ranging proactive measures aimed at securing better customer service for bank customers and has sought to improve financial inclusion and the financial literacy of the under-privileged in our society. Having discussed the RBI's role in customer satisfaction, we can now study private and public banks with reference to their contribution to the satisfaction of their customers. This can be explained through the various dimensions of customer satisfaction.

DIMENSIONS OF CUSTOMER SATISFACTION

Service quality can only be assessed during and after consumption, whereas credence qualities are virtually impossible to evaluate even after consumption. Search qualities, on the other hand, include aspects of a product or service that consumers can evaluate before making the purchase. Services tend to be inherently low on search quality dimensions (Lovelock, 1996; Stafford, 1996). Nevertheless, financial services providers struggle to distinguish themselves from the competition. In sum, investigating service quality in the financial services industry is difficult as well as interesting. Reviewing the contributions of scholars such as Lovelock (1996) and Parasuraman et al. (1988, 1994), some of the major areas that can be identified as the basis for evaluating the financial services industry are reliability, responsiveness, assurance and tangibles. These major areas can be further subdivided into related dimensions.

Reliability

This much sought-after dimension takes into consideration the following points:

(i) Being sincere in solving problems
(ii) Providing services at promised times
(iii) Promising to do something on time
(iv) Keeping records correctly
(v) Performing the service right the first time


Responsiveness

(i) Telling customers exactly what they do
(ii) Prompt service to customers
(iii) Employees' willingness to help
(iv) Employees obliging the requests of customers

Assurance

(i) Employees are trustworthy
(ii) Knowledgeable employees
(iii) Consistent courtesy
(iv) Feeling safe in bank transactions

Tangibles

(i) Up-to-date equipment
(ii) Physical facilities
(iii) Neatness of employees
(iv) Communication material

Although private sector banks reveal a better scorecard on these dimensions in comparison to public sector banks, public sector banks hold a commanding position in a few areas: telling customers exactly what they do, trustworthiness of employees, and safety in bank transactions. Proper record keeping is quite prevalent in public banks, though some private banks such as ICICI and IDBI also maintain a high order of efficiency. As far as the knowledge level of employees is concerned, customers are indifferent between the two types of banks.
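A dimension-by-dimension comparison of this kind can be sketched as follows. This is a hypothetical illustration, not the chapter's data: the item names are paraphrased from the four lists above, and every rating is invented to show the scoring mechanics only.

```python
# Hypothetical sketch: average each dimension's item ratings (1-5
# scale) into a per-bank dimension score. All ratings are invented.

DIMENSIONS = {
    "reliability": ["solves problems sincerely", "service at promised time",
                    "keeps records correctly"],
    "responsiveness": ["tells customers exactly what they do",
                       "prompt service"],
    "assurance": ["trustworthy employees", "feel safe in transactions"],
    "tangibles": ["up-to-date equipment", "physical facilities"],
}

# ratings[bank][item] = mean customer rating for that item (invented)
ratings = {
    "public bank": {
        "solves problems sincerely": 3.1, "service at promised time": 2.9,
        "keeps records correctly": 4.2,
        "tells customers exactly what they do": 4.0, "prompt service": 2.8,
        "trustworthy employees": 4.3, "feel safe in transactions": 4.4,
        "up-to-date equipment": 2.7, "physical facilities": 3.0,
    },
    "private bank": {
        "solves problems sincerely": 4.0, "service at promised time": 4.1,
        "keeps records correctly": 4.0,
        "tells customers exactly what they do": 3.6, "prompt service": 4.2,
        "trustworthy employees": 3.8, "feel safe in transactions": 3.9,
        "up-to-date equipment": 4.4, "physical facilities": 4.3,
    },
}

def dimension_scores(bank_ratings):
    """Average each dimension's item ratings into one score."""
    return {dim: sum(bank_ratings[i] for i in items) / len(items)
            for dim, items in DIMENSIONS.items()}

for bank, r in ratings.items():
    print(bank, {d: round(s, 2) for d, s in dimension_scores(r).items()})
```

With these invented numbers the private bank leads on tangibles and reliability while the public bank leads on assurance, echoing the mixed scorecard described in the text.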

SUGGESTED MEASURES

From the foregoing it can be said that, in spite of the RBI's continuous and significant contribution to the field of customer satisfaction through the development of mechanisms and the appointment of various committees, public sector banks have not done well. The reasons can be attributed to the lack of an objective-oriented approach, improper accountability and organization structure, as well as a lack of motivation and interest among public sector employees. Some of the steps that could prove effective are:

Broader perspective of the term "customer"

An adage goes that no one can deliver better than what one has got. This proves correct in the case of the employee-customer relationship in banks. The term customer, in its broader sense, covers both internal and external customers; that is, only a fairly treated employee (internal customer) can treat external customers well.


Management by Objectives

Implementation of this approach will help public banks identify key result areas (KRAs) and thus yield better results through the development of a proper organizational structure and the accountability of employees.

Training & Development

Regular training and development of employees, addressing issues such as human resource management and customer relationship management, is desirable.

Awards for outstanding performance

Such announcements could motivate employees to keep improving.

Implementation of recommendations

Standing committees, rather than ad hoc committees, should be established at the level of individual banks to monitor the implementation of the recommendations made by the various committees.

Infrastructural Arrangements

People standing in long queues should feel comfortable while getting their work done. Centralized air conditioning, proper drinking water and sanitation facilities, and "May I Help You" counters are some examples.

CONCLUSION

The suggestions above are many in number, but their implementation requires an altogether different approach across the whole banking system. One view holds that too much customer orientation in the provision of banking services could impose a heavy cost on banks, which are, after all, commercial organizations engaged in the pursuit of profit. It can also be argued that the customer-centric regulatory guidelines stipulated by the RBI and other regulators entail a compliance-cost burden for banks, albeit in the ultimate interest of the customer. Ultimately, however, this cost is a long-term investment in nurturing a healthy and robust bank-customer relationship. The challenge for bank management, therefore, is to devise innovative and cost-effective means of delivering banking services efficiently and to evolve remunerative business models, fully leveraging modern technology, so that there is an optimal trade-off between the bottom line and the degree of customer centricity they choose to deploy.

References

Lovelock, C.H. (1996), Services Marketing, Prentice Hall, Upper Saddle River.

Meuter, M.L., Ostrom, A.L., Roundtree, R.I. and Bitner, M.J. (2000), "Self-Service Technologies: Understanding Customer Satisfaction with Technology-Based Service Encounters", Journal of Marketing, 64(3), pp. 50-64.

Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1994), "Reassessment of Expectations as a Comparison Standard in Measuring Service Quality: Implications for Further Research", Journal of Marketing, 58 (January), pp. 111-124.

Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality", Journal of Retailing, 64(1), pp. 12-40.

Stafford, M.R. (1996), "Demographic Discriminators of Service Quality in the Banking Industry", Journal of Services Marketing, 10(4), pp. 6-22.


29

Reinforcement of Green Marketing as a Sustainable Marketing & Communications Tool & Practice

Tanu Narang, Snehal Mistry

Green marketing has evolved over a period of time. Business is increasingly recognizing the competitive advantages and business opportunities to be gained from eco-sustainability and green marketing. Green marketing involves developing and promoting products and services that satisfy customers' wants and needs for quality, performance, affordable price and convenience, without having a detrimental impact on the environment. As more companies (and their advertising, marketing and public relations partners) engage in green marketing, and as public awareness and concern over environmental issues rise, the challenges facing green marketers will become increasingly vexing and complex. People generally want to do the right thing, so the challenge and opportunity for the green marketer is to make it easy for them to do so. When all else is equal (quality, price, performance and availability), environmental benefit will most likely tip the balance in favor of a product. Making a green product or service is one thing; getting people to buy it is another. That requires marketing, an age-old activity that is equal parts art and science. In the world of "green," marketing faces a unique challenge: there is no settled standard for what it means to be a green product or a green company. That creates the opportunity for just about anything to be marketed as green, from simple packaging changes to products and services that radically reduce materials, energy and waste.

INTRODUCTION

Green marketing has evolved over a period of time. According to Peattie (2001), its evolution has three phases. The first phase, "Ecological" green marketing, saw marketing activities concerned with helping to address environmental problems and provide remedies for them. The second phase, "Environmental" green marketing, shifted the focus to clean technology, involving the design of innovative new products that take care of pollution and waste issues. The third phase, "Sustainable" green marketing, came into prominence in the late 1990s and early 2000s. Environmental sustainability is not simply a matter of compliance or risk management.


Business is increasingly recognizing the competitive advantages and business opportunities to be gained from eco-sustainability and green marketing. Green marketing involves developing and promoting products and services that satisfy customers' wants and needs for quality, performance, affordable pricing and convenience without having a detrimental impact on the environment. As more companies (and their advertising, marketing and public relations partners) engage in green marketing, and as public awareness and concern over environmental issues rise, the challenges facing green marketers will become increasingly vexing and complex. Making a green product or service is one thing; getting people to buy it is another. That requires marketing, an age-old activity that is equal parts art and science. In the world of "green," marketing faces a unique challenge: there is no settled standard for what it means to be a green product or a green company. That creates the opportunity for just about anything to be marketed as green, from simple packaging changes to products and services that radically reduce materials, energy and waste.

BENEFITS OF GREEN MARKETING

Worldwide evidence indicates that people are concerned about the environment and are changing their behavior accordingly. As a result, there is a growing market for sustainable and socially responsible products and services, and the types of businesses that exist, the products they produce and their approaches to marketing are changing. People generally want to do the right thing, so the challenge and opportunity for the green marketer is to make it easy for them to do so. When all else is equal (quality, price, performance and availability), environmental benefit will most likely tip the balance in favor of a product. As resources are limited and human wants are unlimited, it is important for marketers to utilize resources efficiently, without waste, while still achieving the organization's objectives; green marketing is therefore inevitable. There is growing interest among consumers all over the world in protecting the environment, and green marketing speaks to this growing market for sustainable and socially responsible products and services. There are basically five reasons for a marketer to adopt green marketing:

- Opportunities or competitive advantage
- Corporate social responsibility (CSR)
- Government pressure
- Competitive pressure
- Cost or profit issues

Environmental marketing is more complex than conventional marketing. It serves two key objectives:

1. To develop products that have a minimal impact on the environment and combine environmental compatibility with convenience.
2. To project environmental sensitivity, in terms of both product attributes and the manufacturer's track record of environmental achievement.


THE GREEN CONSUMER MARKET SEGMENT

Most people agree that exercise is important; in the same way, they agree that a sustainable, clean environment is important. But only some walk their talk. To better understand which consumers buy green, and why, we have to look beyond what consumers say they do and examine what they actually do.

First, there is no question that consumers are changing the way they buy. A variety of societal factors are driving consumers to increasingly seek out unique and differentiated products that fit their lifestyle. Consumer purchases of green or sustainable products are motivated not just by the products themselves, but by the values they represent. For some consumers these values do not translate into actual behavior, while for others they become a way of life (Wagner, 1997). In today's market, consumer choice has increased manifold with the widening range of models, and choosing a product that fits one's value propositions has become all the more important. Choice making has become a very important task for buyers, but it often does not end there: there are additional things they want to know before and after they buy a product.

Today's marketplace is driven by the emergence of the "Green Consumer," or "Environmentalism," and it will become even more responsive to products and services promising environmental responsibility well into the 21st century. Today's consumers are more concerned than ever about the environmental impact of the products they buy. Pragmatic consumers purchase those products and packages that can be recycled or otherwise safely disposed of in their communities. As a result, the number of industries under fire from environmentalists has grown very rapidly, and green consumerism has helped to spur significant shifts in the way some industries view the environmental challenge.
Although green consumers express their environmental concerns in individual ways, they are motivated by universal needs. These needs translate into new purchasing strategies with implications for the ways products are developed and marketed. Terms such as "recyclable," "biodegradable," "environmentally friendly," "sustainable," "compostable" and "biobased" are the latest buzzwords that green consumers look for when they buy products (Namagiri, 2007). The broad scope of these buzzwords suggests that green consumers scrutinize products at every phase of their life cycle, from raw material procurement, manufacturing and production straight through to product reuse, repair, recycling and eventual disposal. While in-use attributes continue to be of primary importance, environmental shopping agendas now increasingly encompass factors consumers can't feel or see: they want to know how raw materials are procured and where they come from, how food is grown, and what the potential impact on the environment is once products land in the trash bin.

Successful green marketers no longer view consumers merely as people with an appetite for material goods but as human beings concerned about the condition of the world around them. The corporations that excel in green marketing are those that are basically proactive in nature. The success stories of companies from developed countries, such as P&G, Compaq, McDonald's, Pepsi, Stonyfield, Toyota, 3M and Philips, have set the ball rolling and paved a new way of doing business for the conscious and demanding green consumer. Because of this transformation of consumers, companies have shifted their priorities from conventional marketing to what is called "Green Marketing."


In fact, some researchers have gone to the extent of profiling green product purchasers, to understand their demographic composition and market behavior, and then marketing products according to these green segments' preferences. Such organizations consider themselves interdependent with nature's processes. Externally, they join hands with environmental stakeholders in cooperative, positive alliances, and they work hand in hand with suppliers and retailers to manage environmental issues throughout the value chain. Internally, cross-functional teams convene to find the best possible holistic solutions to environmental challenges. These companies essentially take a long-term rather than a short-term orientation, with the intention not only of making profits but also of contributing to society through a cause-related marketing approach.

Challenges for Green Marketers (Environmental Marketers)

There are three main challenges for green marketers:

1. The role of incentives and structural factors
2. Information disclosure strategies
3. Greening products versus greening firms

Being environmentally responsible is important, but today's awakening consumers are looking for more. They are looking at how a brand addresses all three pillars of sustainability: environmental impact, social impact and economic feasibility. Companies should not make any sustainability claims until they can back them up completely. Stating the facts surrounding a company's sustainability efforts allows marketers to talk about them without coming across as self-congratulatory (Wasik, 1996). Other challenges include:

1. Green products require renewable and recyclable materials, which are costly.
2. Green technology requires huge investment in R&D.
3. Water treatment technology is too costly.
4. The majority of people are not aware of green products and their uses.
5. The majority of consumers are not willing to pay a premium for green products.

STRATEGIES FOR GREEN MARKETING

Credible Endorsement

Communicating simply, and through sources that people trust, holds the key. Natural products have the potential to be associated with individualism; the only thing required is to pick the right ambassador for them, one with not just the usual glamour and fame but blazing individualism and the self-belief to go against the crowd. A credible third-party endorsement can take the form of a partnership with a respected non-profit organization or NGO that shares the company's values, allowing the partner to tell the world what you are doing together.

Synergy of Sustainability with Brand

A company's sustainability initiatives should feel like a natural extension of its brand. The brand should support a sustainability (environmental) campaign, along with


finding synergies between the brand and the cause. This means involving people on both sides, i.e., involving both the company's consumers and its employees in its sustainability initiatives. Sustainability initiatives and sustainable marketing have to be real and authentic (Polonsky and Mintu-Wimsatt, eds., 1995). They need to be embraced by everyone involved with the brand, from the person who answers the phone to the CEO, and should be part of the brand's DNA.

Communication Strategy

The main thrust here is promoting green credentials and achievements; publicizing stories of the company's and employees' green initiatives; and entering environmental awards programs to profile environmental credentials to customers and stakeholders. The communication should present a corporate image of environmental responsibility.

GREEN INITIATIVES BY COMPANIES

Many companies in the financial industry now provide electronic statements by email. E-marketing is rapidly replacing more traditional marketing methods, and printed materials can now be produced using recycled materials and efficient processes such as waterless printing. Retailers, for example, are recognizing the value of alliances with other companies, environmental groups and research organizations when promoting their environmental commitment; to reduce the use of plastic bags and promote their green commitment, some retailers sell reusable shopping bags. Some brands, such as Santoor, Khadi and Margo, have started to scratch the surface of being natural, but they still have to find their space under the sun; they can get there by stepping more fully into the bright side of naturals. Other initiatives include McDonald's napkins and bags made of recycled paper; Coca-Cola pumping syrup directly from tanks instead of plastic containers, saving 68 million pounds per year; the Badarpur Thermal Power Station of NTPC in Delhi devising ways to utilize coal ash, which has been a major source of air and water pollution; and the Barauni refinery of IOC taking steps to restrict air and water pollutants.

CONCLUSION

The marketing industry can "walk the talk" and become the new corporate champion of the environment, but only successful green marketers will reap the rewards of healthy profits and improved shareholder value, as well as help make the world a better place in the future.

References

Peattie, K. (2001), "Golden Goose or Wild Goose? The Hunt for the Green Consumer", Business Strategy and the Environment, 10: 187-199.

Wagner, A. (1997), Understanding Green Consumer Behavior: A Qualitative Cognitive Approach, Routledge, London.

Namagiri, Rajeswari (2007), "Are You a Green Consumer?", The Hindu, September 21.

Wasik, John F. (1996), Green Marketing and Management: A Global Perspective, Blackwell Publishers.

Polonsky, Michael J. and Alma T. Mintu-Wimsatt, eds. (1995), Environmental Marketing: Strategies, Practice, Theory and Research, The Haworth Press, New York.

Roberts, James A. (1996), "Green Consumers in the 1990s: Profile and Implications for Advertising", Journal of Business Research, 36(3), pp. 217-231.

III. HUMAN RESOURCE MANAGEMENT

30

Knowledge Management: A Strategic Tool for HRM

Alpana Trehan, Shine David, Saurabh Mukherjee

Knowledge Management (KM) is about developing, sharing and applying knowledge within the organization to gain and sustain a competitive advantage (Petersen and Poulfelt, 2002). Scholars have argued recently that knowledge is dependent on people and that HRM issues such as recruitment and selection, education and development, performance management, pay and reward, and the creation of a learning culture are vital for managing knowledge within firms. The term "Knowledge Management" is used loosely to refer to a broad collection of organizational practices and approaches related to generating, capturing and disseminating knowledge relevant to the organization's business (World Bank, 1998). Some see knowledge as a commodity like any other that can be stored and made independent of time and place, while others see knowledge as social in nature and highly dependent on context.

INTRODUCTION

Knowledge management (KM) is about developing, sharing and applying knowledge within the organization to gain and sustain a competitive advantage (Petersen and Poulfelt, 2002). How, then, is human resource management (HRM) related to knowledge management? Scholars have argued recently that knowledge is dependent on people and that HRM issues such as recruitment and selection, education and development, performance management, pay and reward, and the creation of a learning culture are vital for managing knowledge within firms (Evans, 2003; Carter and Scarbrough, 2001; Currie and Kerrin, 2003; Hunter et al., 2002; Robertson and Hammersley, 2000). Stephen Little, Paul Quintas and Tim Ray go as far as to trace the origin of KM to changes in HRM practices: "One of the key factors in the growth of interest in knowledge management in the 1990s was the rediscovery that employees have skills and knowledge that are not available to (or 'captured' by) the organization. It is perhaps no coincidence that this rediscovery of the central importance of people as possessors of knowledge vital to the organization followed an intense period of corporate downsizing, outsourcing and staff redundancies in the West in the 1980s" (2002: 299). The aim of this


paper is, first, to analyze the impact that HRM practices such as strategy, selection and hiring, training, performance management and remuneration have on the creation and distribution of knowledge within firms. Second, the paper attempts to assess whether or not knowledge management requires a particular human resource strategy.

Knowledge Management

The popularity of KM has increased rapidly, especially after 1996, and it has become a central topic of management philosophy and a management tool. This popularity is reflected in the growing number of articles and books on the topic. In 1997, the Journal of Knowledge Management and Knowledge and Process Management were introduced, and the Journal of Intellectual Capital followed in 2000 (Petersen and Poulfelt, 2002). Many organizations have also introduced knowledge management programmes.

Scarbrough and Swan (2001) argue that the rise and growth of KM is one of the managerial responses to the empirical trends associated with globalization and post-industrialism. These trends include the growth of knowledge-worker occupations and the technological advances created by information and communication technology (ICT). Little, Quintas and Ray (2002) point out that outsourcing and staff redundancies made organizations vulnerable regarding knowledge of core processes. Kluge et al. (2001) argued that the value of knowledge tends to perish quickly over time and that companies need to speed up innovation and enhance creativity and learning. Finally, Daft (2001) stresses the shift in the environment and markets of organizations: ever more organizations have recently been transformed by the shift from stable to unstable environments. Accordingly, business uncertainty has escalated, with more external elements to consider and frequent, unpredictable changes. A growing number of organizations have consequently adopted team working, organic structures and knowledge-centric cultures.

The term "Knowledge Management" is used loosely to refer to a broad collection of organizational practices and approaches related to generating, capturing and disseminating knowledge relevant to the organization's business (World Bank, 1998).
Some see knowledge as a commodity like any other that can be stored and made independent of time and place, while others see knowledge as social in nature and highly dependent on context. Of particular importance is the need to separate the concepts of data, information, tacit knowledge and explicit knowledge (Daft, 2001; Hunter et al., 2002). Data can be viewed either as factual raw material or as signals with no meaning. Information is data related to other data; it has meaning and is refined into structured or functional forms within a system, for example a client database or a directory. The most fundamental and common classification of organizational knowledge is along the explicit-tacit dimension.

Explicit Knowledge

In this classification, explicit knowledge is considered to be formal and objective, and can be expressed unambiguously in words, numbers and specifications. Hence, it can be transferred via formal and systematic methods in the form of official statements, rules and procedures, and so is easy to codify.

Tacit Knowledge

Tacit knowledge, by contrast, is subjective, situational and intimately tied to the knower's experience. Thus, it is difficult to formalize, document and communicate to others. Insights,


intuition, beliefs, personal skills and craft, and the use of rules of thumb to solve a complex problem are examples of tacit knowledge (Daft, 2001; Hunter et al., 2002; Chua, 2002). The two categories are closely interlinked, so a bipolar map is difficult to draw in practice: to understand a written document (explicit knowledge) completely often requires a great deal of experience (tacit knowledge). "A sophisticated recipe is meaningless to someone who has never stood in a kitchen, and legal text can be all but incomprehensible without some legal training" (Kluge, 2001: 10). Given the different nature of explicit and tacit knowledge, the knowledge management process varies for the two types of knowledge (see Figure 1). Lynn Markus (2001) sets out to improve the use of ICT in knowledge management, and in particular in knowledge re-use; her analysis therefore deals entirely with explicit knowledge. She takes knowledge creation (as in research or new product development) for granted, but divides the knowledge management process into the following stages: capturing or documenting knowledge, packaging knowledge, distributing knowledge (providing people with access to it) and re-using knowledge.

Figure 1: Explicit and tacit knowledge management processes

Capturing or documenting knowledge can occur in at least four ways, according to Markus. First, documenting can be a passive by-product of the work process of virtual teams or communities of practice, which automatically generate archives of their informal electronic communications that can be searched later. Second, it can occur within a structure such as that provided by facilitators using brainstorming techniques, perhaps mediated by the use of electronic meeting systems. Third, documenting can involve creating structured records as part of a deliberate, before-the-fact knowledge re-use strategy. Finally, it can involve a deliberate, after-the-fact strategy for later re-use, such as learning histories, expert help files or the creation of a data warehouse. Packaging knowledge is the process of culling, cleaning and polishing, structuring, formatting or indexing documents against a classification scheme. Markus argues that knowledge distribution can be passive, such as sending mass mail or newsletters, or establishing a notice board; active distribution involves after-action reviews, selective knowledge pushing and specialized conferences. Finally, Markus divides using knowledge into recall (that the information has been stored, in what location and under which classification), recognition (that the information meets the user's needs) and actually applying the knowledge.
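Markus's explicit-knowledge stages can be made concrete with a toy sketch. Everything below (the `KnowledgeBase` class, its field names and the sample records) is a hypothetical illustration of the capture, package and recall stages, not a real KM system:

```python
# A toy sketch of Markus's explicit-knowledge stages:
# capture -> package (index against a classification scheme) -> re-use (recall).
# The class and all sample data are hypothetical illustrations.

class KnowledgeBase:
    def __init__(self):
        self._docs = []

    def capture(self, text, source):
        """Capture/document: store raw knowledge with its provenance."""
        self._docs.append({"text": text, "source": source, "tags": []})

    def package(self, classify):
        """Package: index each document against a classification scheme."""
        for doc in self._docs:
            doc["tags"] = classify(doc["text"])

    def recall(self, tag):
        """Re-use (recall): retrieve documents filed under a classification."""
        return [d["text"] for d in self._docs if tag in d["tags"]]

kb = KnowledgeBase()
kb.capture("Lessons learned from the Q3 product launch", source="after-action review")
kb.capture("Checklist for vendor onboarding", source="project archive")

# A trivial keyword classifier standing in for a real classification scheme:
kb.package(lambda text: ["launch"] if "launch" in text else ["process"])

print(kb.recall("launch"))  # -> ['Lessons learned from the Q3 product launch']
```

Distribution (passive push via newsletters versus active pushing) is left out of the sketch; in practice each stage feeds back into the previous ones, as the recursive arrows in Figure 1 indicate.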


The tacit knowledge management process has fewer parts than the explicit one. Although the knowledge creation process is similar in both cases, the main differences lie in the distribution of knowledge. Distribution of tacit knowledge has been most successfully achieved through apprenticeship, communities of practice, dialogue, meetings, informal talks, conferences, lectures and mentorship. The solid arrows in Figure 1 show the primary flow direction, while the broken arrows show the more recursive flows. The recursive arrows show that KM is not a simple sequential process: in the distribution phase, problems in the packaging stage might be discovered, leading to changes in the packaging of knowledge; similarly, the knowledge use stage often reveals that a certain type of knowledge is not available or that specific knowledge is outmoded, which usually means that new knowledge has to be developed (knowledge creation). Probably no company starts at square one, as it already has knowledge waiting to be distributed and used.

Although KM originally concentrated on ICT issues, codifying explicit knowledge in databases for re-use, managers soon faced knowledge management problems. These included encouraging employees to use existing knowledge stored in databases instead of reinventing the wheel: for a knowledge worker it tends to be a challenge to find a new solution to a problem, but from a company's perspective that is a waste of time if a good solution is already available. Knowledge sharing within the organization is also an issue; hoarding knowledge is common for various reasons, such as power relations, property rights over who owns the knowledge, and job insecurity.
The following HRM practices have been found essential for knowledge management (Evans 2003; Scarbrough 2003): KM and HRM strategy; recruitment and selection; training and development; performance management; reward and recognition; career management, and creating a learning environment.

HUMAN RESOURCE MANAGEMENT

Human Resource Management gained much popularity in the 1980s. Beardwell (2001) points out in a summary of the field that there exists considerable controversy as to the origin, characteristics and philosophy of HRM, and its capacity to influence the nature of the employment relationship. Moreover, the debate surrounding HRM can be characterized by four predominant approaches (Beardwell, 2001: 9):

1. That HRM is no more than a renaming of basic personnel functions, which does little that is different from the traditional practice of personnel management.
2. That HRM represents a fusion of personnel management and industrial relations that is managerially focused and derives from a managerial agenda.
3. That HRM represents a resource-based conception of the employment relationship, some elements of which incorporate a developmental role for the individual employee and some elements of cost minimization.
4. That HRM can be viewed as part of the strategic managerial function in the development of business policy, in which it plays both a determining and a contributory role.


KM AND HRM STRATEGIES

The creation of HRM strategies is bound to recognize these interests and fuse them as much as possible into the human resource strategy and, ultimately, the business strategy. Schuler and Jackson (2003) link competitive strategies with HRM practices in an interesting way, making use of Porter's notion of competitive advantage as the essence of competitive strategy (a prescriptive approach). Emerging from this discussion are three competitive strategies that firms can use to gain competitive advantage. The first, the innovation strategy, is used to develop products or services different from those of competitors. Enhancing product and/or service quality is the primary focus of the second, the quality enhancement strategy, while with the third, the cost reduction strategy, firms typically try to gain competitive advantage by being the lowest-cost producer (Schuler and Jackson, 2003).

Firms that pursue a quality-enhancement strategy are likely to have (1) relatively fixed and explicit job descriptions, (2) high levels of employee participation in decisions relevant to immediate work conditions and the job itself, (3) a mix of individual and group criteria for performance appraisal that is mostly short-term and results-oriented, (4) relatively egalitarian treatment of employees and some guarantee of employment security, and (5) extensive and continuous training and development of employees. These practices facilitate quality enhancement by helping to ensure highly reliable behavior from individuals who can identify with the goals of the organization and, when necessary, be flexible and adaptable to new job assignments and technological change. The main features of Schuler and Jackson's link between strategy and HRM practices are presented in Figure 2 below.

Hansen et al. (1999) argue that there are basically two strategies for managing knowledge, which they term "codification" and "personalization."
The former refers to the codification of knowledge and its storage in databases where it can be readily accessed and used by anyone in the company. Such organizations invest heavily in ICT for projects like intranets, data warehousing and data mining, knowledge mapping (identifying where knowledge is located in the firm), and electronic libraries.

Figure 2: The link between strategy and HRM practices


The re-use of knowledge saves work, reduces communication costs and allows a company to take on more projects. It is thus closely related to exploitative learning, which tends to refine existing capabilities and technologies, forces through standardization and routinization, and is risk-averse (Clegg and Clarke, 1999). The codification and personalization strategies help to frame the management practices of the organization as a whole, as outlined in Table 1.

Table 1: Knowledge Management Strategies

Creating a Learning Environment

Evans (2003) stresses the role of HR managers in helping their organization to develop an organizational culture that supports knowledge building and sharing. The steps necessary in such a transformation process include: agreeing strategic priorities and areas for change; helping to demystify knowledge management by linking knowledge management activity to established business processes and HRM practices; and engaging others in the knowledge management dialogue. Specifically, HR can add value by developing a knowledge awareness programme as a separate development activity, and by ensuring that the right leadership is in place and receives the right developmental support. Most importantly, HR has to build a culture in which learning from day-to-day practice is valued, encouraged and supported, by providing time and public and private spaces for learning, providing learning resources (information centres, special learning laboratories, a virtual university), and rewarding sharers and learners.

CONCLUSION AND IMPLICATIONS

This paper has concentrated on how HRM practices can encourage knowledge sharing and reuse. Table 2 summarizes some of the many possible relationships between HRM and KM. Most importantly, the table illustrates that management practices do not operate alone, divorced from the rest of the organization. Practices are, instead, interrelated and require a degree of compatibility and careful coordination.

Knowledge Management- A Strategic Tool For HRM


Table 2: Exploitative and explorative learning strategies towards knowledge management

The KM and HRM strategies presented previously have much in common. Thus the codification strategy and the low-cost strategy both focus on effectiveness, lowering costs and standardization. Knowledge management, and the role of human resource management in knowledge management, are still in their infancy. Basic concepts of the debate have yet to be defined and theories developed, and future research should address their further refinement.

References

Beardwell, I. (2001), An Introduction to Human Resource Management: Strategy, Style or Outcome, in I. Beardwell and L. Holden (eds.), Human Resource Management: A Contemporary Approach, Harlow: Prentice Hall.

Carter, G. and Scarbrough, H. (2001), Towards a Second Generation of KM? The People Management Challenge, Education and Training, 43(4/5), 215-224.

Chua, A. (2002), Taxonomy of Organizational Knowledge, Singapore Management Review, 24(2), 69-76.

Clegg, S. and Clarke, T. (1999), Intelligent Organizations, in S.R. Clegg, E. Ibarra-Colado and L. Bueono-Rodriquez (eds.), Global Management: Universal Theories and Local Realities, London: Sage.

Currie, G. and Kerrin, M. (2003), Human Resource Management and Knowledge Management: Enhancing Knowledge Sharing in a Pharmaceutical Company, International Journal of Human Resource Management, 14(6), 1027-1045.

Daft, R.F. (2001), Organization Theory and Design, Cincinnati: South-Western College Publishing.

Davenport, T.H., De Long, D.W. and Beers, M.C. (1998), Successful Knowledge Management Projects, MIT Sloan Management Review, 39(2), 43-57.


Despres, C. and Hiltrop, J.M. (1995), Human Resource Management in the Knowledge Age: Current Practice and Perspectives on the Future, Employee Relations, 17(1), 9-24.

Evans, C. (2003), Managing for Knowledge: HR's Strategic Role, Amsterdam: Butterworth-Heinemann.

Hansen, M.T., Nohria, N. and Tierney, T. (1999), What's Your Strategy for Managing Knowledge?, Harvard Business Review, 77, 106-116.

Herzberg, F. (1997), The Motivation-Hygiene Theory, in D.S. Pugh (ed.), Organization Theory: Selected Readings, London: Penguin Books.

Hlupic, V., Pouloudi, A. and Rzevski, G. (2002), Towards an Integrated Approach to Knowledge Management: "Hard", "Soft" and "Abstract" Issues, Knowledge and Process Management, 9(2), 90-102.

Horowitz, F.M., Heng, C.T. and Quazi, H.A. (2003), Finders Keepers? Attracting, Motivating and Retaining Knowledge Workers, Human Resource Management Journal, 13(4), 23-44.

Iles, P. (1999), Managing Staff Selection and Assessment, Buckingham: Open University Press.

Ingi Runar Edvardsson (2003), Knowledge Management and Creative HRM, Occasional Paper 14, University of Akureyri, Iceland.

Judge, T.A. and Cable, D.M. (1997), Applicant Personality, Organizational Culture, and Organizational Attraction, Personnel Psychology, 50, 359-394.

Kluge, J., Stein, W. and Licht, T. (2001), Knowledge Unplugged: The McKinsey & Company Global Survey on Knowledge Management, Houndmills: Palgrave.

KPMG Consulting (2000), Knowledge Management Research Report 2000, London: KPMG Consulting.

Kristof, A.L. (1996), Person-Organization Fit: An Integrative Review of Its Conceptualizations, Measurement, and Implications, Personnel Psychology, 49, 1-49.

Little, S., Quintas, P. and Ray, T. (2002), Part III: Knowledge, Innovation and Human Resources, in S. Little, P. Quintas and T. Ray (eds.), Managing Knowledge: An Essential Reader, London: The Open University in association with Sage Publications.

Lynne Markus, M. (2001), Toward a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success, Journal of Management Information Systems, 18(1), 57-93.

McAdam, R. and Reid, R. (2001), SME and Large Organization Perceptions of Knowledge Management: Comparisons and Contrasts, Journal of Knowledge Management, 3(3), 231-241.

Moffett, S., McAdam, R. and Parkinson, S. (2003), An Empirical Analysis of Knowledge Management Applications, Journal of Knowledge Management, 7(3), 6-26.

Mount, M.K., Barrick, M.R. and Stewart, G.L. (1998), Five-Factor Model of Personality and Performance in Jobs Involving Interpersonal Interactions, Human Performance, 11, 145-165.

Petersen, N.J. and Poulfelt, F. (2002), Knowledge Management in Action: A Study of Knowledge Management in Management Consultancies, Working Paper 1-2002, Copenhagen: Copenhagen Business School.

Scarbrough, H. (2003), Knowledge Management, HRM and the Innovation Process, International Journal of Manpower, 24(5), 501-516.

Scarbrough, H. and Swan, J. (2001), Explaining the Diffusion of Knowledge Management: The Role of Fashion, British Journal of Management, 12, 3-12.

Schuler, R.S. and Jackson, S.E. (2002), Linking Competitive Strategies with Human Resource Management Practices, in S. Little, P. Quintas and T. Ray (eds.), Managing Knowledge: An Essential Reader, London: The Open University in association with Sage Publications.

Swart, J. and Kinnie, N. (2003), Sharing Knowledge in Knowledge-Intensive Firms, Human Resource Management Journal, 13(2), 60-75.

Torrington, D. and Hall, L. (1998), Human Resource Management, London: Prentice Hall.


World Bank (1998), What is Knowledge Management? A Background to the World Development Report, Washington, DC: World Bank.

Zárraga, C. and Bonache, J. (2003), Assessing the Team Environment for Knowledge Sharing: An Empirical Analysis, International Journal of Human Resource Management, 17(7), 1227-1245.


31

Knowledge Management: A Strategic Approach Towards Organizational Effectiveness Babita Agrawal Vishal Sood Shikha Upadhyay

The arrival of the knowledge economy has brought many new challenges. Human resource professionals may be uniquely positioned to take advantage of these challenges through knowledge management and to act as pathfinders in the knowledge jungle. Knowledge sharing and distribution are, in truth, a normal part of doing business; knowledge management institutionalizes such sharing and related activities. Since it requires a different emphasis, the principles of change management can be employed to facilitate the acceptance of knowledge management within an organization. The sustained interest in knowledge management is justified by the widespread realization that knowledge has become the principal source of sustainable competitive advantage. Survival in the new economy will be possible only by managing and leveraging corporate knowledge assets. This, in turn, requires linking information, people and processes in order to spawn continuous innovation and corporate renewal. The purpose of this study is to provide an overview of the concept of knowledge management and its effect on organizational excellence, and to explain the major areas of study and thought related to the phenomenon. This chapter argues that a solid understanding of the measurement issues in knowledge management will have a significant impact on both a corporation's chances of success in the knowledge economy and the human resource profession's influence on corporate knowledge journeys.

INTRODUCTION

By sensing and understanding its environment, the knowing organization is able to engage in continuous learning and innovation. By applying learned decision rules and routines, the knowing organization is primed to take timely, purposive action. At the heart of the knowing organization is its management of the information processes that underpin sense making, knowledge building, and decision making. The term 'knowledge management' has


come to describe almost everything that goes on inside an organization, from organizational learning to change management, from document management tools to corporate intranets. The sustained interest in KM is justified by the widespread realization that knowledge has become the principal source of sustainable competitive advantage.

Organizations typically manage their knowledge either through a personalization strategy, in which knowledge flows through direct person-to-person contacts, or through a codification strategy, in which knowledge is stored in databases and can be accessed by anyone in the company, typically through IT-based search systems. According to Hansen et al. (1999), firms typically emphasize either the personalization or the codification strategy. However, regardless of which knowledge management system is implemented, individuals will typically hoard the knowledge they possess (Husted & Michailova, 2002). A lack of trust in fellow employees is one example of an obstacle to knowledge sharing (Szulanski, 1995), particularly when knowledge diffusion at the same hierarchical level is perceived to bring about a personal loss of value and to destroy future career opportunities (Empson, 2001). Another example is the "Not Invented Here" syndrome, of which Katz & Allen (1982) provided an early description, showing how tenure became a hindrance to knowledge sharing, since it led to a belief in the individual's superiority and the rejection of knowledge from outside.

Intellectual capital is intellectual material - knowledge, information, intellectual property, experience - that can be put to use to create wealth. It is the collective brainpower of an organization. Professionals specializing in KM are constantly striving to determine what constitutes "knowledge". The term is subjective, with different social and organizational contexts shaping its definition. Knowledge includes experience, judgment, intuition and values.
This broad scope of knowledge translates into an equally broad definition of KM as an articulated philosophy that cuts across many disciplines.

Karl Erik Sveiby, one of the founding fathers of knowledge management, defines it as "the art of creating value by leveraging the intangible assets." According to Sveiby, an individual's competence consists of the following factors:

• Explicit knowledge, which involves knowing facts and is acquired mainly through information, often through formal education.

• Skill, a practical proficiency acquired mainly through training and practice.

• Experience, learnt by reflecting on past mistakes and successes.

• Value judgments, what the individual believes to be right, which act as filters for the process of knowing.

• Social networks, made up of the individual's relationships with other human beings in an environment and a culture that is transferred through tradition.

Knowledge management includes the active creation, transfer, and application and re-use of (tacit) individual knowledge and of codified (explicit) collective knowledge, supported by new approaches, relationships and technologies, to increase the speed of innovation, decision-making and responsiveness to organizational objectives and priorities.


TACIT, EXPLICIT AND CULTURAL KNOWLEDGE

Understanding the differences between "explicit" and "tacit" knowledge is crucial to the application of KM. In fact, one of the most valuable assets in an organization, next to its corporate memory, is the tacit knowledge of individuals. According to I. Nonaka and H. Takeuchi, "explicit knowledge can be articulated in formal language and transmitted through, for example, manuals, written specifications etc. Tacit knowledge is seen as personal knowledge, based on individual experience and values and therefore not as easily transmitted." Extracting tacit knowledge has also been identified by some with storytelling, a unique way to gain knowledge from an individual, on any given subject of expertise or experience, based on the individual's unique perspective. Nonaka and Takeuchi make the point that once an organization incorporates this tacit knowledge into its organizational memory banks, the knowledge will not be lost when the individual moves on. Tapping into the tacit knowledge of individual employees in public sector organizations can have long-term benefits.

Professor Chun Wei Choo of the University of Toronto adds another dimension to the types of knowledge inherent in organizations. He articulates that, in addition to "tacit" and "explicit" knowledge, there is "cultural" knowledge, which is "expressed in the assumptions, beliefs and norms used by members (of the organization) to assign value and significance to new information or knowledge." He expands on this crucial point by explaining how new knowledge is created by knowledge conversion and knowledge linkage.
"In knowledge conversion the organization continuously creates new knowledge by converting between the personal, tacit knowledge of individuals who produce creative insight, and the shared, explicit knowledge which the organization needs to develop new products and innovations."

The emergence of knowledge management as a workable philosophy is also the result of the increasingly sophisticated information and communication technologies now at the disposal of the organization. This has led, in turn, to the identification of specific roles and responsibilities for various categories of KM managers within an organization. This paper explores the subject of knowledge management (KM), or as it is now commonly known, knowledge sharing, particularly as it relates to knowledge-intensive organizations such as scientific and technology-based agencies. Knowledge sharing is considered by many as an emerging new discipline that assists organizations to change and adapt in our new, knowledge-driven world. KM is now becoming an important discipline for many public and private sector organizations internationally. This paper explores the meanings and applications of KM with specific attention to how the concepts apply to organizations. The focus here is to look at KM applications in both public and private sector organizations, with the purpose of illustrating and developing particular applications relevant for organizations seeking to use knowledge sharing as a strategic application in the evolution of a national knowledge-based economy.

Organizational culture is a system of shared values about what is important and beliefs about how the world works. In this way, a firm's culture provides a framework that organizes and directs people's behavior on the job (Cameron and Quinn, 1998). When respondents were asked about managers' key concerns regarding KM, the primary issues raised were cultural, managerial and informational.
On the cultural side, managers were concerned over the implications for change management and the ability to convince business departments to share their knowledge


with other departments. In many organizations, a major cultural shift would be required to change employees' attitudes and behavior so that knowledge and insights are willingly and consistently shared. The managerial concerns related to the business value of KM and the need for metrics with which to demonstrate the system's value. There was apprehension over how to determine who would be responsible for managing the knowledge, and over bringing together the many players involved in developing a KMS, including database administrators and the professionals with the knowledge. The respondents expressed concern that senior managers might perceive KM as just another fad and that the concept suffered from immaturity. In particular, they expressed a need to understand the concept better and to be convinced that KM worked before pursuing future developments (Nonaka, 1994).

LITERATURE REVIEW

Businesses recognize that they need to harness the knowledge of all of their workers, not just the highly skilled, innovative and creative ones, to compete effectively and to respond to the changing marketplace. Since a person with talent and drive can easily move around within the marketplace, we have seen aggressive programs within business and government both to attract new, skilled workers and to keep current employees.

Knowledge, and the ability to create, access and use it effectively, has long been a tool of innovation, competition and economic success and a key driver of economic and social development. Traditional economists (Schumpeter, 1949), contemporary authors (Porter, 1989) and multilateral organizations (OECD, 1995) agree that innovation represents the engine that brings motion to the economy and growth to nations. However, the theoretical attempt to incorporate innovation as a formal, systematic method into a national or supranational economy is only a recent preoccupation (Fagerberg, 2003).

Knowledge management is more than data management; it is a social process. Important aspects of knowledge management, such as organizational knowledge creation processes, are influenced by social phenomena. One approach to facilitating knowledge creation is through geographically dispersed virtual organizations. Dispersed teams have long existed, but their prevalence has accelerated with improvements in computer-mediated communications. Dispersed or virtual ways of working are supported primarily by ICTs.

Knowledge is being developed and applied in new ways. The information revolution, supported by technical advances in information and communication technologies (ICT), has expanded academic, scientific and community networks and provided new opportunities for accessing data, information and knowledge in a timely manner (Economist, 2004).
It has also created new opportunities for generating and transferring all kinds of knowledge artifacts such as manuals, interviews, processes and business procedures. Knowledge management and the sharing of information have been shown to increase innovation output (OECD, 2004a). Defining KM is not only problematic but also varies from person to person based on the context and use (Neef, 1999; Bhatt, 2001; Raub & Rulling, 2001). Turban & Aronson (2002) describe KM as a process that helps organisations identify, select, organize, disseminate and transfer important information and expertise that are part of the organisational memory and that typically reside within the organisation in an unstructured manner. On those bases,


Kim (1993) explains how individual learning is transformed into organizational learning and how shared visions emerge and provide collective sense to organizational knowledge. Focusing on the knowledge that individuals acquire within the organization, and on the knowledge that organizations acquire through them, Kim remarks that learning has to do with, first, the acquisition of know-why, implying the ability to articulate a conceptual understanding of an experience, and, second, the acquisition of know-how, implying the actual capacity to display the right behavior. Kim states that personal knowledge is embodied in mental models of reality and that those models determine the way people understand reality and behave. The idea of mental models is analogous to that of the paradigm, in the sense of Kuhn (1970), inasmuch as mental models contain explicit and implicit assumptions that determine the way people look at reality, interpret facts, conceptualize ideas and act. By the same token, the idea that models evolve in loops, where stable and unstable representations recurrently alternate, is analogous to the emergence of paradigms and the periods of crisis that precede changes of paradigms as discussed by Kuhn.

In today's knowledge-intensive working environment, the view of knowledge creation as the source of sustainable competitive advantage has become widespread among practitioners as well as researchers (Nonaka, 1991, 1998; Pfeffer, 1994). Sharing knowledge among people is one way of creating new knowledge (Nonaka & Takeuchi, 1995). However, people sometimes hesitate to transfer their knowledge, especially when there is a risk that others will take advantage of it. In this sense, the risk refers to the ambiguity or uncertainty that other people could exploit one's knowledge.
Furthermore, in a virtual organization this risk is higher than in physical ones, as most communication occurs virtually, without face-to-face interaction. As the risk is closely related to trust (Gambetta, 1998; Jones & George, 1998), it is important to examine how this relationship affects knowledge transfer in a virtual organization. Additionally, the study of knowledge transfer in organizational settings through a social-perspective lens is relatively less well established. Zander & Kogut (1995) have proposed that firms are social communities which use their relational structure and shared coding schemes to enhance the transfer and communication of new skills and capabilities.


RESEARCH METHODOLOGY

This section presents an overview of the survey procedure and a brief description of the sample used in this study. It then describes how the research variables were operationalized and measured. The hypotheses used for this study were:

H0: There is no significant influence of knowledge management on organizational effectiveness.

H1: KM has a significant positive influence on organizational effectiveness.

Data Collection

Data collection was conducted through a questionnaire survey. It is a comparative study among three sectors: Academic, Banking and IT. The study selected these sectors as they have fundamentally changed during recent years owing to an increasingly turbulent environment. Recently, firms have begun to compete in heterogeneous markets where competitors have access to diverse capital and knowledge asset conditions and must implement various management practices (such as KM practices) to improve their competitiveness.

The Sample

The sample of the present study consisted of 120 people from Indore (N = 120), with forty respondents taken from each sector. The research was carried out through a survey method with the help of a self-developed, structured and non-disguised questionnaire. It consisted of ten statements based on a 5-point Likert-type scale on which the respondents were asked to indicate their degree of agreement or disagreement. The close-ended questionnaire helped to get a clear idea of the respondents' opinions.

Statistical Analysis

The analysis of the F-test revealed the following results:

Table indicating the F-value

Source of Variation     Sum of Squares    Degrees of Freedom    Mean Square    F-Ratio
Between samples              315                  2               157.50        16.27
Within samples              1133                117                 9.68
Total                       1448                119

For 2 degrees of freedom between samples (the greater variance) and 117 degrees of freedom within samples (the smaller variance), the critical value of F at the 5% level of significance is approximately 3.07, while the calculated value is 16.27. Since the calculated value exceeds the critical value, the difference is significant. The null hypothesis is therefore not accepted, and the difference between the means is significant. Thus we accept the alternative hypothesis.
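The arithmetic of the F-test can be checked directly from the table's sums of squares and degrees of freedom. The sketch below is illustrative only: the variable names are ours, and the 5% critical value F(0.95; 2, 117) ≈ 3.07 is taken from standard F-tables rather than computed here.

```python
# Recompute the one-way ANOVA F-ratio from the reported table.
ss_between, df_between = 315.0, 2      # between-samples sum of squares, df
ss_within, df_within = 1133.0, 117     # within-samples sum of squares, df

ms_between = ss_between / df_between   # mean square between = 157.5
ms_within = ss_within / df_within      # mean square within ≈ 9.68
f_ratio = ms_between / ms_within       # ≈ 16.26 (the chapter reports 16.27,
                                       # obtained from the rounded mean square)

F_CRITICAL_5PCT = 3.07                 # tabulated F(0.95; 2, 117), approximate

print(f"F = {f_ratio:.2f}, critical value = {F_CRITICAL_5PCT}")
if f_ratio > F_CRITICAL_5PCT:
    print("Reject H0: the sector means differ significantly")
```

Since the calculated ratio far exceeds the critical value, the decision to reject the null hypothesis follows directly.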


RESULTS AND DISCUSSIONS

The above results can help managers provide distinctive advantages to firms. Knowledge is mainly created through organizational structure: the structural KM resource within an organization encourages employee interactions, which are regarded as vital practices in the effective management of knowledge. To be successful, the leadership within an organization must embrace knowledge-sharing concepts and their precepts. More importantly, knowledge sharing must be a key component in the strategic vision of the organization. An additional, vital component is that there need to be designated officials and supporting staff to reorganize the organization and implement the principles of knowledge management to maximum benefit. It is inherently clear that virtually every employee is a potential source of data, information and insights that constitute, in one form or another, a source of knowledge that is or could be invaluable to the goals and aims of the organization. The degree to which an organization manages knowledge to its advantage, and to the forwarding of its strategic vision, is the degree to which the leaders of the organization can draw upon this source of potential wealth. Managers must also work to ensure that the knowledge assets of such highly valued employees are put to the most effective use in pursuit of corporate objectives while the worker is with the organization.

The World Bank offers an instructive example. While its KM structure has been articulated, application has been a challenge as the Bank attempts to achieve its objective that, wherever a staff member may be in the world, there should be access to the Bank's knowledge resources. Ensuring the relevance and value of knowledge also requires obtaining information, knowledge and insights from Bank clients in various countries around the world, which is then transferred back to the Bank. Field employees are being taught to further this sharing concept.
When a Bank employee gains some insight or knowledge in a country where he or she is working, he or she is encouraged to transmit this back to the appropriate person within the Bank, mainly through email. Inherent in all of these activities is an implicit recognition of the importance of knowledge sharing and knowledge management.

Snowden (2002) states that KM involves the identification, optimization and active management of intellectual assets, either in the form of explicit knowledge held in artifacts or as tacit knowledge possessed by individuals or communities. Alavi and Leidner (1999) explained that knowledge management should deliver top-line growth, improve operations and increase profit margins. Yet many knowledge management systems fail to deliver on this promise.

In contemporary business environments, organizations are faced with tremendous competitive pressures. Global issues, combined with rapid technological change and the increased power of consumers, place huge demands on firms to remain flexible and responsive (Drucker, 1998; Teece et al., 1997). Additionally, at the global level there are common influences including rapid political changes, regional trade agreements, low labour costs, and frequent and significant changes in markets (Dam, 2001). In addition, there are changes in the nature of the workforce (which is older, better educated and more independent), government deregulation and reduction of subsidies, and shifts in the ethical, legal and social responsibilities of organizations. Digh (1997) found that technology is playing an increasingly important role in business in this regard, and that the environment, increased innovation and new technologies are providing vast improvements in cost-performance and an important impetus to strategy.


Barnard (1999) explained that a variety of techniques were developed to enhance a firm's ability to react to and cope with such pressures, including business process re-engineering, total quality management, downsizing, outsourcing and empowerment. However, the power of such methods often appeared limited and transient. One fundamental question needs to be answered: how do organizations create and sustain competitive advantage?

Recently, strategic management theorists have begun to rethink the concept and underpinnings of competitiveness. From the traditional perspective of organizational capability and competition, the focus has shifted to the internal resources of the firm as a key determinant of competitive advantage (Barney, 1991; Lank, 1997). This new approach is often referred to as the resource-based view of the firm. Alavi and Leidner (2001) argued that the resource-based view of the firm recognizes the importance of organizational resources and capabilities as the principal source of achieving and sustaining competitive advantage. According to this approach, there is a distinction between resources and capabilities (Lee and Yu, 2001). Corporate resources – including equipment, skills, patents and financial capital – are basic inputs for gaining and maintaining competitive advantage. Ruggles (1998) showed that organizational capability is the capacity of a firm to acquire and utilize its resources to perform tasks and activities for competitive gain. Thus, while resources are the main source of an organization's capabilities, capabilities are the principal source of its competitive advantage. Pamela (2004) showed that, within the resource-based perspective, information and knowledge have become increasingly recognized as competitive differentiators. In the knowledge-based view, organizational knowledge – such as operational routines, skills or know-how – is acknowledged as the most valuable organizational asset.
Knowledge sharing among employees, with customers, and with business partners has a tremendous potential pay-off in improved customer service, shorter delivery-cycle times and increased collaboration. Knowledge also has value as a commodity: it can be sold to others or traded for other knowledge. In business, a deluge of organizational initiatives has appeared, typically based on the use of modern information technologies for managing organization-wide knowledge resources.

KM began to emerge most strongly between 1995 and 1997, after the proliferation of the web browser (O'Brien and Cambouropoulos, 2002). The browser simplified the development of KM applications because it allowed developers to build a standard interface. The resulting Knowledge Management Systems (KMS) are at the core of enterprise KM approaches. The leading vendors of KMS include Autonomy, Business Objects, Cognos, Hewlett Packard, Hummingbird and Invention Machine, in a market worth around $8.5 billion for KM software and services in 2002 (Carnelley et al., 2001). Some companies have also developed bespoke KMS solutions. An estimated 80 per cent of the largest global corporations now have KMS projects (Lawton, 2001); recent examples include Intel and Shell Oil.

However compelling, approaches to KM and the use of KMS have not been without problems. Purvis et al. (2001) argued that part of the difficulty stems from the nature of knowledge: defining knowledge is not a simple undertaking. Typically, it is now recognized that there are many types of knowledge present in an organization, contained in the 'organizational memory' (Whitfield-Jones, 1999). Whilst there are various typologies, in the simplest form there are two main types of knowledge – tacit and explicit (Nonaka and Takeuchi, 1995). Explicit knowledge may be expressed and


communicated relatively easily; tacit knowledge tends to be personal, subjective and difficult to transmit (or sometimes even to recognize). Thus, while some explicit knowledge may lend itself to codification and communication in KMS, tacit knowledge is very strongly embedded in the mind of the individual and is highly context-sensitive (Barnes, 2002). A key challenge of KMS, therefore, has been to make appropriate tacit knowledge explicit and portable.

Managerial Implications

A recurring feature across organizational and cultural boundaries is that employees prefer professional and personal development to salary increases and promotions. This finding is surprising and important, since much research has focused on the “hard”, extrinsically motivated incentives that directly relate to income, rather than the “soft”, intrinsically motivated incentives that concern job satisfaction and personal development. The use of Human Resource Management practices for incentives is, for this reason, highly recommended.

CONCLUSION

The study reveals that managers must also pay attention to the organizational and national cultures in which they operate when designing incentive schemes. Apparently, the use of intrinsically motivating incentives is more effective, but the context (public versus private, or Western Europe versus Eastern Europe) shapes the use of incentives. Furthermore, implemented knowledge management systems create differences in individual preferences for incentives, which managers should remember when designing the incentive structure for an organization.

Results and Discussion

Knowledge Management is a recently observed aspect of business culture. As it relates to the way information resources are applied in an organization, KM has many areas of application that affect modern business. Because of this wide range of business applications, KM merits closer inspection by the academic community. There are many aspects of KM that need to be explored to better understand how it can be applied. To begin with, KM must be explored to see how it will interact with existing organizational structures. Additionally, research should be undertaken to determine how to set up KMS so that workers can best use and contribute to a system. Furthermore, determinations that establish regulation of content must be clarified so that KMS are applied in the most appropriate manner. Finally, the benefits of KM should be examined from the perspective of an organization’s external relationships.

References

Alavi, M. and Leidner, D. (2001), Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues, MIS Review, 25(6), 95.
Barnard, J. (1999), Using Total Quality Principles in Business Courses, Business Communication Quarterly, 62(2), 61-73.
Barnes, S. (2002), Knowledge Management System, Thomson: New York, pp. 181.
Barnes, S. and Hunt, B. (2001), E-commerce and V-Business, Butterworth-Heinemann, 100-109.
Boisot, M. H. (1998), Knowledge Assets – Securing Competitive Advantage in the Information Economy, Oxford University Press, Oxford.


Cameron, K. and Quinn, R. (1998), Diagnosing and Changing Organizational Culture, Addison-Wesley, Englewood Cliffs, NJ, 96-104.
Carnelley, P., Woods, E. and Vaughan, D. (2001), Ovum Forecasts: Global Software Markets, Ovum, 159.
Davenport, T. and Prusak, L. (1998), Working Knowledge – How Organizations Manage What They Know, Harvard Business School Press, Boston, MA.
Dam, M. (2001), The Rules of the Global Game, University of Chicago Press, 7-54.
Digh, P. (1997), Shades of Gray in the Global Marketplace, HR Magazine, April, 91-98.
Drucker, P. (1998), The Coming of the New Organization, Harvard Business Review, January-February, 45-53.
Empson, L. (2001), Fear of Exploitation and Fear of Contamination: Impediments to Knowledge Transfer in Mergers between Professional Service Firms, Human Relations, 54(7), 839-862.
Fagerberg, J. (2003), The Oxford Handbook of Innovation, Oxford University Press.
Gambetta, D. (1998), Can We Trust? Making and Breaking Cooperative Relations, Basil Blackwell: New York.
Hansen, M. T., Nohria, N. and Tierney, T. (1999), What’s Your Strategy for Managing Knowledge?, Harvard Business Review, 77(2), 106-116.
Hofstede, G. (1980), Culture’s Consequences: International Differences in Work-Related Values, Sage: Beverly Hills, CA.
Husted, K. and Michailova, S. (2002), Diagnosing and Fighting Knowledge Sharing Hostility, Organizational Dynamics, 31(1), 60-73.
Jones, G. R. and George, J. M. (1998), The Experience and Evolution of Trust: Implications for Cooperation and Teamwork, Academy of Management Review, 23(3), 531-546.
Katz, R. and Allen, T. J. (1982), Investigating the Not Invented Here (NIH) Syndrome: A Look at the Performance, Tenure, and Communication Patterns of 50 R&D Project Groups, R&D Management, 12(1), 71.
Kuhn, T. S. (1970), The Structure of Scientific Revolutions, University of Chicago Press, Chicago.
Lank, E. (1997), Leveraging Invisible Assets: The Human Factor, Long-Range Planning, 30, 406-412.
Lawton, G. (2001), Knowledge Management: Ready for Prime Time?, IEEE Computer, 34, 12-14.
Lee, G. and Yu, H. (2001), Stage Model for Knowledge Management, in Proceedings of the Hawaii International Conference on System Sciences, Maui, Hawaii, 45-62.
Martin, J. (1992), Cultures in Organizations: Three Perspectives, Oxford University Press, New York.
Neef, G. (1999), Knowledge Management Year Book, Butterworth-Heinemann.
Nonaka, I. (1994), A Dynamic Theory of Organizational Knowledge Creation, Organization Science, 5(1), 14-37.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, 11-19.
O’Brien, C. and Cambouropoulos, P. (2002), Combating Information Over-Load, British Journal of General Practice, 50, 489-490.
OECD (1997), Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (The Oslo Manual), Paris: Organization for Economic Co-operation and Development.
Pamela, B. (2004), Shedding Light on KM, HR Magazine, May, 49(5), 95-104.
Purvis, L., Ramamurthy, V. and Zmud, R. (2001), The Assimilation of Knowledge Platforms in Organizations: An Empirical Investigation, Organization Science, 12, 117-135.


Ruggles, R. (1998), The State of the Notion: Knowledge Management in Practice, California Management Review, 40, 80-89.
Raub, S. and Ruling, C. (2001), The Knowledge Management Tussle, Journal of Information Technology, 113-130.
Scarbrough, H., Swan, J. and Preston, J. (1999), Knowledge Management – The Next Fad to Forget People?, in Proceedings of ECIS 99, Copenhagen.
Schumpeter, J. A. (1949), The Theory of Economic Development (1911), Cambridge, MA: Harvard University Press.
Kim, S. (1999), Role of Knowledge Professionals for Knowledge Management, paper presented at the 65th IFLA Council and General Conference, Bangkok, Thailand, August 20-28, 1999; INSPEL, 34(2000)1, pp. 1-8.
Snowden, R. J. (2002), Visual Attention to Colour: Parvocellular Guidance of Attentional Resources, Psychological Science, 13, 180-184.
Szulanski, G. (1995), Unpacking Stickiness: An Empirical Investigation of the Barriers to Transfer Best Practice Inside the Firm, Academy of Management Journal, Best Paper Proceedings, 437-441.
Wenger, E., McDermott, R. and Snyder, W. M. (2002), Cultivating Communities of Practice: A Guide to Managing Knowledge, Harvard Business School Press, Boston.
Whitfield-Jones, C. (1999), Business as Usual or the End of Life as We Know It?, Managing Partner, May, 110.

32

Trust and Leadership as Correlates of Team Effectiveness: A Study of Manufacturing Units

Garima Mathur, Shruti Suri, Silky Vigg

The present era of Liberalization, Privatization and Globalization has paved the way for progressive and forward-looking organizations to re-engineer the process of providing quality to their target customers and to society at large. This has brought substantial changes to Human Resource Management. As a result, organizations have become flatter and more team-based, and team effectiveness has become imperative for organizational success. A team that does not perform well creates an obstacle first to the achievement of the organization's short-term goals and then to its long-term objectives. The present study aims at identifying the effect of trust and leadership on team effectiveness. It also analyses the various factors underlying the dependent and independent variables of the study. In this study, 180 employees of manufacturing units were taken as respondents, and regression was used to find out the effect of trust and leadership (independent variables) on team effectiveness (dependent variable). The study found that trust and leadership play a dominant role in developing team effectiveness.

TRUST

Trust is basically having confidence or faith in someone or something. Rousseau, Sitkin, Burt and Camerer (1998) define trust as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another.” Hosmer (1995) defines trust as “... the reliance by one person, group, or firm upon a voluntarily accepted duty on the part of another person, group, or firm to recognize and protect the rights and interests of all others engaged in a joint endeavor or economic exchange.” Hosmer (1995) also mentions that trust is the result of an expectation of fair behavior by the other party in the partnership, in addition to acceptance of the rights and interests of the other party. Our working definition of trust is drawn from the review by Mayer, Davis,


and Schoorman (1995: 712): “The willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party.” Several aspects of this definition require elaboration. First, trust entails risk, that is, a willingness of the trustor to be vulnerable. Second, trust is based on the expectation that the trustee will perform certain desired behaviors; consequently, the trustor must believe that the trustee has both the desire and the ability to enact the behavior in question. Third, the behaviors that the trustor expects the trustee to perform may be important to the trustor for a number of reasons.

Determinants of Trust, as described by Mayer, Davis and Schoorman (1995):

1.  Ability includes skills, competencies and characteristics related to a specific situation.

2.  Benevolence is the degree to which a trustee is believed to want to do good to the trustor, including elements of loyalty, receptivity, caring, and specific attachment to the trustor bereft of any egocentric profit motive.

3.  Integrity, on the other hand, refers to the perception that the trustee adheres to a set of principles – such as consistency, fairness, reliability, openness and value congruence – that are acceptable to the trustor.

LEADERSHIP

Leadership can be defined as the ability to influence a group towards the achievement of goals. Basically, leadership is the process of influencing the behavior of others to work willingly and enthusiastically for achieving predetermined goals. Tannenbaum, A. and Schmidt, W. H. (1958) define leadership as “interpersonal influence exercised in a situation and directed, through the communication process, towards the attainment of a specified goal or goals.” Terry has defined leadership in the context of enthusiastic contribution: “Leadership is essentially a continuous process of influencing behavior. A leader breathes life into the group and motivates it towards goals. The lukewarm desires for achievement are transformed into a burning passion for accomplishment.” Leadership is the process of influencing and supporting others to work enthusiastically towards achieving objectives; it is a continuous process of behavior, not a one-shot activity. An analysis of these definitions brings out certain features of leadership:

1.  Leadership may be seen in terms of the relationship between a leader and his followers (individuals and/or groups) that arises out of their functioning for common goals.

2.  By exercising his leadership, the leader tries to influence the behavior of individuals or groups of individuals around him to achieve common goals.

3.  The followers work willingly and enthusiastically to achieve those goals; there is no coercive force that induces the followers to work.

4.  Leadership gives followers an experience of being helped to attain common goals. It happens when the leader recognizes the importance of individuals, gives them recognition, and conveys to them the importance of the activities they perform.

5.  Leadership is exercised in a particular situation, at a given point of time, and under a specific set of circumstances. This implies that leadership styles may differ under different situations.

TEAM EFFECTIVENESS

A team is a collection of individuals who are interdependent in their tasks, who share responsibility for outcomes, who see themselves and are seen by others as an intact social entity embedded in one or more larger social systems (for example, a business unit or the corporation), and who manage their relationships across organizational boundaries. For example, in a production work team, one member may pass on the product of her work to another member to work on, with all members sharing responsibility for the quality and quantity of the final output. We take a broad approach to effectiveness, including the multiplicity of outcomes that matter in organizational settings. These outcomes occur at several levels: the individual, group, business unit, and organizational levels. Outcomes can be related to one another in complex and sometimes conflicting ways (Argote & McGrath, 1993), and effectiveness at one level of analysis can interfere with effectiveness at another. Thus, it is important to be clear about the dimensions of effectiveness being considered and the level at which they are considered. We categorize effectiveness into three major dimensions according to the team’s impact on:

(1) performance effectiveness, assessed in terms of quantity and quality of outputs;

(2) member attitudes; and

(3) behavioral outcomes.

Examples of performance effectiveness measures include efficiency, productivity, response times, quality, customer satisfaction, and innovation. Examples of attitudinal measures include employee satisfaction, commitment, and trust in management. Examples of behavioral measures include absenteeism, turnover, and safety.

Review of Literature

Brockner et al. (1997) suggest that trust in organizational authorities increases support for those authorities, higher commitment to them, and willingness to behave in ways that help to further the goals of the organization. Trust in authorities may influence members’ voluntary acceptance of the authorities’ decisions as well as members’ voluntary behaviors on behalf of the organization, including organizational citizenship behaviors. Rotter (1971) has stated that there are differences in individuals’ general disposition to trust others; these differences are likely to be reflected in social situations such as relationships with a new superior. It is difficult to trust someone who has a distinct advantage over you. For example, recent studies have found that 68% of employees do not trust their managers and 43% of employees believe their managers cheat and lie to them (McCune, 1998). From a related perspective, many writers on leadership view trust as an essential component of leadership (for example, Bennis and Nanus, 1985; Locke et al., 1991; Zand, 1972). Some even regard it as a defining component. For instance, Solomon (1996: 80) asserts that


‘leadership is an emotional relationship of trust’, and Conger and Kanungo (1998: 46) state that ‘leading implies fostering changes in followers through the building of trust and credibility’. Followers’ trust in the leader occupies a central role in several theories of leadership, either explicitly or implicitly. For instance, the Leader-Member Exchange theory of Graen and Uhl-Bien (1995) focuses on the quality of the dyadic relationships between the leader and group members, and includes trust as a major component of this relationship. In a similar vein, theories of charismatic leadership by House (1977) and Conger and Kanungo (1998) include a follower’s trust in the leader as an essential component of the charismatic relationship. Kramer (1999: 571) noted differences among various theories, but states that ‘despite divergence in such particulars, most trust theories agree that, whatever else its essential features, trust is fundamentally a psychological state’. Kanter (1983), Bradford and Cohen (1984), Peters and Austin (1985), Kouzes and Posner (1987), and Manz and Sims (1987, 1989) observed that effective leaders used a significant amount of consultation and delegation to empower subordinates and give them a sense of ownership of activities and decisions. The effectiveness of power sharing and delegation finds further support in the research findings on self-managed groups. Butler (1991) examined the relationship between ‘conditions of trust’, which include several characteristics of managers’ behavior such as consistency and honesty, and the overall trust of subordinates in their superiors. Podsakoff et al. (1990) found that subordinates’ trust in their managers mediated the relationship between managers’ transformational leadership behaviours and subordinates’ organizational citizenship behaviours. Conger and Kanungo (1998) have shown that charismatic leader behaviours increase reverence for the leader, which in turn increases trust in the leader.
Mayer and Davis (1999) and Mayer and Gavin (1998) have carried out a number of studies that support the theoretical propositions put forward by Mayer et al. (1995) in an earlier theoretical paper, namely, that subordinates’ trust in their leader depends on the leader’s perceived levels of ability, benevolence and integrity. These theories consider trust only as a psychological state at the individual level. From a similar perspective, Yukl (1989) and Kirkpatrick and Locke (1991) argue that leaders with high emotional maturity and integrity are more likely to maintain cooperative relationships with subordinates, peers and superiors. Wheatley (1999) described the balanced connection between leadership and organization by clarifying the leader’s role not as one of ensuring people know exactly what to do and when to do it, but as one of ensuring there is strong and evolving clarity about who the organization is; when this clarity of identity is present, all members of the organization are served. Barkley, Bottoms, Feagin and Clark (2001) stated that leaders know how to support their visions and retain their focus, and that “continuously examining visions and beliefs about the future sets the stage for motivating change and improvement”. Diggins (1997) emphasized that quality of character is essential for long-term influential leadership: leaders of character are fair and honest not because they have to be, but because they are ethical, open and trustworthy. Bass (1990) suggests that the trait approach attributed the success of the leader to the possession of extraordinary abilities, including a high energy level, stress tolerance, integrity, emotional maturity and self-confidence. Bolman and Deal (1997) indicate the need for leaders “to be deeply reflective, actively thoughtful, and dramatically explicit about their core values and beliefs”. The ability to identify a clear


and consistent set of beliefs was identified as another essential quality of leaders (Kussrow & Purland, 2001). In another study (Lunenburg & Ornstein, 1999), credibility or trustworthiness was indicated as a critical factor for subordinates in judging the effectiveness of leaders. Blake and Mouton (1982) viewed task and people orientation as values rather than as distinct types of leader behaviour. Evans (1970) and House (1971) held that aspects of the situation, such as the nature of the task, the work environment and subordinates’ attributes, determine the optimal amount of each type of leader behaviour for improving subordinate satisfaction and performance. Burt and Knez (1995) state that team members in organizations are important conduits of trust because of their ability to diffuse trust-relevant information via gossip and informal communication. Shields (1997) studied consideration and initiating structure in an investigation of cohesion in team sports and found that group cohesiveness was positively affected by a leader who exhibited both types of behavior; a leader who fostered friendships, mutual trust, heightened respect, and interpersonal warmth was viewed as high in consideration. Warkentin and Beranek (1999) found that teams that were given appropriate training exhibited improved perceptions of the interaction process over time, specifically with regard to trust, commitment and frank expression between members. Similarly, Maznevski and Chudoba (2000) propose that effective global virtual team interaction comprises a series of communication incidents, each configured by aspects of the team’s structural and process elements. From a different perspective, the analysis by Rowley (2006) shows that team effectiveness is influenced more by cognitive than by demographic similarities.

Hackman and Walton (1986) and Neck, Stewart and Manz (1996) revealed that the motivation to achieve team goals is highest when the team is allowed to establish its own goals based on management’s mission for the team. Stock (2006) examines the degree to which team inter-organizationality influences team performance in a business-to-business context; on the basis of resource-dependence theory and boundary theory, the author argues that team inter-organizationality positively influences team effectiveness, particularly when uncertainty is high. Cohen and Spreitzer (1996) test a theoretically driven model of self-managing work team effectiveness, defined as both high performance and employee quality of work life. Four categories of variables are theorized to predict self-managing work team effectiveness: group task design, encouraging supervisor behaviors, group characteristics, and employee involvement context.

Objectives of the Study

1.  To develop and standardize measures for trust, leadership and team effectiveness.

2.  To identify the factors underlying trust, leadership and team effectiveness.

3.  To measure the effect of trust and leadership on team effectiveness.

4.  To suggest new areas for further research.


RESEARCH METHODOLOGY

Sample: The study was conducted in different manufacturing organizations located in central India. For this purpose, 180 middle- and top-level employees were contacted personally and requested to fill up questionnaires comprising measures of trust, leadership and team effectiveness. The questionnaires used a 5-point Likert scale, where 1 indicated ‘Strongly Disagree’ and 5 indicated ‘Strongly Agree’, and consisted of 10 items for trust, 20 items for leadership and 20 items for team effectiveness. A non-probability (judgmental) sampling technique was used to collect the data.

Tools for Data Analysis: Item-to-total correlation was applied to check the consistency of the various items used in the questionnaires. Reliability was checked with Cronbach’s alpha. Underlying factors were identified through factor analysis. Multiple regression was used to measure the combined effect of trust and leadership on team effectiveness.

RESULTS AND DISCUSSION

Item-to-Total Correlation

The internal consistency of all items in all the questionnaires was checked through item-to-total correlation: the correlation of every item with the total score is computed and compared with the standard value (0.1455). All items had higher correlation coefficients, so all items were accepted for further analysis (Tables 1, 2, 3).
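The item-to-total screening described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' SPSS procedure, and the sample responses are hypothetical:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def item_to_total(responses):
    """Correlate each item with the respondent's total score.
    responses: one list of item scores (1-5 Likert) per respondent."""
    totals = [sum(r) for r in responses]
    k = len(responses[0])
    return [pearson([r[i] for r in responses], totals) for i in range(k)]

# Hypothetical Likert responses: 5 respondents, 4 items
data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 3, 4], [1, 2, 1, 2]]
corrs = item_to_total(data)
# Items whose correlation exceeds the critical value (0.1455 in the study) are kept
retained = [i for i, c in enumerate(corrs) if c > 0.1455]
```

Items that fail the cut-off would be dropped before reliability and factor analysis; in the present study every item cleared it.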

Reliability

SPSS software was used to calculate the reliability of the measures through Cronbach’s alpha. Reliability came out high for all the variables: 0.929 for Trust, 0.929 for Leadership and 0.957 for Team Effectiveness.
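Cronbach's alpha itself is straightforward to compute from raw item scores: alpha = k/(k-1) x (1 - sum of item variances / variance of totals). A minimal sketch with hypothetical data (the study itself used SPSS):

```python
def cronbach_alpha(responses):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(responses[0])

    def var(xs):
        # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in responses]) for i in range(k)]
    total_var = var([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert responses: 5 respondents, 4 items
data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 3, 4], [1, 2, 1, 2]]
alpha = cronbach_alpha(data)  # values above about 0.7 are conventionally acceptable
```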

FACTOR ANALYSIS

Trust

The raw scores of the 10 items were subjected to factor analysis to find the factors that contribute towards trust. Only one factor was identified, named ‘Supportive and Committed’. Brockner et al. (1997) and Butler (1991) also found that supportiveness and commitment play an important role in making an employee trustworthy (Table 4).

Leadership For leadership 20 items were subjected to factor analysis and three factors were identified Persuasive and Participative’, ‘Influential’ and ‘Vision’ namely, ‘Persuasive ‘Vision’. Similarly, (Rotter, 1971; Mayer and Davis, 1999) states that persuasive and participative are the key of characteristics of a leader. On the other hand (Salancik and Pfeffer, 1978) argues that a leader should be influential. The key to be a successful leader lies in the vision that a leader have (Table 5).


Team Effectiveness

Again, the 20 items of team effectiveness were subjected to factor analysis and two factors were identified, ‘Integrity’ and ‘Clarity of Roles’. Covey (1990) identified integrity as the character trait of team effectiveness, whereas Wheatley (1999) described that a team’s effectiveness depends on how well the members are aware of, and clear about, their roles and responsibilities (Table 6).
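The factor extraction above was done in SPSS. Purely to illustrate the underlying idea, the loadings on a single unrotated principal factor can be approximated from the item correlation matrix by power iteration; this sketch uses hypothetical responses and is not the rotated solution reported in Tables 4-6:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(responses):
    """Inter-item correlation matrix; responses is one score list per respondent."""
    k = len(responses[0])
    cols = [[r[i] for r in responses] for i in range(k)]
    return [[pearson(cols[i], cols[j]) for j in range(k)] for i in range(k)]

def first_factor_loadings(R, iters=200):
    """Loadings on the first unrotated principal factor, via power iteration:
    find the dominant eigenvector v of R, then scale by sqrt(eigenvalue)."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(R[i][j] * v[j] for j in range(n)) for i in range(n))
    return [math.sqrt(eigval) * x for x in v]

# Hypothetical Likert responses: 5 respondents, 4 items
data = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 3, 4], [1, 2, 1, 2]]
R = correlation_matrix(data)
loadings = first_factor_loadings(R)  # uniformly high loadings suggest one factor
```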

REGRESSION ANALYSIS

Null hypothesis (H0): There is no significant effect of trust and leadership on team effectiveness.

Table 6: Regression Coefficient Values

Model                 Unstandardized Coefficients     Standardized Coefficients     t        Sig.
                      B          Std. Error           Beta
(Constant)            7.482      5.986                                              1.250    .213
Trust                 0.657      .099                 .372                          6.655    .000
Leadership            0.604      .067                 .506                          9.069    .000

a. Dependent Variable: Team Effectiveness

Y = a + bX + cZ
Y = 7.482 + 0.657X + 0.604Z

Here, X = Trust (independent variable), Z = Leadership (independent variable) and Y = Team Effectiveness (dependent variable).

Multiple regression was applied between trust and leadership (independent variables) and team effectiveness (dependent variable). The results indicate that the independent variables (trust and leadership) have a significant impact on the dependent variable (team effectiveness), signified by standardized beta coefficients of 0.372 and 0.506. The t-values (6.655 and 9.069) are significant even at the 0% level and far exceed the critical value (1.96) at the 5% level of significance. The null hypothesis is therefore rejected: trust and leadership have a significant effect on team effectiveness.
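The fitted equation can be checked numerically. The sketch below uses the unstandardized B values reported in the regression table; the standard-deviation ratio (0.566) passed to `standardized_beta` is a hypothetical value chosen so that the conversion reproduces the reported beta of 0.372 for trust:

```python
# Coefficients as reported in the regression table; labels follow the text's
# mapping (X = trust, Z = leadership, Y = team effectiveness)
INTERCEPT = 7.482      # constant (a)
B_TRUST = 0.657        # unstandardized B for trust (X)
B_LEADERSHIP = 0.604   # unstandardized B for leadership (Z)

def predict_team_effectiveness(trust, leadership):
    """Evaluate the fitted equation Y = a + b*X + c*Z (unstandardized form)."""
    return INTERCEPT + B_TRUST * trust + B_LEADERSHIP * leadership

def standardized_beta(b, sd_predictor, sd_outcome):
    """Rescale an unstandardized B to a standardized beta: beta = B * sd_x / sd_y."""
    return b * sd_predictor / sd_outcome

# With a hypothetical sd ratio of 0.566, trust's B of 0.657 maps to a beta near 0.372
beta_trust = standardized_beta(B_TRUST, 0.566, 1.0)
```

The distinction matters because the prediction equation must use the unstandardized coefficients, while the betas (0.372 and 0.506) are what allow the relative strength of the two predictors to be compared.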

CONCLUSION

The objective of the study was to find out the effect of trust and leadership on team effectiveness. A high degree of correlation exists between trust, leadership and team effectiveness. The study shows that trust plays an important role in maintaining better interpersonal relations between team members, and that a leader is able to manage a team easily if he is able to create trust among team members. The results of the study also


show that trust and leadership play a dominant role in developing team effectiveness in manufacturing units.

References

Barkley, S., Bottoms, G., Feagin, C. H. and Clark, S. (2001), Leadership Matters: Building Leadership Capacity, Atlanta: Southern Regional Education Board.
Bennis, W. G. (1989), Why Leaders Can’t Lead: The Unconscious Conspiracy Continues, Jossey-Bass: San Francisco.
Bennis, W. and Nanus, B. (1985), Leaders: The Strategies for Taking Charge, New York: Harper & Row.
Bolman, L. G. and Deal, T. E. (1997), Reframing Organizations: Artistry, Choice, and Leadership (2nd ed.), San Francisco: Jossey-Bass.
Bradford, D. L. and Cohen, A. R. (1984), Managing for Excellence, Wiley: New York.
Conger, J. A. and Kanungo, R. N. (1998), Charismatic Leadership in Organizations, Thousand Oaks, CA: Sage.
House, R. J. (1977), A 1976 Theory of Charismatic Leadership, in J. G. Hunt and L. L. Larson (eds), Leadership: The Cutting Edge, 189-207, Carbondale, IL: Southern Illinois University Press.
Kouzes, J. M. and Posner, B. Z. (1987), The Leadership Challenge: How to Get Extraordinary Things Done in Organizations, Jossey-Bass, San Francisco, CA.
Kussrow, P. G. and Purland, J. (2001), In Search of the Congruent Leader, unpublished manuscript, Hobe Sound, FL.
Locke, J. (1971), Social Contract, Oxford University Press, London.
Locke, E. A. and Associates (1991), The Essence of Leadership, New York: Lexington Books.
Lunenburg, F. C. and Ornstein, A. C. (1999), Educational Administration: Concepts and Practices (3rd ed.), Belmont, CA: Wadsworth/Thomson Learning.
Mayer, R. C. and Gavin, M. B. (1998), Trust for Management: It’s All in the Level, paper presented at the Academy of Management Annual Meeting, San Diego, CA.
Peters, T. and Austin, N. (1985), A Passion for Excellence: The Leadership Difference, Random House, New York, NY.
Wheatley, M. J. (1999), Leadership and the New Science: Discovering Order in a Chaotic World, San Francisco: Berrett-Koehler Publishers, Inc.
Butler, J. K. (1991), Toward Understanding and Measuring Conditions of Trust: Evolution of a Conditions of Trust Inventory, Journal of Management, 17(3), 643-663.
Diggins, P. B. (1997), Reflections on Leadership Characteristics Necessary to Develop and Sustain Learning School Communities, School Leadership & Management, 17(3), 13.
Hackman, J. R. and Walton, R. E. (1986), Leading Teams in Organizations, in P. S. Goodman (ed.), Designing Effective Work Teams, San Francisco: Jossey-Bass, 72-119.
Hosmer, L. (1995), Trust: The Connecting Link between Organizational Theory and Philosophical Ethics, Academy of Management Review, 20, 379-403.
Kramer, R. M. (1999), Trust and Distrust in Organizations: Emerging Perspectives, Enduring Questions, Annual Review of Psychology, 50, 569-598.
Mayer, R. C. and Davis, J. H. (1999), The Effect of the Performance Appraisal System on Trust for Management: A Field Quasi-Experiment, Journal of Applied Psychology, 84, 123-136.
Manz, C. C. and Sims, H. P., Jr. (1987), Leading Workers to Lead Themselves: The External Leadership of Self-Managing Work Teams, Administrative Science Quarterly, 32, 106-128.


McCune, J. C. (1998), The Elusive Thing Called Trust: Trust in the Workplace, Management Review, July-August, 87(7), 10-17.
Warkentin, M. and Beranek, P. M. (1999), Training to Improve Virtual Team Communication, Information Systems Journal, 9(4), 271-290.
Neck, C. P., Stewart, G. L. and Manz, C. C. (1996), Self-Leaders within Self-Leading Teams: Toward an Optimal Equilibrium, Advances in Interdisciplinary Studies of Work Teams, 3, 43-65.
Podsakoff, P. M., MacKenzie, S. B., Moorman, R. H. and Fetter, R. (1990), Transformational Leader Behaviors and Their Effects on Followers’ Trust in Leader, Satisfaction and Organizational Citizenship Behaviors, Leadership Quarterly, 1, 107-142.
Rotter, J. B. (1971), Generalized Expectancies for Interpersonal Trust, American Psychologist, 26, 443-452.
Rousseau, D. M., Sitkin, S. B., Burt, R. S. and Camerer, C. (1998), Not So Different After All: A Cross-Discipline View of Trust, Academy of Management Review, 23, 393-404.
Shields, D. (1997), The Relationship between Leadership Behaviors and Group Cohesion in Team Sports, The Journal of Psychology, 131(2), 196-210.
Solomon, R. C. (1996), Ethical Leadership, Emotions and Trust: Beyond “Charisma”, in Ethics and Leadership, Kellogg Leadership Studies Project, University of Maryland, Ethics and Leadership Focus Group Working Papers, 69-90.
Tannenbaum, A. and Schmidt, W. H. (1958), How to Choose a Leadership Pattern, Harvard Business Review, 36, 95-101.
Tannenbaum, R. A. and Schmidt, W. H. (1973), How to Choose a Leadership Style, Harvard Business Review, 51, 58-67.
Thamhain, H. (1990), Managing Together: A Practical Look at Teambuilding, Management Solutions, 31 (October).
Zand, D. E. (1972), Trust and Managerial Problem-Solving, Administrative Science Quarterly, 17.
Uhl-Bien, M. and Graen, G. B. (1998), Individual Self-Management: Analysis of Professionals’ Self-Managing Activities in Functional and Cross-Functional Work Teams, Academy of Management Journal, 41, 340-350.


Key Drives of Organizational Excellence

Annexure

Table 1: Item to Total Correlation for Trust Items

Item | Computed Correlation Value | Consistency | Accepted/Dropped
1. Good track record | 0.809856 | Consistent | Accepted
2. Expectation to tell the truth | 0.786875 | Consistent | Accepted
3. Carrying out of promises | 0.809856 | Consistent | Accepted
4. Loyalty towards the team | 0.648498 | Consistent | Accepted
5. Colleagues feel free to share secrets | 0.694583 | Consistent | Accepted
6. Convey correct information to colleagues | 0.856362 | Consistent | Accepted
7. Listening to the problems of others | 0.814721 | Consistent | Accepted
8. Ready to help others | 0.840768 | Consistent | Accepted
9. Colleagues feel free to ask for help | 0.836374 | Consistent | Accepted
10. Colleagues consider me as a reliable person | 0.774812 | Consistent | Accepted
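Item-to-total correlations such as those in Table 1 are typically computed by correlating each questionnaire item with the sum of all items. A minimal numpy sketch follows; the ratings and the 0.5 cut-off are purely illustrative assumptions, not the study's actual data or criterion:

```python
import numpy as np

def item_total_correlations(scores):
    """Pearson correlation of each item with the total score over all items.

    scores: respondents x items matrix of Likert-type ratings.
    Returns one correlation per item, as reported in the annexure tables.
    """
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total)[0, 1]
                     for j in range(scores.shape[1])])

# Hypothetical ratings: 5 respondents x 3 items (not the study's responses)
ratings = [[4, 5, 3],
           [2, 3, 2],
           [5, 5, 4],
           [3, 4, 3],
           [1, 2, 2]]

r = item_total_correlations(ratings)
# An item is "Accepted" when its correlation exceeds a chosen cut-off
accepted = r > 0.5
```

Some scale-development work uses the *corrected* item-total correlation (excluding the item from the total); the tables here do not say which variant was used, so the uncorrected form above is only one plausible reading.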

Table 2: Item to Total Correlation for Leadership Items

Item | Computed Correlation Value | Consistency | Accepted/Dropped
1. Allowing complete freedom | 0.613629 | Consistent | Accepted
2. Act as a spokesman | 0.613629 | Consistent | Accepted
3. Allow members to use their own judgment | 0.691963 | Consistent | Accepted
4. Put stress on members | 0.592568 | Consistent | Accepted
5. Don't tolerate postponements | 0.666324 | Consistent | Accepted
6. Keep work at a rapid pace | 0.63779 | Consistent | Accepted
7. Settle conflicts | 0.686955 | Consistent | Accepted
8. I decide what is to be done in the team | 0.581045 | Consistent | Accepted
9. I push members for increased production | 0.753412 | Consistent | Accepted
10. Things usually turn out as I predicted | 0.551958 | Consistent | Accepted
11. Allow a high degree of initiative | 0.705397 | Consistent | Accepted
12. Trust the members | 0.651103 | Consistent | Accepted
13. Counsel employees to do better work | 0.697739 | Consistent | Accepted
14. Persuade others that my ideas are to their advantage | 0.649641 | Consistent | Accepted
15. Urge the group to beat previous records | 0.617947 | Consistent | Accepted
16. Ask the group to follow the rules | 0.654614 | Consistent | Accepted
17. Easy for me to carry out several tasks at the same time | 0.540835 | Consistent | Accepted
18. Put what I read into action | 0.680108 | Consistent | Accepted
19. Enjoy coaching people on new tasks | 0.692586 | Consistent | Accepted
20. Closely monitor the task to be completed in time | 0.647538 | Consistent | Accepted


Table 3: Item to Total Correlation for Team Effectiveness Items

Item | Computed Correlation Value | Consistency | Accepted/Dropped
1. All members are treated as equal | 0.828102 | Consistent | Accepted
2. Objective is clear to all | 0.854166 | Consistent | Accepted
3. Creativity and innovation are encouraged | 0.719269 | Consistent | Accepted
4. Receptive to new ideas and suggestions | 0.78921 | Consistent | Accepted
5. Open exchange of views | 0.749046 | Consistent | Accepted
6. All are busy in achieving results | 0.756962 | Consistent | Accepted
7. Roles and responsibilities are clearly defined and understood | 0.777828 | Consistent | Accepted
8. Team spirit is high | 0.807518 | Consistent | Accepted
9. Meetings are well planned | 0.730494 | Consistent | Accepted
10. Regular review of performance against its objectives | 0.71956 | Consistent | Accepted
11. Participation is actively encouraged | 0.790952 | Consistent | Accepted
12. Consensus is reached on issues | 0.828913 | Consistent | Accepted
13. Individuals trust each other | 0.795325 | Consistent | Accepted
14. Team objectives are SMART | 0.732649 | Consistent | Accepted
15. Empathy and understanding are prevalent | 0.537887 | Consistent | Accepted
16. Effective communication | 0.746357 | Consistent | Accepted
17. Always seeks to improve its processes | 0.643029 | Consistent | Accepted
18. All are involved in reviewing performance | 0.694217 | Consistent | Accepted
19. Change is welcomed | 0.664167 | Consistent | Accepted
20. Awareness of roles in the team | 0.687776 | Consistent | Accepted

Table 4: Factor Analysis of Trust

Factor: Supportive and Committed (Eigenvalue = 6.182; % of variance = 61.819)

Variable convergence/statement | Loading
Give correct information | 0.860
Help my colleagues | 0.844
Colleagues feel free to ask for the help | 0.834
I listen to the problems | 0.810
I carry out all the promises | 0.805
I can be expected to tell the truth | 0.796
Good track record | 0.782
Reliable person | 0.776
Colleagues feel free to share problems | 0.696
I move to another team if possible | 0.631


Table 5: Factor Analysis of Leadership

Factor: Persuasive and Participative (Eigenvalue = 8.608; % of variance = 43.039)

Variable convergence/statement | Loading
Persuade others | .729
Counseling | .709
Push others for increased production | .707
Monitors schedule | .693
Ask members to follow the rules | .690
Permit them to use their own judgment | .661
Settle conflicts | .581
Allow complete freedom in their work | .571

Factor: Influential (Eigenvalue = 1.522; % of variance = 7.608)

Variable convergence/statement | Loading
Colleagues feel free to share their secrets | .751
Carry out several complicated tasks | .676
Urge to beat previous record | .621
Act as a spokesman of the group | .620
Allow a high degree of initiative | .579
Enjoy coaching | .549

Factor: Vision (Eigenvalue = 1.162; % of variance = 5.810)

Variable convergence/statement | Loading
Decide what is to be done and how | .763
Trust the members | .634
Enjoy putting what I have read into action | .616
Predicts rightly | .540
Keeps work moving at a rapid pace | .480
Take stress to be ahead | .439


Table 6: Factor Analysis of Team Effectiveness

Factor: Integrity (Eigenvalue = 11.14; % of variance = 55.700)

Variable convergence/statement | Loading
Effective communication | .834
Combined efforts towards achieving results | .781
Regular review of performance | .775
Receptive to new ideas and suggestions | .773
Team objectives are SMART | .748
Team spirit is high | .731
Members are treated equal | .709
All members review performance | .690
Consensus is the base for decision making | .686
Understanding of objectives | .652
Active participation | .605
Well planned meetings | .572
Awareness of roles | .528

Factor: Clarity of Roles (Eigenvalue = 1.524; % of variance = 7.621)

Variable convergence/statement | Loading
Empathy and understanding | .814
Effective communication | .799
Exchange of views | .761
Clear definition of roles and responsibility | .691
Trust each other | .578
Creativity and innovation | .575
Change is welcomed | .530
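Tables 4 through 6 report, for each extracted factor, its eigenvalue, the percentage of total variance it explains, and the loadings of converging items. As a rough sketch, the eigenvalues and variance percentages can be obtained from the inter-item correlation matrix as below; the data are randomly generated for illustration only, and full factor loadings would additionally require an extraction and rotation method (the tables do not state which was used):

```python
import numpy as np

def factor_eigenvalues(scores):
    """Eigenvalues of the inter-item correlation matrix, with the
    percentage of total variance each explains. The eigenvalues sum
    to the number of items (the trace of the correlation matrix)."""
    R = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    pct_variance = 100.0 * eigvals / eigvals.sum()
    return eigvals, pct_variance

# Illustrative data: 50 respondents x 6 items (not the study's responses)
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(50, 6)).astype(float)

eigvals, pct = factor_eigenvalues(scores)
# Factors are conventionally retained while the eigenvalue exceeds 1
# (the Kaiser criterion), which matches the pattern in Tables 4-6
retained = eigvals > 1.0
```

On the study's own numbers this bookkeeping is visible in Table 4, for instance: a single factor with eigenvalue 6.182 over 10 items explains 6.182/10 = 61.82% of variance, as reported.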


33

Emergence of Transformational Leadership: Vision, the Bridge between Strategic Leadership and Transformational Leadership Style

Mobin Ul Haque, Waqar Ahmed

Transformational leadership in organizations has been studied from many dimensions and at various levels. Researchers have tried to understand the effect of contextual factors on the acceptance of transformational leadership; however, there has been little research on the emergence of transformational leadership. Most of the research assumes that developing a vision is the role of the transformational leader, whereas the strategic management literature ascribes this role to the strategic leader, a construct distinct from the transformational leader. In this chapter it is argued that vision and strategies are developed in the strategic leadership mode, while they are implemented and achieved in the transformational mode. Vision thus acts as an antecedent of transformational leadership and as a bridge between the strategic and transformational leadership styles. The chapter hypothesizes that, during periods of strategic change, a transformational leader first has to be a strategic leader in order to be successful.

INTRODUCTION

Given the long history of research on leadership, one would expect some consensus about the definition, role and impact of leadership in society at large and in organizations in particular. However, no such consensus exists, and prominent researchers in the field have repeatedly pointed this out (Bennis and Nanus, 1985; Bass, 1985; Bryman, 1986; Pettigrew, 1987). The main reason ascribed to this is the relative narrowness of the research. In seeking to establish the explanatory power of leadership, a worthy quest, most researchers opted for narrow views of leadership traits, personalities and leader-follower relationships, focusing by and large on the lower levels of the organization. Although the study of the phenomenon at this level was important and contributed to our understanding, it moved the focus of the leadership construct away from the executive or top level, the level which in our view has a critical effect upon the performance of the organization (Hambrick & Mason, 1984; Bennis & Nanus, 1985). The literature has focused upon various outcomes of transformational leadership, but as Rubin, Munz, & Bommer (2005) put it, "one unintended consequence of the literature's rather primary focus on transformational leadership's outcomes has been a relative lack of emphasis concerning the underlying basis of this leadership behavior" (p. 846). A major reason for this narrowness has been the separation of leadership from organizational aspects (Pettigrew, 1987). With few exceptions, leadership researchers neglected studies being conducted in parallel in the area of strategic management, where the major issue has been what affects organizational performance and the impact of strategies on firm performance. The extant literature of that area puts leadership in the driving seat and acknowledges the critical role of leaders in organizational survival (Hambrick & Mason, 1984). Bass's (1985) theory of transformational leadership has been hailed as one of the biggest breakthroughs in leadership research and has invited extensive study. Most of this research has been in the same spirit as previous studies of leadership, that is, interpersonal relationships and the performance of followers (Pawar & Eastman, 1997). Transformational leaders have been hailed for their ability to link followers with the vision and hence effect organizational change. However, there has been an overall paucity of research on the impact of organizational factors on the emergence and effectiveness of transformational leadership (Pawar & Eastman, 1997). Pawar and Eastman's (1997) article, in this regard, could be considered ground-breaking and truly seminal work.
Their article focused on understanding the contextual factors that affect the acceptance of transformational leadership and its overall effectiveness in an organization. They accepted, however, that the emergence and the receptivity of transformational leadership are two different concepts (Pawar & Eastman, 1997, p. 97). Their article deliberates on how vision is implemented in the organization and what the barriers to organizational change are; it does not, however, address the issue of the emergence of transformational leadership. In Pawar and Eastman's (1997) article, as in most other articles on transformational leadership, the development of vision is ascribed to the transformational leader. The strategic leadership literature, on the other hand, ascribes the role of vision creation to the strategic leader, a higher-level construct than the transformational leader (Pawar & Eastman, 1997). So far researchers have not linked the two concepts. Are these two separate constructs, or are they merely two leadership styles depicted by the same person at different times and in different settings (Perrow, 1970)? What causes leaders to choose between the transformational and strategic leadership styles? Is vision an antecedent of transformational leadership? In other words, what contextual factors enable the emergence of transformational leadership in an organizational setting? To address this gap in existing research, we try to highlight the nature and the role of contextual factors in a leader's choice of behavior. Few researchers (Bass, 1985; Hambrick, 1989; Pettigrew, 1987) have really tried to fill the gap by linking leadership with the whole picture of the organization.
Bass (1985) provided the initial framework with his concepts of vision and transformational leadership; Hambrick (1989), with his upper echelon theory, placed top management in the right perspective; while Pettigrew (1987) gave the framework of context, content and process for the change process. On the other hand, the development of the Resource Based View (RBV) in the strategic literature (Grant, 1991; Teece, Pisano, & Shuen, 1997; Hooley & Greenley, 2005) puts the process of creating a high-performance organization squarely on the firm's resources and their deployment, a task that can only be performed by the firm's top executives. The strategic leadership literature binds the whole process of visioning, creating and implementing change together (Hambrick, 1989). For the purpose of this article, transformational leadership is conceptualized by linking it with strategic leadership, with special emphasis on the attributes of strategic leadership along the process of change. The process of responding to change through the development of strategy and vision is discussed in the light of the strategic leadership literature, and the role of the transformational leader in developing and implementing strategies at the organizational level is discussed last.

TRANSFORMATIONAL LEADERSHIP: A REVIEW

Transformational leadership is one of the most widely accepted and researched paradigms in the extant literature on leadership (Rubin et al., 2005). In a broader context the concept deals with both task-based (transactional) and relational (charismatic, visionary) approaches (Bass, 1985). Transactional leadership explains the task-based relationship between leader and follower (Bass, 1999). The focus of this relationship is self-interest (Bass, 1999), and it rests mainly upon three concepts: contingent rewards (how to get the reward); management by exception (take action only if there is a problem); and laissez-faire leadership, that is, no leadership even in times of need. The concept covers a wide range of activities in which a leader is engaged with followers, such as monitoring performance and training for the task; however, all these activities focus on achieving the ends and are not meant to take the relationship beyond immediate goals (Bass, 1999). The quality of transformational leadership that separates it from the rest lies in its ability to explain the process through which a leader makes followers go beyond their self-interest and become willing to sacrifice for the broader good. Although researchers have defined transformational leadership slightly differently (Bennis & Nanus, 1985; Pawar & Eastman, 1997), the major focus of the definitions has remained the same. The theory of transformational leadership revolves around the four I's. The transformational leader improves the followers' sense of self-worth and provides them with an ideal role model who has "charisma" or idealized influence. This idealized influence becomes the root of inspirational leadership, which connects the followers with the vision, values and goals of the leader and allows the entire team to achieve standards of performance above previous levels.
The leader provides the followers with the required intellectual stimulation by setting a big challenge, thereby creating an environment of innovation and creativity throughout the organization. Individualized consideration allows the leader to address the individual needs of the followers; in the process the leader becomes the coach and mentor of each person and consequently develops a high-quality personal relationship. The impact of transformational leadership on various aspects of organizational performance is well documented in the literature (Pawar & Eastman, 1997; Agle et al., 2006). Transformational leadership takes followers to a higher level of motivation (Bass, 1985), and followers of transformational leaders are more satisfied. Researchers agree that the transformational leadership of top management is important in implementing a quality management culture (Berson & Linton, 2005). Transformational leaders are also credited with articulating the vision, setting organizational values and developing a high level of collective cohesion (Waldman, Ramirez, House, & Puranam, 2001).


For the purpose of clarity, and as a base for this article, two concepts need to be distinguished at the outset: the formation of vision and the implementation of vision. The literature on strategy differentiates between strategy formulation and strategy implementation (Davies & Davies, 2004). Vision and mission are the primary steps of strategy formation (Baker, 2000) and are the function of top management only, while implementation is a role for middle and lower management (Hambrick, 1989). As mentioned earlier, most of the literature on transformational leadership is based on studies of lower- and middle-level managers, for whom vision exists as something to "adhere to" rather than to "create". In this article it is argued that vision formation is a role of strategic leadership and vision implementation is a role of transformational leadership. Moreover, Pawar and Eastman (1997) stated that "strategic leadership concept do not explicitly address the aspects of raising followers to higher levels of needs or bonding individual and collective interests" (p. 84), thus highlighting that the spheres of transformational and strategic leadership are different. As transformational leadership is also a top-management-level construct (Bass, 1985), both roles could be assumed by one person or by different persons; however, only one hat can be worn at a time.

STRATEGY AND ORGANIZATIONAL PERFORMANCE

The concept of strategy is seen as central to organizational performance (Grant, 1991; Narver & Slater, 1990), and many studies have sought to understand the link between strategy and performance (e.g. PIMS; Narver and Slater, 1990). Strategy can be defined in many ways (Mintzberg, Ahlstrand, & Lampel, 2005), for example as "seeing ahead", "seeing behind", "seeing above", "seeing below", "seeing beside", "seeing beyond", and "seeing it through". Performance, in turn, depends upon the strategy with which the firm chooses to compete. Performance itself has been defined in many ways, namely financial performance, financial plus operational performance, and organizational effectiveness (Venkatraman & Ramanujam, 1986). In the extant literature, strategy has been associated with direction setting and provides the template for organizational planning and activities (Davies, 2004). It has also been argued that the pinnacle of strategy is the creation of competitive advantage (CA), the edge over rivals that enables the firm to charge a sustained premium over a period of time (Day, 1994; Porter, 1996). The success of an organization is largely defined in terms of its performance in the market place. The literature has clearly identified the link between CA and performance, and it has been established that securing CA is a prerequisite for superior performance (Day, 1994; Teece et al., 1997; Day & Wensley, 1988; Porter, 1985, 1996). Day (1984) floated the idea of CA when he suggested strategies to help "sustain the competitive advantage". Porter (1985) then provided his concept of the generic strategies of low-cost leadership, differentiation and focus for achieving CA; his theory posits that opportunities to achieve CA lie in understanding the external environment. Kerin, Mahajan and Varadarajan (1991) and Barney (1991) gave two definitions which are widely used in the literature. According to Kerin et al. (1991), "competitive advantage is the unique position an organization develops vis-à-vis its competitors through its pattern of resource deployment and/or its product market scope decisions". Barney (1991) defined it thus: "A firm is said to have a sustained competitive advantage when it is implementing a value creating strategy not simultaneously being pursued by any current or potential competitor and when these firms are unable to duplicate the benefits of this strategy" (p. 101). Strategy, in short, is about aligning the organization with the changes occurring in its environment (Grant, 1991).


Organizations go through convergent periods and strategic reorientations (Tushman & Romanelli, 1985), where convergent periods are long periods of incremental change, while strategic reorientations are discontinuous, are marked by major shifts in strategy, structure and so on, and are driven by major external pressures. The role of the top management team (TMT) is most critical during such periods, as only executive leadership is able to initiate and implement strategic reorientations (Tushman & Romanelli, 1985, p. 214). Creating a learning organization that has the ability to constantly monitor and adapt to changes is a critical role of the TMT (Vera & Crossan, 2004). Since direction setting is a role of the strategic leader (Mintzberg, Ahlstrand, & Lampel, 2005), an "ideal leader would be able to identify, and exercise, the leadership behavior appropriate for the circumstances" (Vera & Crossan, 2004, p. 226).

STRATEGIC LEADERSHIP: LEADERSHIP AT THE TOP MANAGEMENT LEVEL

The extant literature on strategy and leadership makes it clear that strategy is a function of higher management only (Mintzberg, 1973; Shrivastava & Nachman, 1989) and that there is a distinction between everyday leadership and strategic leadership. The strategic management literature has also differentiated between transformational leadership and strategic leadership. Many researchers have defined transformational leadership in terms of strategic leadership, whereby the former is a specific type of the latter (Pawar & Eastman, 1997). Although the literature does suggest that visionary leadership is also transformational in nature (Davies & Davies, 2004), this definitional overlap between the concepts of strategic leadership and transformational leadership may be the cause of the mixed results of studies of organizational performance and leadership. The strategic leadership literature comprises basically two schools of thought: agency cost theory and strategic leadership theory. Agency theory focuses on the separation of ownership and control and views management as agents of the shareholders (Cannella, Jr. & Monroe, 1997). Strategic leadership theory is a decision-making theory and emphasizes the role of the situation in managers' decisions; its most popular variant is upper echelon theory (Hambrick & Mason, 1984). Strategic leadership theory posits that "companies are reflections of their top managers" (Cannella, Jr. & Monroe, 1997, p. 1), and according to this theory personality and cognitive style affect organizational outcomes (Cannella, Jr. & Monroe, 1997). Strategic leaders look both at the organization and at the environment in which it is operating, and then try to align the organization with the demands of the environment. It is this interrelationship between environment, organization and leadership that has been missing from earlier leadership research, although the relationship was envisioned by Bass (1981), who said that, among other roles, the leader initiates structure and acts as an instrument of goal achievement. Strategic leadership exists only at the top level of the organization (Hambrick, 1989), and it has been established that it is clearly different from all other forms of leadership. Ireland and Hitt (2005) have defined strategic leadership as "a person's ability to anticipate, envision, maintain flexibility, think strategically, and work with others to initiate changes that will create a viable future for the organization" (p. 63). Pawar and Eastman (1997) specifically treat strategic leadership as the bigger construct and place transformational leadership under it as a special form of strategic leadership. However, they take vision out of the domain of strategic leadership and put it under the transformational leader, whereas, as mentioned earlier, the strategic management literature ascribes this role to strategic leadership. Westley and Mintzberg (1989) have cautioned against subsuming strategic vision under "leadership in general" (p. 17). Most researchers on transformational leadership have clearly shown that the transformational leader is excellent at creating motivation and improving the overall environment of the organization (Shamir, House, & Arthur, 1993; Berson & Linton, 2005), highlighting the fact that transformational leadership is effective in implementing the strategies of the organization and in implementing change (Nadler & Tushman, 1990). The process of strategic leadership involves understanding not only the process through which the leader creates a relationship with followers but also the cognitive process through which the leader assesses the internal and external situations of the company in the effort to make a strategy. Strategic leadership concepts encompass the entire organization instead of the interpersonal or social dimensions associated with the everyday leadership concept (Hambrick, 1989). Without effective strategic leadership a firm cannot achieve superior performance in the global economy (Ireland & Hitt, 2005). Research on organizational performance and CEO charisma has yielded mixed results, suggesting the presence of other variables in the relationship; we propose that strategic management can explain a substantial part of this variation. Research efforts to understand the impact of CEO charisma on organizational performance have so far yielded mixed results (Agle, Nagarajan, Sonnenfeld, & Srinivasan, 2006); however, the interaction of an uncertain environment and charisma was found to be significant (Waldman, Ramirez, House, & Puranam, 2001). Studies in the area of strategy, on the other hand, show a significant relationship between strategy and performance (Kirca et al., 2001). Researchers have highlighted various links between strategy and leadership.
Westley and Mintzberg (1989) have discussed how visionary leaders achieve their goals, and Davies and Davies (2004) have identified two basic sets of abilities that differentiate strategic leaders from lower-level leaders: organizational abilities and individual abilities. Organizational abilities include aligning people with the organization, developing strategic competencies and developing strategic orientation; individual abilities include wisdom and adaptive capacity. Perrow (1970) notes in his book that "leadership style is a dependent variable which depends upon something else. The setting or task is the independent variable" (p. 6). Researchers in the field of strategy have highlighted the critical role that leaders have in developing a strategy (Hambrick, 1989). In the strategy literature, both in the RBV and in the market orientation school of thought, the leader is a resource that plays a vital role in strategy making. Porter (1996), in his article "What is Strategy?", wrote: "the challenge of developing or reestablishing a clear strategy is often primarily an organizational one and depends on leadership. With so many forces at work against making choices and tradeoffs in organizations, a clear intellectual framework to guide strategy is a necessary counterweight. Moreover, strong leaders willing to make choices are essential." One area that requires clarification is that the concept of strategic leadership is sometimes associated with one person only and sometimes with a group of people such as the board of directors (Davies & Davies, 2004). Using the concept of distributed leadership (Davies & Davies, 2004), it can be stated that, regardless of who is responsible, he or they need to exhibit certain characteristics for developing a vision, which will then be implemented at the organizational level through transformational leadership.


STRATEGIC LEADERSHIP AND MANAGING CHANGE

The process of change has been well documented by Pettigrew (1987). He proposed that change occurs along three dimensions, viz. the context, content and process of change. Context can be internal or external: the external context refers to the macro environment in which the organization operates (social, political, competitive, and so on), while the internal context refers to the organizational environment (culture, structure, and so on). Content refers to the particular area where change is being considered, while process is how the change is implemented across the organization; Pettigrew called this "strategic change". Changes in the external environment continuously act upon the CA of the company. Central to the concept of CA is protection from competitive actions targeted at eroding the company's advantage (Porter, 1996; Teece et al., 1997). CA is created either by aligning the organization with the opportunities that exist in the market (Porter, 1996; Kohli & Jaworski, 1990) or by leveraging the hard-to-copy resources of the organization, the resource based view (Teece et al., 1997). Both of these approaches require top management to answer several questions: How are the forces affecting the profitability of the organization? What will be required to be successful in the future? What resources does the company have, and what resources will be required in the future? What tradeoffs (Porter, 1996) are required? Where are we now? Which businesses should we pursue and which not? What are the corporate objectives? And what will happen to us when we achieve these goals (vision)? The dynamic capability concept (Teece et al., 1997), in particular, addresses the issue of continuously improving the firm's resources to align the firm with the changing external environment. As there are different types of change (Nadler & Tushman, 1990), different types of executive leadership are also required to address each type of change (Nadler & Tushman, 1990). Hence it can be seen that the development of vision requires a different mind-set and a different set of characteristics than transformational leadership, which is much more people oriented.

Hypothesis 1: During times of organizational reorientation, there will be role switching between strategic leadership and transformational leadership behavior at the top management level.

DEVELOPING A VISION As mentioned earlier first step in developing long term strategy is to develop a vision and the visionary style is depicted through strategic process (Westley & Mintzberg, 1989, p. 22). One of the differentiating point between transformational leadership and other types (excluding strategic leadership) had been the concept of vision where by transformational leaders have been hailed as the one having a vision. Referring to Pettigrew (1987) model of change, Pawar and Eastman (1997) used only the internal contextual factors to study their effects upon transformational leadership. However in order to have the full scale leadership- one at the strategic level, we have to include both the aspects, because central to the concept of strategy is the concept of aligning organization to the changes that are occurring in the environment (Grant, 1991). The alignment is done through process of “visioning” or developing a picture of future. Process of vision could be broken down into process of visioning and process of implementation. According to Westley & Mintzberg (1989) process of visioning comprises of following three stages: 1) creating a mental image of the organization in the future, 2) articulation and


communication to followers, 3) implementation of the vision through followers. This articulated vision is implemented through strategies across the organization. If we neglect the outer context, a vision cannot be developed (Westley & Mintzberg, 1989); this means that without strategic leadership there will be no vision. To develop a vision, a leader has to have characteristics that are different from other managerial characteristics (Rowe, 2001; Westley & Mintzberg, 1989). Mintzberg (Mintzberg, et al., 2005) highlighted that the leader must first develop a mental picture; using the vision, the leader provides a bridge between the present and the future of the organization, and operates on the emotional and spiritual resources of the organization. Bennis & Nanus (1985) described vision as a necessary attribute of a leader, who articulates an attractive future for the organization to motivate members towards its achievement. This vision may be vague or precise, but it gives direction to the organization. According to Westley & Mintzberg (1989), a vision has an origin and an evolution: origin refers to the mental processes involved, while evolution refers to development through deliberation. Moreover, along with the external context, the contents of the vision are also important; a vision must address various issues of the organization such as customers, products and markets (Westley & Mintzberg, 1989).
Hypothesis 2a: Strategic leaders possess a superior cognitive and analytical skill set compared to transformational leaders.
Hypothesis 2b: Strategic leaders think in terms of pictures and links and are less driven by emotion, compared to transformational leaders, who are high on emotion and words.

CONCLUSION
The above discussion clearly highlights the relationship between strategic leadership and transformational leadership. Although transformational leadership gains its acceptance among followers through the articulation and implementation of a vision, the emergence of this vision takes place at the strategic level. The transformational leader takes the vision from strategic leadership and implements it through the four I's, creating organizational change in the process. Hence it can be argued that it is the creation of vision that drives the emergence of transformational leadership in the organization. Future research should identify the characteristics of strategic leaders and the environmental factors that facilitate role switching in the top management team. Moreover, the impact of the link between transformational and strategic leadership upon organizational performance needs to be studied. There has also been a dearth of research on the existence and impact of both strategic and transformational leadership in Pakistan. Being a largely untapped market with a very different cultural context, there exists huge potential for various types of studies on Pakistani business issues.

References
Agle, B. R., Nagarajan, N. J., Sonnenfeld, J. A., & Srinivasan, D. (2006), Does CEO Charisma Matter? An Empirical Analysis of the Relationship among Organizational Performance, Environmental Uncertainty and Top Management Team Perception of CEO Charisma, Academy of Management Journal, 49(1), 161-74.
Baker, M. J. (2000), Marketing Strategy and Management (3rd ed.), London: Macmillan Press Ltd.
Barney, J. (1991), Firm Resources and Sustained Competitive Advantage, Journal of Management, 17(1), 99-120.


Bass, B. M. (1985), Leadership and Performance Beyond Expectations, New York: Free Press.
Bass, B. M. (1999), Two Decades of Research and Development in Transformational Leadership, European Journal of Work and Organizational Psychology, 8(1), 9-32.
Bennis, W., & Nanus, B. (1985), Leaders: The Strategies for Taking Charge, New York: Harper and Row.
Berson, Y., & Linton, J. D. (2005), An Examination of the Relationship between Leadership Style, Quality, and Employee Satisfaction in R&D versus Administrative Environments, R&D Management, 35(1), 51-60.
Bryman, A. (1986), Leadership and Organizations, London: Routledge & Kegan Paul.
Cannella, Jr., A. A., & Monroe, M. J. (1997), Contrasting Perspectives on Strategic Leaders: Towards a More Realistic View of Top Management Work, Journal of Management, 23, 213-37.
Davies, B. (2004, February), Developing the Strategically Focused School, School Leadership & Management, 24(1), 11-27.
Davies, B. J., & Davies, B. (2004, February), Strategic Leadership, School Leadership & Management, 24(1), 29-38.
Day, G. S. (1994), The Capabilities of Market Driven Organizations, Journal of Marketing, 58, 37-52.
Day, G. S., & Wensley, R. (1988, April), Assessing Advantage: A Framework for Diagnosing Competitive Superiority, Journal of Marketing, 52, 1-20.
Grant, R. M. (1991), The Resource Based Theory of Competitive Advantage: Implications for Strategy Formulation, California Management Review.
Hambrick, D. C. (1989, Summer), Guest Editor's Introduction: Putting Top Managers Back in the Strategy Picture, Strategic Management Journal, 10(Special Issue), 5-15.
Hambrick, D., & Mason, P. (1984), Upper Echelons: The Organization as a Reflection of Its Top Managers, Academy of Management Review, 9, 193-206.
Hooley, G., & Greenley, G. (2005, June), The Resource Underpinnings of Competitive Positions, Journal of Strategic Marketing, 13, 93-116.
Ireland, D. R., & Hitt, M. A. (2005), Achieving and Maintaining Strategic Competitiveness in the 21st Century: The Role of Strategic Leadership, Academy of Management Executive, 19(4), 63-77.
Kirca, A. H., Jayachandran, S., & Bearden, W. O. (2005, April), Market Orientation: A Meta-Analytic Review and Assessment of Its Antecedents and Impact on Performance, Journal of Marketing, 69, 24-41.
Kerin, R. A., Mahajan, V., & Varadarajan, P. R. (1990), Contemporary Perspectives on Strategic Market Planning, Needham Heights, MA: Allyn and Bacon.
Kohli, A. K., & Jaworski, B. J. (1990, April), Market Orientation: The Construct, Research Propositions, and Managerial Implications, Journal of Marketing, 54, 1-18.
Mintzberg, H., Ahlstrand, B., & Lampel, J. (2005), Strategy Safari, New York: Free Press.
Nadler, D. A., & Tushman, M. L. (1990, Winter), Beyond the Charismatic Leader: Leadership and Organizational Change, California Management Review.
Narver, J. C., & Slater, S. F. (1990), The Effect of Market Orientation on Business Profitability, Journal of Marketing, 54, 21-35.
Pawar, B. S., & Eastman, K. (1997), The Nature and Implications of Contextual Influences on Transformational Leadership: A Conceptual Examination, Academy of Management Review, 22(1), 80-109.
Perrow, C. (1970), Organizational Analysis, Belmont, CA: Wadsworth.
Pettigrew, A. M. (1987, Nov), Context and Action in the Transformation of the Firm, Journal of Management Studies, 24(6), 649-70.


Porter, M. E. (1996, Nov-Dec), What is Strategy?, Harvard Business Review, pp. 61-78.
Porter, M. E. (1985), Competitive Advantage, New York: Free Press.
Rowe, G. W. (2001), Creating Wealth in Organizations: The Role of Strategic Leadership, Academy of Management Executive, 15(1), 81-94.
Rubin, R., Munz, D., & Bommer, W. H. (2005), Leading from Within: The Effects of Emotion Recognition and Personality on Transformational Leadership Behavior, Academy of Management Journal, 48(5), 445-58.
Shamir, B., House, R. J., & Arthur, M. B. (1993), The Motivational Effects of Charismatic Leadership: A Self-Concept Based Theory, Organization Science, 4(4), 577-94.
Shrivastava, P., & Nachman, S. A. (1989), Strategic Leadership Patterns, Strategic Management Journal.
Stogdill, R. M. (1974), Handbook of Leadership: A Survey of Theory and Research, New York: Free Press.
Teece, D. J., Pisano, G., & Shuen, A. (1997), Dynamic Capabilities and Strategic Management, Strategic Management Journal, 18(7), 509-33.
Tushman, M. L., & Romanelli, E. (1985), Organizational Evolution: A Metamorphosis Model of Convergence and Reorientation, in Cummings, L. L., & Staw, B. M. (Eds.), Research in Organizational Behavior, Greenwich, CT: JAI Press.
Venkatraman, N., & Ramanujam, V. (1986), Measurement of Business Performance in Strategy Research: A Comparison of Approaches, Academy of Management Review, 11(4), 801-14.
Vera, D., & Crossan, M. (2004), Strategic Leadership and Organizational Learning, Academy of Management Review, 29(2), 222-40.
Waldman, D., Ramirez, G., House, R. J., & Puranam, P. (2001), Does Leadership Matter? CEO Leadership Attributes and Profitability under Conditions of Perceived Environmental Uncertainty, Academy of Management Journal, 44(1), 134-43.
Westley, F., & Mintzberg, H. (1989), Visionary Leadership and Strategic Management, Strategic Management Journal, 10, 17-32.


34

Knowledge Management and Business Intelligence: Importance of Integrating to Build Organisational Excellence Jaydip Chaudhari

Knowledge Management (KM) and Business Intelligence (BI) are closely related but distinct concepts, even though the terms are often used interchangeably. This paper therefore attempts to distinguish between the two concepts, and then focuses on BI's role in improving the knowledge base. This expanded role suggests that the effectiveness of BI will, in the future, be measured by how well it promotes and enhances knowledge, how well it improves the mental model(s) and understanding of the decision maker(s), and thereby how well it improves their decision making and hence firm competitiveness. BI focuses on explicit knowledge, whereas KM encompasses both tacit and explicit knowledge. The paper concludes that the wise and effective use of BI and KM together helps firms build competitive advantage from their information.

INTRODUCTION
There is persistent confusion between two concepts: knowledge management (KM) and business intelligence (BI). According to one survey, 60 percent of consultants did not understand the difference between the two. According to the Gartner consultancy, BI is the set of all technologies that gather and analyze data to improve decision making. In BI, intelligence is often defined as the discovery and explanation of hidden, inherent and decision-relevant contexts in large amounts of business and economic data. KM, by contrast, is described as a systematic process of finding, selecting, organizing, distilling and presenting information in a way that improves an employee's comprehension in a specific area of interest. KM helps an organization gain insight and understanding from its own experience. Specific KM activities help focus the organization on acquiring, storing and utilizing knowledge for such things as problem solving, dynamic learning, strategic planning and decision making (Hameed, 2004).


Conceptually, it is easy to comprehend how knowledge can be thought of as an integral component of BI and hence decision making. This paper envisages that KM and BI, while differing, need to be considered together as necessarily integrated and mutually critical components in the management of intellectual capital.

BACKGROUND
KM has been defined in reference to collaboration, content management, organizational behavioral science, and technologies. KM technologies incorporate those employed to create, store, retrieve, distribute and analyze structured and unstructured information. Most often, however, KM technologies are thought of in terms of their ability to help process and organize textual information and data so as to enhance search capabilities, garner meaning and assess relevance, and thereby help answer questions, realize new opportunities, and solve current problems. In most large firms, there is a vast aggregation of documents and data, including business documents, forms, databases, spreadsheets, e-mail, news and press articles, technical journals and reports, contracts, and web documents. Knowledge and content management applications and technologies are used to search, organize and extract value from these information sources and are the focus of significant research and development activities. BI has focused on a similar purpose, but from a different vantage point. BI concerns itself with decision making using data warehousing and online analytical processing (OLAP) techniques. Data warehousing collects relevant data into a repository, where it is organized and validated so it can serve decision-making objectives. The various stores of business data are extracted, transformed and loaded from the transactional systems into the data warehouse. An important part of this process is data cleansing, where variations in data schemas and data values from disparate transactional systems are resolved. In the data warehouse, a multidimensional model can then be created which supports flexible drill-down and roll-up analyses (a roll-up analysis creates progressively higher-level subtotals, moving from right to left through the list of grouping columns, and finally a grand total). 
Tools from various vendors provide end users with a query and front end to the data warehouse. Large data warehouses can hold tens of terabytes of data, whereas smaller, problem-specific ones often hold 10 to 100 gigabytes (Cody et al., 2002).
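The roll-up behavior described above can be sketched in a few lines of Python; the mini fact table, column names and figures below are invented purely for illustration and stand in for data already loaded into a warehouse's multidimensional model.

```python
from collections import defaultdict

# Hypothetical fact table: (region, product, sales) rows.
facts = [
    ("North", "Widget", 100),
    ("North", "Gadget", 50),
    ("South", "Widget", 70),
    ("South", "Gadget", 30),
]

def rollup(rows, dims=2):
    """Emulate a roll-up analysis: subtotals from the most detailed
    grouping up to a grand total, dropping columns right to left."""
    totals = defaultdict(int)
    for row in rows:
        *keys, value = row
        # One aggregation level per prefix of the grouping columns,
        # padding dropped columns with the marker "ALL".
        for level in range(dims, -1, -1):
            group = tuple(keys[:level]) + ("ALL",) * (dims - level)
            totals[group] += value
    return dict(totals)

result = rollup(facts)
# result[("North", "Widget")] -> 100  (detail row)
# result[("North", "ALL")]    -> 150  (regional subtotal)
# result[("ALL", "ALL")]      -> 250  (grand total)
```

This mirrors what a SQL `GROUP BY ROLLUP(region, product)` would return, which is how OLAP front ends typically request such subtotals from the warehouse.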

BUSINESS INTELLIGENCE (BI) OR KNOWLEDGE MANAGEMENT (KM) – WHICH IS FIRST?
McKnight (2002) has organized KM under BI, suggesting that this is a good way to think about the relationship between the two. He argues that KM is internal-facing BI, sharing intelligence among employees about how to perform effectively the variety of functions required to make the organization go; hence, knowledge is managed using many BI techniques. Marco (2002) contends that a "true" enterprise-wide KM solution cannot exist without a BI-based metadata repository; in fact, a metadata repository is the backbone of a KM solution. That is, the BI metadata repository implements a technical solution that gathers, retains, analyzes, and disseminates corporate "knowledge" to generate a competitive advantage in the market. This intellectual capital (data, information and knowledge) is both technical and business-related.


Cook and Cook (2000) assert that the attraction of BI is that it offers organizations quick and powerful tools to store, retrieve, model, and analyze large amounts of information about their operations and, in some cases, information from external sources. Vendors of these applications have helped companies and organizations increase the value of the information that resides in their databases. Using the analysis functions of BI, firms can look at many aspects of their business operations and identify factors that are affecting performance. The Achilles heel of BI software is, according to Cook and Cook, its inability to integrate non-quantitative data into its data warehouses or relational databases, its modeling and analysis applications, and its reporting functions. To examine and analyze an entire business and all of its processes, one cannot rely solely on numeric data, and estimates from various sources have suggested that up to 80 percent of business information is not quantitative or structured in a way that can be captured in a relational database. This is because the documents that contain this information, knowledge, and intelligence are unstructured or semi-structured, and hence not well suited to the highly structured data requirements of database software applications. Text mining, seen primarily as a KM technology, adds a valuable component to existing BI technology. Text mining, also known as intelligent text analysis, text data mining or knowledge discovery in text (KDT), refers generally to the process of extracting interesting and non-trivial information and knowledge from unstructured text. Text mining is a young interdisciplinary field that draws on information retrieval, data mining, machine learning, statistics and computational linguistics. As most information (over 80 percent) is stored as text, text mining is believed to have high commercial potential value. 
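A minimal sketch of the kind of extraction text mining performs is simple term-frequency analysis over free text; the toy documents, stopword list and theme labels below are invented for illustration and stand in for a real corpus of unstructured business documents.

```python
import re
from collections import Counter

# Invented corpus standing in for unstructured business documents.
documents = [
    "Supplier delays are hurting the northern supply chain.",
    "Customer complaints mention supplier delays and late delivery.",
    "Delivery times improved after the supplier switched carriers.",
]

STOPWORDS = {"the", "and", "are", "a", "after", "times"}

def top_terms(docs, n=3):
    """A minimal text-mining step: tokenize, drop stopwords, and count
    term frequencies to surface recurring themes in free text."""
    counts = Counter()
    for doc in docs:
        for token in re.findall(r"[a-z]+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(n)

# "supplier" and "delays"/"delivery" emerge as candidate themes.
print(top_terms(documents))
```

Production text-mining systems layer stemming, entity recognition and statistical weighting (e.g. TF-IDF) on top of this basic counting step, but the principle of turning unstructured text into structured, countable features is the same.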
Text mining would seem to be a logical extension to the capabilities of current BI products. However, its seamless integration into BI software is not so obvious. Even with the perfection and widespread use of text mining capabilities, there are a number of issues that, Cook and Cook contend, must be addressed before KM (text mining) and BI (data mining) capabilities truly merge into an effective combination. In particular, they claim it depends on whether software vendors are interested in creating technology that supports the theories that define KM and in providing tools that deliver complete strategic intelligence to decision makers in companies. Even if they do, Cook and Cook believe it is unlikely that technology will ever fully replace the human analysis that leads to stronger decision making in the upper echelons of the corporation. Although the fields of BI and KM have evolved over the last two decades, they have done so until recently in seemingly parallel universes. BI, relying on traditional business tools and searching well-organized and structured data, has emerged over 20 years as a well-established niche in which information is readily accessible, most players understand each other's languages and processes, and a return on investment (ROI) is easy to define and calculate. However, Kadayam (2002) asserts that several technological developments, including those spurred by Intelliseek, Inc., are building bridges between KM and BI, with obvious benefits. Two factors fueling the emergence of what Intelliseek calls "new business intelligence" (NBI) are the growth of internet information and evolving technologies that aggregate, analyze and report data from a variety of previously incompatible sources. Accepted business tools that traditionally are used to find and leverage BI data are now, he says, crossing over into the KM field, able to find more and better information, make it actionable quickly and offer


the promise of greater ROI for strategic planning, sales, decision making and competitive or strategic advantage. BI usually has access to about 20 percent of available information from databases, online analytical processing, supply chain management, data warehouses, and the like, but it commands roughly 80 percent of the relevant budget for business purposes. By contrast, NBI can benefit far more knowledge workers and reach a far larger pool of data, perhaps 50 to 60 percent of available information in product documents, research reports, employee records and the like, yet it attracts perhaps 20 percent of the traditional budget for IT-related purposes. Kadayam states that the convergence of KM and BI deepens and broadens the amount of searchable knowledge and information, simultaneously increasing the value, actionability and ROI of the intelligence gained. He asserts that the greatest value of unstructured data comes when it is converted to intelligence that can then be mined, sliced and diced by traditional business tools (Business Objects, MicroStrategy, Cognos, Informatica, Oracle, Microsoft, etc.). When KM and BI converge to create NBI, Kadayam maintains that the resulting intelligence involves broader insights, not just raw data. It provides trends, not just raw statistics. It includes historical context, not just a shallow examination of what is apparent and easily accessible. Instead of nuggets or pockets of information from corporate databases, it provides a true 360° view of attitudes and behaviors, combines structured and unstructured data, meshes solicited and unsolicited feedback, and keeps a real-time pulse on the business.

TACIT KNOWLEDGE AND BI
When Karl-Erik Sveiby (1997) created the first framework defining intellectual capital, he defined three elements:
1. Employee competence (the capabilities of people in an organization – its human capital);
2. Internal structure (structural or organizational capital, including patents, documented processes, computer-based data, and the vision, strategy, and policies created by leadership);
3. External structure (customer or relationship capital – the value of a firm's relationships with the people with whom it does business).

It is clear that BI can help firms analyze transactions within each element, but this only partially explains its relationship to KM. To really understand and learn from a firm's value network, one must also examine tacit behaviors, that is, the nature of the behavioral exchanges occurring and the content of information and its value relative to firm performance. Here the role and contribution of BI become constrained. Nonaka and Takeuchi (1995) developed the knowledge spiral model to represent how tacit and explicit knowledge interact to create knowledge in an organization. Their framework for a learning organization (see Figure 1) identifies four knowledge conversion processes or patterns:
1. Socialization (tacit to tacit);
2. Externalization (tacit to explicit);
3. Combination (explicit to explicit); and
4. Internalization (explicit to tacit).

The implication of this model is that KM comprises activities in all four processes, whereas BI directly affects combination and, to a lesser extent, indirectly affects socialization, externalization and internalization. However, the same may be true of KM if its definition is limited to a technology-restricted, explicit knowledge-based definition (e.g. text management systems).

Figure 1: Framework for a learning organization

The KM literature and practice have not been restricted to issues of explicit knowledge. Hasanali identified five primary categories of critical success factors for KM, all of which suggest the importance of tacit knowledge:
1. Leadership;
2. Culture;
3. Structure, roles, and responsibilities;
4. IT infrastructure; and
5. Measurement.

"Knowledge Management promotes an integrated approach to identifying, capturing, retrieving, sharing, and evaluating enterprise information assets" (Gartner, 1998). These information assets may include databases, documents, policies and procedures, as well as the un-captured tacit expertise and experience stored in individuals' heads. Based on this


definition, both BI and explicit KM technologies address only a subset of the prescribed KM approach. KM encompasses both explicit and tacit knowledge, as well as the interaction between them.

HOW BI INTEGRATES WITH KM
As explained above, new knowledge is created through the synergistic relationship and interplay between tacit and explicit knowledge, specifically through a four-step process of socialization, articulation, integration, and understanding/internalization (see Figure 1; Nonaka and Takeuchi, 1995). Socialization is the process of sharing with others the experiences, technical skills, mental models, and other forms of tacit knowledge. For example, apprentices learn a craft not through language but by working with their masters: observing, imitating and practicing under the master's tutelage. On-the-job training (OJT) provides this mode of sharing tacit knowledge in the business world. OJT can be complemented with explicit film clips of the expert performing the task, virtual reality representations, and kinematic analysis (from the field of robotics). Articulation is the process of converting tacit knowledge to explicit knowledge. In the decision-making process, articulation may include, but is not limited to, one or more of the following:
• Specifying the purpose of the decision, e.g. to understand how the number and locations of warehouses influence supply costs in a new marketing area;
• Articulating parameters, objective functions, relationships, etc., in a BI mathematical model (i.e. building a model);
• Articulating "what-if" model cases that reflect existing and potential decision-making situations; and
• Evaluating the decision alternatives, given the uncertainty in the decision-making environment.

Articulation may also include knowledge extraction in expert systems, determination of causal maps, brainstorming, etc. Integration is the process of combining several types of explicit knowledge into new patterns and new relations. One potentially productive integration of explicit knowledge is the analysis of multiple, related "what-if" cases of a mathematical model to find new relationships, or meta-models, that determine the key factors of the model and show how these key factors interact to influence the decision. Understanding is the process of testing and validating the new relationships in the proper context, thereby converting them into new tacit knowledge. Perkins's theory of understanding, from the learning literature, suggests that understanding involves the knowledge of three things:
1. The purpose of the analysis (i.e. what the decision maker wants to understand);
2. A set of relations or models of the process/system to be understood; and
3. Arguments about why the relations/models serve the purpose.
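The articulation and integration steps can be sketched as a toy "what-if" model for the warehouse example above; every figure and the functional form below are invented purely for illustration, not drawn from any real cost study.

```python
# Articulation: invented parameters for a hypothetical supply-cost model.
# More warehouses raise fixed costs but cut per-unit delivery cost.
FIXED_COST_PER_WAREHOUSE = 150_000   # invented annual figure
BASE_DELIVERY_COST = 12.0            # per unit with one warehouse (invented)
ANNUAL_DEMAND = 100_000              # units (invented)

def total_cost(warehouses: int) -> float:
    """One articulated 'what-if' case: total annual cost for a
    given warehouse count."""
    delivery_per_unit = BASE_DELIVERY_COST / warehouses
    return warehouses * FIXED_COST_PER_WAREHOUSE + delivery_per_unit * ANNUAL_DEMAND

# Integration: compare several explicit cases to surface the pattern,
# i.e. the trade-off between fixed and delivery costs.
cases = {n: total_cost(n) for n in range(1, 6)}
best = min(cases, key=cases.get)   # warehouse count with lowest total cost
```

Under these invented numbers the cost curve is U-shaped, and the "new relationship" the decision maker internalizes is the trade-off itself, not any single case.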


Internalization is the process of using the new patterns and relations, together with the arguments for why they fit the purpose, to update and/or extend the decision maker's own tacit knowledge base, thus creating a spiral of learning and knowledge that begins and ends with the individual. While KM encompasses explicit and tacit knowledge, Malhotra (2004) explains how explicit-oriented BI could be construed as KM. He suggests that it depends on how a firm defines its world: that is, whether the firm adopts a model of KM for routine and structured information processing (see Figure 2) or whether it subscribes to a model of KM that focuses on non-routine and unstructured sense making (see Figure 3). Malhotra (2004) notes that because business environments include a combination of stabilizing and destabilizing factors, real-world KM implementations should contain characteristics of both models. The process of knowledge reuse and knowledge creation needs, he asserts, to be balanced by the integration of routine and structured information processing (e.g. BI and explicit KM) and non-routine and unstructured sense making (e.g. tacit knowledge exchanges such as mentoring, storytelling, etc.) in the same business model. It can be argued that there exists an interaction effect between KM activities and BI efforts. For example, as Malhotra notes, artificial intelligence and expert systems are intended to help deliver the "right information to the right people at the right time". However, this can only happen if the right information, the right person to use or apply it, and the right circumstance and appropriate time are known in advance. Detection of non-routine and unstructured change depends on the sense-making capabilities of knowledge workers for correcting and validating the computational logic of the business and the data it processes. 
Further complicating this issue is the realization that the same assemblage of data may evoke different responses from different people at different times or in different contexts. Attempts at codifying sense-making capabilities are made suspect by the fact that articulation of tacit and explicit knowledge can be elusive: people may know more than they think they know, or less. Therefore, storing explicit static representations of individuals' tacit knowledge in databases and algorithms may not be a valid surrogate for their dynamic sense-making capabilities.

Figure 2: Model 1: knowledge management for routine structured information processing


Figure 3: Model 2: knowledge management for non-routine and unstructured sense making

THE IMPORTANCE OF CULTURE IN KM AND BI EFFORTS
Both KM and BI are deeply influenced by the culture of the organization, especially its leadership, groups and opinion leaders, as well as organizational values (Scheraga, 1998; Pan and Scarbrough, 1999; Reisenberger, 1999). Since culture is a KM critical success factor and is largely expressed through tacit behavior, we can examine the effects that culture can have on both KM- and BI-related efforts. For example, Thong's (1999) study of technology adoption in small businesses showed that CEOs' views on innovativeness and on the value of technology affected the nature of a firm's technology adoption decisions. Scheraga (1998) found that unless a company encourages its workforce to contribute to its knowledge exchange and decision-making processes, putting KM or BI solutions in place could prove useless. He notes that workers are often reluctant to share information or to articulate their decision-making schemas, because businesses often reward people for what they know. Reisenberger (1999) also found employee resistance to sharing knowledge in cultures where most people have gotten ahead by keeping knowledge to themselves. He suggests that this can cause managers to adopt and maintain flawed heuristics and decision models that fail to encompass new realities. To change this, he sees the need for top management to develop new cultural and reward systems; to recognize and reward new learning behaviors in front of the entire organization; and to endorse, participate in, and lead knowledge sharing and challenges to the status quo. He stresses that top leaders must lead the effort, becoming change agents who model knowledge sharing and foster a culture of continuous learning and improvement to enable successful KM and BI. 
In her research on Procter & Gamble, McGee (1999) found that its cultural change required not only a shift in internal values but also changes in attitudes about external relationships. She notes that Procter & Gamble was pursuing aggressive use of KM and BI technology in its supply chain. To be successful, McGee says, the organization must change its cultural beliefs about sharing information and decision-making techniques with outsiders; that is, the company must change its relationships with its suppliers and customers, from passive market acceptance to proactive sharing of knowledge and data.


Another dimension of culture and its relationship to information and knowledge sharing is group dynamics. Okhuysen and Eisenhardt (2002) contend that while knowledge is "owned" at the individual level, the integration of this knowledge at a collective level is also necessary. Knowledge is often the most important strategic resource within organizations, and yet knowledge usually resides with individuals (Nonaka, 1994). This implies that knowledge disclosure and integration are critical means by which firms enhance the potential utility and benefits of KM and BI efforts. They note that simple formal interventions by management can improve knowledge integration within groups with specialized knowledge, by helping group members to self-organize attempts at improving their information exchange processes and to pace those attempts with task execution.

CONCLUSION
KM technologies, described earlier as in some ways less mature than BI technologies, are now capable of combining today's content management systems and the web with vastly improved searching and text mining capabilities to derive more value from the explosion of textual information. Ideally, this explicit information will be blended and integrated with the data and techniques used in BI to provide a richer view of decision-making problem sets and alternative solution scenarios. However, even if this is accomplished, the mitigating, intervening variables of tacit knowledge, leadership, culture, structure, roles and responsibilities, IT infrastructure, and performance measurement must be recognized, and their effect on the decision-making process assessed. Programmable decisions can always be affected by both objective and subjective factors. Failure to recognize this fact may have contributed to the devaluation of "operations research" efforts and could spell the same fate for BI if the field is not careful. While BI has become a buzzword, its objectives overlap with those of operations research (OR), which Horner (2003) states has languished in the shadows of the corporate world: unappreciated by some, unknown and thus unused by most. He also notes that the lack of demand for OR in the business world has trickled down to the business schools, where one OR course after another has disappeared from the curricula. To avoid a similar fate, BI must be careful not to oversell its capabilities and relevance. While it certainly provides useful tools and techniques for decision making, it should not claim to be a field that encompasses KM; this is a tactical and factual error. Instead, BI must be seen as an integral part of a larger KM effort.

References
Barker, R.T. and Camarata, M.R. (1998), The Role of Communication in Creating and Maintaining a Learning Organization: Preconditions, Indicators, and Disciplines, Journal of Business Communication, 35(4), 443-67.
Cody, W., Kreulen, J., Krishna, V. and Spangler, W. (2002), The Integration of Business Intelligence and Knowledge Management, IBM Systems Journal, 41(4), 697-713.
Cook, C. and Cook, M. (2000), The Convergence of Knowledge Management and Business Intelligence, Auerbach Publications, New York, NY, available at: www.brint.com/members/online/20080108/intelligence/
Hameed, I. (2004), Knowledge Management and Business Intelligence: What is the Difference? Available at: http://onlinebusiness.about.com/


Horner, P. (2003), The Science of Better, OR/MS Today, available at: www.lionhrtpub.com/orms/orms-1203/frmarketing.html
Kadayam, S. (2002), New Business Intelligence: The Promise of Knowledge Management, the ROI of Business Intelligence, available at: www.kmworld.com/publications/whitepapers/KM2/kadayam.pdf
McGee, M. (1999), Lessons From a Cultural Revolution: Procter & Gamble is Looking to IT to Change its Entrenched Culture, and Vice Versa, Information Week, Vol. 758, pp. 46-53.
Malhotra, Y. (2004), Why Knowledge Management Systems Fail: Enablers and Constraints of Knowledge Management in Human Enterprise, in Koenig, E. and Srikantaiah, T.K. (Eds.), Knowledge Management.
Marco, D. (2002), The Key to Knowledge Management, available at: www.adtmag.com/article.asp?id=6525
Nemati, H., Steiger, D., Iyer, L. and Herschel, R. (2002), Knowledge Warehouse: An Architectural Integration of Knowledge Management, Decision Support, Artificial Intelligence and Data Warehousing, Decision Support Systems, Vol. 33, pp. 143-61.
Nonaka, I. (1994), A Dynamic Theory of Organizational Knowledge Creation, Organization Science, 5(1), 14-37.
Nonaka, I. and Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, New York, NY.
Pan, S.L. and Scarbrough, H. (1999), Knowledge Management in Practice: An Exploratory Case Study, Technology Analysis & Strategic Management, 11(3), 359-74.
Reisenberger, J. (1999), Executive Insights: Knowledge, the Source of Sustainable Competitive Advantage, Journal of International Marketing, 6(3), 94-107.
Scheraga, D. (1998), Knowledge Management: Competitive Advantages Become a Key Issue, Chemical Market Reporter, 254(17), 3-6.
Thong, J. (1999), An Integrated Model of Information Systems Adoption in Small Businesses, Journal of Management Information Systems, 15(4), 187-214.


Key Drives of Organizational Excellence

35

Relationship of Organizational Citizenship Behavior of Employee and Opportunity Provided to Employee by Employer: Mediating Effects of Trust of Employer on Employee

Khurram Aziz Fani
Tariq Mahmood

Organizational Citizenship Behavior (OCB) is one of the best known and most widely researched extra-role behavior constructs. This research paper theorizes that the trust of the employer in the employee mediates the relationship between the opportunity provided by the employer to the employee and the organizational citizenship behavior exhibited by the employee. Employees want to demonstrate extra-role behavior, but in doing so they rely on their employers, who may not grant them the opportunity to do so because of a lack of trust in them. Various reasons for limited trust are discussed.

INTRODUCTION
Shakespeare said, "All the world's a stage, and all the men and women merely players." Using the same metaphor in an organizational context, all organizational members are players, each performing his role. Role is perhaps one of the most widely researched central behavior constructs in the organizational sciences. A role (Katz and Kahn, 1978) is the set of expected activities associated with the occupancy of a given position or job. An individual's role (Robinson, Kraatz and Rousseau, 1994) forms the basis of the psychological contract between the person and the organization. Organizations match (Rynes and Gerhart, 1990) an individual's characteristics with job roles. Employees have the ability to adjust their roles to changed situations. Roles delineate the behavior of a person and form the basis of his performance evaluation. Barnard (1938) regarded an individual's willingness to contribute cooperative efforts to the organization as indispensable for the effective attainment of organizational goals. According to Social Exchange theory, when an employee feels satisfied with his/her job, he/she, in return,


exhibits positive behavior to benefit his/her organization (Organ & Ryan, 1995). Current trends, including increased global competition, greater use of teams, continuing downsizing initiatives, and more emphasis on customer service, will make Organizational Citizenship Behavior (OCB) more important in the foreseeable future (Borman, 2004). This research paper theorizes the relationship between the organizational citizenship behavior of the employees of an organization, the opportunity provided to an employee by an employer, and the trust of the employer in the employee.

THEORETICAL FRAMEWORK
Organizational Citizenship Behavior
During the last two decades, management researchers have emphasized a particular domain of roles, namely Extra-Role Behavior (ERB). It is behavior (Van Dyne, Cummings & Parks, 1995) which benefits the organization and/or is intended to benefit the organization, which is discretionary, and which goes beyond existing role expectations. Effective organizations have employees who are willing to cooperate, go beyond their formal job responsibilities (Barnard, 1938; Katz & Kahn, 1978; Organ, 1990), and freely give of their time and energy to achieve organizational goals and objectives. Organizational Citizenship Behavior (OCB) is perhaps the best known and most widely researched extra-role behavior construct (Van Dyne, Cummings, & Parks, 1995). Organ and his colleagues (Bateman & Organ, 1983; Smith, Organ, & Near, 1983) first used the phrase "Organizational Citizenship Behavior" to represent organizationally beneficial behavior of workers that was not contractually required but occurred freely to help others achieve organizational goals and objectives. Organ (1988) originally described OCB as "individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the organization". Organ (1997) later refined this definition to address criticism of the "discretionary" and "non-contractual rewards" elements, recasting OCB as "performance that supports the social and psychological environment in which task performance takes place." Behavioral researchers are taking more and more interest in OCB. However, the OCB literature lacks consensus on the dimensionality of the construct (Podsakoff, MacKenzie, Paine, and Bachrach, 2000). Podsakoff et al., in their examination of the OCB literature, identified almost 30 different forms of OCB, with a great deal of conceptual overlap between the constructs.
Podsakoff et al. captured this by organizing the constructs into seven common themes or dimensions:
1. Helping behavior refers to voluntarily helping others, or preventing the occurrence of work-related problems. Everyone who has worked in this area regards helping behavior as an important form of citizenship behavior (Borman & Motowidlo, 1993, 1997; George & Brief, 1992; George & Jones, 1997; Graham, 1989; Organ, 1988, 1990a, 1990b; Smith, Organ, & Near, 1983; Van Scotter & Motowidlo, 1996; Williams & Anderson, 1991). This definition covers various facets highlighted in the OCB literature. For example, it covers Organ's altruism, peacemaking and cheerleading dimensions (Organ, 1988, 1990b); Graham's interpersonal helping (Graham, 1989); Williams and Anderson's OCB-I (Williams & Anderson, 1991); Van Scotter and Motowidlo's


interpersonal facilitation (Van Scotter & Motowidlo, 1996); the helping others constructs of George and Brief (1992) and George and Jones (1997); and Organ's notion of courtesy. Empirical research (MacKenzie et al., 1993; MacKenzie, Podsakoff, & Rich, 1999; Podsakoff & MacKenzie, 1994; Podsakoff, Ahearne, & MacKenzie, 1997) acknowledges that all of these various forms of helping behavior load on a single factor.

2. Sportsmanship is defined as "a willingness to tolerate the inevitable inconveniences and impositions of work without complaining" (Organ, 1990b). Over the years, it has received less attention from researchers. Empirical research (MacKenzie et al., 1993; MacKenzie et al., 1999) confirmed that sportsmanship is distinct from other forms of OCB constructs. Moreover, it has different antecedents (Podsakoff et al., 1996b; Podsakoff et al., 1990) and consequences (Podsakoff et al., 1997; Podsakoff & MacKenzie, 1994; Walz & Niehoff, 1996) compared to other OCB constructs.

3. Organizational loyalty consists of loyal boosterism and organizational loyalty (Graham, 1989, 1991), spreading goodwill and protecting the organization (George & Brief, 1992; George & Jones, 1997), and the endorsing, supporting, and defending organizational objectives construct (Borman & Motowidlo, 1993, 1997). Initial research (Moorman and Blakely, 1995) showed that this construct is distinct from other constructs; however, later confirmatory factor analysis (Moorman, Blakely and Niehoff, 1998) failed to confirm this.

4. Organizational compliance refers to a person's internalization and acceptance of the organization's rules, regulations, and procedures, which results in scrupulous adherence to them, even when no one observes or monitors compliance. In the research literature, organizational compliance appears under different names: Smith et al. (1983) called it generalized compliance, Graham (1991) called it organizational obedience, Williams and Anderson (1991) called it OCB-O, and Borman and Motowidlo (1993) referred to it as following organizational rules and procedures.

5. Individual initiative is regarded as ERB because it involves engaging in task-related behavior at a level over and above generally expected or minimum required levels. It involves behaviors such as voluntary innovation or creativity to improve one's task or the organization's performance, persisting with extra enthusiasm and effort to accomplish one's job, volunteering to take on extra responsibilities, and encouraging others in the organization to do the same. It is similar to Organ's conscientiousness construct (Organ, 1988); Graham's personal industry and Moorman and Blakely's individual initiative constructs (Graham, 1989; Moorman & Blakely, 1995); the making constructive suggestions construct (George & Brief, 1992; George & Jones, 1997); Borman and Motowidlo's persisting with enthusiasm and volunteering to carry out task activities constructs (Borman and Motowidlo, 1993, 1997); Morrison and Phelps' taking charge at work construct (Morrison & Phelps, 1999); and some aspects of Van Scotter and Motowidlo's work (Van Scotter & Motowidlo, 1996). This form of ERB is difficult to distinguish from in-role behavior (IRB) (Organ, 1988), so many researchers have either not included this dimension in their research (MacKenzie, Podsakoff, & Fetter, 1991; MacKenzie et al., 1993) or noted that this type of behavior is difficult to differentiate empirically from in-role behavior or task performance (Motowidlo, Borman, & Schmit, 1997; Van Scotter & Motowidlo, 1996).


6. Civic virtue refers to a macro-level interest in, or commitment to, the organization as a whole. Organizational members exhibit civic virtue by actively participating in the organization's governance (attending meetings, engaging in policy debates, expressing opinions about the organization's strategy), monitoring its environment for threats and opportunities, and looking out for its best interests (reporting fire hazards or suspicious activities, locking doors), even at great personal cost. This dimension has been referred to as civic virtue by Organ (1988, 1990b), organizational participation by Graham (1989), and protecting the organization by George and Brief (1992).

7. Self-development includes the voluntary behaviors employees engage in to improve their knowledge, skills and abilities. George and Brief (1992) regarded developing oneself as a key facet of citizenship behavior. According to George and Brief, this might include "seeking out and taking advantage of advanced training courses, keeping abreast of the latest developments in one's field and area, or even learning a new set of skills so as to expand the range of one's contributions to an organization." Although self-development has not received empirical confirmation in the OCB literature, it does appear to be a conceptually distinct form of employee discretionary behavior.

TRUST
In the last decade, the concept of trust has gained an important place in management research (Kramer and Tyler, 1996; Rousseau et al., 1998). It facilitates relationships between and within organizations, thus reducing transaction costs (Chiles and McMackin, 1996). Trust in the supervisor is seen as pivotal for leader effectiveness and work unit productivity (Kouzes & Posner, 1987). Trust between individuals and groups is very important for the long-term stability of an organization and the well-being of its members (Cook and Wall, 1980). According to Golembiewski and McConkie (1975, p. 131), "there is no single variable which so thoroughly influences interpersonal and group behavior as does trust." Trust takes a long time to build (Sonnenberg, 1993), can be easily destroyed, and is hard to regain. Although there is widespread agreement on the importance of trust in human conduct, there is a lack of agreement on a suitable definition of the concept (Hosmer, 1995). Different researchers have defined trust differently; some have conceptualized trust as one-dimensional, while others have added dimensions to it. Moreover, substantial diversity in disciplinary background, methodologies, and definitions exists in research on trust (Bigley and Pearce, 1998). Trust has been defined as a general disposition toward others (Rotter, 1971), a rational decision about cooperative behavior (Dasgupta, 1988), an affect-based evaluation of another person (McAllister, 1995), and a characteristic of social systems (Barber, 1983). In an attempt to integrate the key components of prior approaches to trust, Mayer, Davis, and Schoorman (1995) defined trust as the willingness to be vulnerable to another party when that party cannot be controlled or monitored. Trust is an individual's optimistic expectation about the outcome of an event (Hosmer, 1995). Deutsch (1958) viewed trust as the irrational expectation of the outcome of an uncertain event, given conditions of personal vulnerability.
McAllister (1995) found support for a two-dimensional conceptualization of trust: affect-based and cognition-based. Lewis and Weigert (1985) proposed a sociological conceptualization of trust with distinct cognitive, emotional and behavioral dimensions.


Rousseau et al. (1998) argued that the differences among scholars in definitions and levels of analysis are less divergent than they may appear at first sight. Based on their analysis of the trust literature, these authors suggest that trust is a "psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another" (p. 395). Mostly, trust is defined in a manner such that the dyad is the underlying unit of reference (Whitener et al., 1998). Depending on the unit of analysis, reference can be made to the trustor (who holds certain expectations about another party and, as a result, may or may not be willing to be vulnerable to the actions of the other party) on one side and the trustee (who is assessed by the trustor) on the other side. A statement about trust, therefore, always concerns at least two parties (individuals, groups, organizations, institutions, or entire societies). Various studies have suggested that trust is a prerequisite to group effectiveness (Friedlander, 1970), group loyalty (Likert, 1961), effective group problem solving (Zand, 1972), managerial and leadership effectiveness (Likert, 1967), effective organizational processes (Dwivedi, 1983), and organizational effectiveness in general (Boss, 1978). An employee may trust his coworkers but distrust his supervisor or top management. Fox (1974) made the distinction between vertical and lateral trust. The term lateral refers to trust relations among peers (or equals) who share a similar work situation, whereas the term vertical refers to trust relations between individuals and either their immediate supervisor, top management, or the organization as a whole. The present study is centered on the vertical relationship between employees and supervisor/top management. Top management is the group of persons at or near the top of the organizational chart (Hart, 1989).

PROPOSED MODEL
Research on trust within organizations has focused mainly on three areas: interpersonal trust (e.g., Cook & Wall, 1980; Mayer, Davis, & Schoorman, 1995; McAllister, 1995; Scott, 1980), trust in the supervisor (e.g., Butler & Cantrell, 1984; Deluga, 1994, 1995; Lagace, 1991), and trust in top management (e.g., McCauley & Kuhnert, 1992). Little attention has been given to the concept of the employer's trust in employees. Specifically, to the best of our knowledge, no research has reported the relationship between the employer's trust in the employee, the employee's OCB, and the opportunity provided by the employer to the employee.

Research Proposition
Trust of the employer in the employee mediates the relationship between the opportunity provided by an employer to an employee and the organizational citizenship behavior exhibited by the employee. Figure 1 shows a graphical representation of the proposed model: Opportunity provided by Employer to Employee → Trust of the Employer in Employee → OCB exhibited by Employee.

Figure 1: Proposed Model
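Although this paper only proposes the model, the mediation in Figure 1 lends itself to the classic regression test of Baron and Kenny (1986): regress OCB on opportunity (total effect c), trust on opportunity (path a), and OCB on both trust and opportunity (paths b and c'); mediation shows up as a substantial indirect effect a·b, with the identity c = c' + a·b holding exactly for ordinary least squares. A minimal sketch on simulated survey scores follows; the variable names, effect sizes, and data are illustrative assumptions, not findings of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical standardized survey scores for the three constructs
opportunity = rng.normal(0.0, 1.0, n)                         # X: opportunity provided by employer
trust = 0.6 * opportunity + rng.normal(0.0, 1.0, n)           # M: employer's trust in employee
ocb = 0.5 * trust + 0.1 * opportunity + rng.normal(0.0, 1.0, n)  # Y: OCB exhibited by employee

def coefs(y, *xs):
    """OLS coefficients via least squares; returns [intercept, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(A, y, rcond=None)[0]

c = coefs(ocb, opportunity)[1]                 # total effect: X -> Y
a = coefs(trust, opportunity)[1]               # path a: X -> M
b, c_prime = coefs(ocb, trust, opportunity)[1:]  # path b (M -> Y) and direct effect c'
indirect = a * b                               # mediated (indirect) effect

print(f"total={c:.3f} direct={c_prime:.3f} indirect={indirect:.3f}")
```

In practice one would also bootstrap a confidence interval for a·b (Preacher & Hayes, 2004) rather than rely on point estimates, since the indirect effect is the product of two estimated coefficients.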


DISCUSSION
Well-known experiments in the field of communication have demonstrated that a trustworthy source can substantially affect a recipient's behavior (Hovland et al., 1949; Allen and Stiff, 1989; Perry, 1996). In addition, trust researchers have observed that trustworthiness can inhibit monitoring by the recipient, thereby decreasing attentiveness and reducing the variety of thought and action (Webb, 1996, p. 292). A trusting recipient is more likely to accept the advice of the source and change its behavior, and trust is more likely to occur when the source is perceived as trustworthy (Andrews and Delahaye, 2000; Mayer et al., 1995). Moreover, the supervisor's behavior is fundamental in determining the level of interpersonal trust in a work unit (Likert & Willits, 1940). These supervisor behaviors include those often used to delineate higher-quality exchanges, i.e., sharing appropriate information, allowing mutuality of influence, and not abusing the vulnerability of others (Zand, 1972). If an employer trusts an employee, this significantly affects the employer's behavior toward the employee: when an employee seeks his supervisor's permission to do a certain task, a trusting supervisor is likely to apply less monitoring and suspicion. In explaining this relationship, we use the words "employer" and "supervisor" interchangeably. The supervisor is the formal link between the organization and subordinates (Strutton, Toma & Pelton, 1993). Supervisors are directly responsible for communicating organizational policies and goals to their subordinates. An employee might judge whether to trust the organization by making inferences from his or her interactions with the supervisor. Similarly, an employee needs permission from the employer to do tasks which are above and beyond the employee's formal job duties. The employee seeks permission because of the legitimate power of the employer/supervisor over the employee.
Moreover, the supervisor, because of his structural position, controls organizational resources which an employee might need for exhibiting OCB. The employee depends on the supervisor for the attainment of those resources. Dependence increases when the resource the employer controls is important, scarce and not substitutable (Mintzberg, 1983). There are instances when a superior or employer controls an employee's behavior (OCB) by limiting the employee's access to those resources. Likewise, the supply of resources, and in turn of opportunities, is limited. When a supervisor provides an opportunity to a specific person, one of the reasons is the employer's trust in that employee.

How Trust Mediates the Relationship
Karl (2000) pointed out that trust among organizational members is at an all-time low, and Morris (1995) noted that non-management employees in 57 service and manufacturing organizations viewed lack of trust as a problem in their respective organizations. Therefore, it is important to understand the manner in which trust affects the relationship. There is a variety of ways in which trust can mediate this relationship. A few of them are mentioned hereunder:
1. Category-based trust is the type of trust which occurs when individuals from a specific group of an organization place high trust in each other simply because of their shared membership of the group (Kramer, 1999). For instance, because of the increasing part-time workforce or the increasing consultant workforce, project teams consist of both internal and external workers. One may argue that the supervisor would trust the internal employees more.


2. Trustee's gender: Research by Keller (2001) and Williams (2001) reported that trust is higher within the same gender. Similarly, Spector & Jones (2004) reported that males' initial trust level was higher for a new male team member and lower for a new female team member. Generally, tasks that require physical exertion are assigned to males rather than females, while customer service jobs are offered more often to females than to males.

3. Trusting stance is an individual's propensity to trust others, or the degree to which an individual consistently deals with people as if they are well-meaning and reliable, across situations and persons. McKnight et al. (1998) asserted that in new relationships among organizational members, an individual's trusting stance will positively affect the degree of initial trust for another individual. Spector & Jones likewise confirmed that trusting stance is positively related to initial trust level. For example, it is quite unlikely that a supervisor assigns a very important task to a newcomer or a young employee.

4. Ability of the trustee is the perceived level of relevant skills, competence and characteristics (Mayer et al., 1995). A number of researchers conceptualized ability as an important antecedent of trust. Cook and Wall (1980) and Good (1988) considered ability an essential element of trust. Other researchers (e.g., Butler, 1991; Butler & Cantrell, 1984; Lieberman, 1981; Rosen & Jerdee, 1977) used a similar term, competence. Sonnenberg (1994) believed that trust increases when an individual is perceived to be competent. For example, in the accountancy profession, the professional qualification of accountants is evidence of their competence; supervisors, being comfortable with their ability, do not hesitate to rely on them.

5. Benevolence is the extent to which the supervisor wants to do good for the subordinate, aside from an egocentric motive (see Mayer et al., 1995). A number of researchers have found benevolence to be a basis of trust (Larzelere & Huston, 1980; Solomon, 1960; Strickland, 1958). Gabarro (1978) argued that subordinates find it difficult to trust until they have first made a favorable assessment of the supervisor's motives. When a supervisor is benevolent, he or she will be friendly and attempt to help subordinates in their work, leading, in turn, to a benevolent perception and the creation of trust.

6. Integrity is the extent to which the supervisor's actions reflect values acceptable to the subordinate (Mayer et al., 1995). A supervisor is considered to have integrity if he or she is perceived to be consistent and credible, with a strong sense of justice and actions that are congruent with words. Integrity was an important trust factor cited by Gabarro (1978), Lieberman (1981), Butler and Cantrell (1984), and Butler (1991). A subordinate will be more likely to develop a relationship with a supervisor who displays values and attitudes similar to the subordinate's than with a supervisor whose values are incongruent (Berscheid & Walster, 1978; Newcomb, 1956).

7. Communication frequency: The effect of both the trustor's and the trustee's characteristics on the level of perceived trustworthiness is moderated by the frequency of communication between the two parties (Becerra & Gupta, 2003). As communication frequency increases, the trustor's general attitudinal predisposition towards peers becomes less important as a determinant of his/her evaluation of the trustworthiness of other managers within the organization. In contrast, as communication frequency increases, the trustor's and trustee's contexts within the organization become more important determinants of perceived trustworthiness.


8. Knowledge about others: Rotter (1971) claimed that if the trustor has little or no knowledge about the trustee, the level of trust will depend on the trustor's propensity to trust.

9. Trustor's own characteristics: Research in psychology has shown that individuals differ in their propensity to trust others and that these propensities result from their early childhood, their personalities, and their experiences in life (Erikson, 1953; Rotter, 1967; Wrightsman, 1974). Moreover, Rotter (1971) argued that if there is little contact between two managers within an organization, perceived trustworthiness can be expected to depend largely on the trustor's own characteristics and position within the organization. For instance, the trusting stance of introverts differs from that of extroverts.

FUTURE RESEARCH AREAS
The idea floated in this research paper needs to be empirically tested across different professions, cultures, job rankings, etc. Moreover, the mediating effect of trust needs to be verified across different personality types (of both employer and employee). Another important point to check is whether this proposition holds true in all situations, for example, when the supply of human resources is short or when a task is urgent. Another instance could be high dependence of the employer on the employee.

References
Allen, M. & Stiff, J. (1989), Testing Three Models for the Sleeper Effect, Western Journal of Speech Communication, 53, 411-426.
Andrews, K.M. & Delahaye, B.L. (2000), Influences on Knowledge Processes in Organizational Learning: The Psychosocial Filter, Journal of Management Studies, 37(6), 797-810.
Barber, B. (1983), The Logic and Limits of Trust, Rutgers University Press, New Brunswick, NJ.
Barnard, C.I. (1938), The Functions of the Executive, Cambridge, MA: Harvard University Press.
Bateman, T.S. & Organ, D.W. (1983), Job Satisfaction and the Good Soldier: The Relationship Between Affect and Employee "Citizenship", Academy of Management Journal, 26, 587-595.
Becerra, M. & Gupta, A.K. (2003), Perceived Trustworthiness Within the Organization: The Moderating Impact of Communication Frequency on Trustor and Trustee Effects, Organization Science, 14, 32-44.
Bigley, G.A. & Pearce, J.L. (1998), Straining for Shared Meaning in Organizational Science: Problems of Trust and Distrust, Academy of Management Review, 23, 405-421.
Borman, W.C. & Motowidlo, S.J. (1993), Expanding the Criterion Domain to Include Elements of Contextual Performance, in N. Schmitt, W.C. Borman & Associates (Eds.), Personnel Selection in Organizations: 71-98, San Francisco, CA: Jossey-Bass.
Borman, W.C. & Motowidlo, S.J. (1997), Task Performance and Contextual Performance: The Meaning for Personnel Selection Research, Human Performance, 10, 99-109.
Borman, W.C. (2004), The Concept of Organizational Citizenship, Current Directions in Psychological Science, 13(6), 238-241.
Boss, R.W. (1978), Trust and Managerial Problem Solving Revisited, Group and Organization Studies, 3(3), 331-342.
Butler, J.K., Jr. & Cantrell, R.S. (1984), A Behavioral Decision Theory Approach to Modeling Dyadic Trust in Superiors and Subordinates, Psychological Reports, 55, 19-28.


Butler, J.K. (1991), Toward Understanding and Measuring Conditions of Trust: Evolution of a Conditions of Trust Inventory, Journal of Management, 17, 643-663.
Chiles, T.H. & McMackin, J.F. (1996), Integrating Variable Risk Preferences, Trust, and Transaction Cost Economics, Academy of Management Review, 21, 73-99.
Cook, J. & Wall, T. (1980), New Work Attitude Measures of Trust, Organizational Commitment and Personal Need Non-Fulfillment, Journal of Occupational Psychology, 53, 39-52.
Dasgupta, P. (1988), Trust as a Commodity, in D. Gambetta (Ed.), Trust: Making and Breaking Cooperative Relations, Basil Blackwell, New York, 49-72.
Deluga, R.J. (1994), Supervisor Trust Building, Leader-Member Exchange and Organizational Citizenship Behavior, Journal of Occupational and Organizational Psychology, 67, 315-326.
Deluga, R.J. (1995), The Relationship Between Trust in the Supervisor and Subordinate Organizational Citizenship Behavior, Military Psychology, 7, 1-16.
Dwivedi, R.S. (1983), Management by Trust: A Conceptual Model, Group and Organization Studies, 8(4), 375-405.
Erikson, E.H. (1953), Growth and Crises of the 'Healthy Personality', in C. Kluckhohn and H. Murray (Eds.), Personality in Nature, Society and Culture, Knopf: New York.
Sonnenberg, F.K. (1993), Trust Me: Trust Me Not, Industry Week, August 16, 1993, pp. 22-28.
Fox, A. (1974), Beyond Contract: Work, Power, and Trust Relations, London: Faber and Faber.
Friedlander, F. (1970), The Primacy of Trust as a Facilitator of Further Group Accomplishment, Journal of Applied Behavioral Science, 6(4), 387-400.
Gabarro, J. (1978), The Development of Trust, Influence and Expectations, in A.G. Athos & J.J. Gabarro (Eds.), Interpersonal Behavior: Communication and Understanding in Relationships (pp. 290-303), Englewood Cliffs, NJ: Prentice-Hall.
George, J.M. & Brief, A.P. (1992), Feeling Good-Doing Good: A Conceptual Analysis of the Mood at Work-Organizational Spontaneity Relationship, Psychological Bulletin, 112, 310-329.
George, J.M. & Jones, G.R. (1997), Organizational Spontaneity in Context, Human Performance, 10, 153-170.
Golembiewski, R.T. & McConkie, M. (1975), The Centrality of Interpersonal Trust in Group Processes, in C.L. Cooper (Ed.), Theories of Group Processes (pp. 131-185), New York: Wiley.
Good, D. (1988), Individuals, Interpersonal Relations, and Trust, in D.G. Gambetta (Ed.), Trust: Making and Breaking Cooperative Relations (pp. 131-185), New York: Basil Blackwell.
Graham, J.W. (1989), Organizational Citizenship Behavior: Construct Redefinition, Operationalization, and Validation, Unpublished Working Paper, Loyola University of Chicago, Chicago, IL.
Graham, J.W. (1991), An Essay on Organizational Citizenship Behavior, Employee Responsibilities and Rights Journal, 4, 249-270.
Hart, K.M. (1989), A Requisite for Employee Trust: Leadership, Psychology, A Journal of Human Behavior, 25(2), 1-7.

Relationship of Organizational Citizenship Behavior of Employee and Opportunity




Key Drives of Organizational Excellence






36

Value Based Management: A Leadership Approach Towards Organizational Excellence Neera Singh

Valuing human relationships means treating people with respect so that they achieve their full potential consistent with the company's interest. When a business is expanding, treating people with respect is important because the business needs to attract, and to encourage the productivity of, the right kind of people. When the business contracts and is cutting its workforce, treating people with respect becomes all the more important in order to maintain the productivity of the employees who remain and to preserve the company's reputation, in the minds of potential employees, customers, and communities, as one that values human relationships. In recent times, enhancing values in turbulent times, both within the organization and among its stakeholders, has become a central purpose of organizations.

INTRODUCTION
"No matter what leaders set out to do - whether it is creating strategy or mobilizing teams to action - their success depends on how they do it. Even if they get everything else just right, if leaders fail in this primal task of driving emotion in the right direction, nothing they do will work as well as it could or should" (Goleman et al., 2002).
A great turmoil exists within our organizations today. Our followers are part of a global uncertainty and a search for meaning that translates back into their organizational performance. The heightening uncertainty across the world has led to an intense search for meaning, and our connections as people and as leaders are part of this context. There was a time when the values within our society spilled over into our organizations. Since then these values have eroded, leaving our followers feeling alone and uncertain about their future and their role within our organizations. What has happened to cause this? In Values-Based Leadership, Kuczmarski and Kuczmarski (1995) state that "the disintegration of the family" is partly to blame. They tell us that heightened uncertainty, increased connectivity, a global economy, a changing workforce, increased speed of delivery and a heightened search for meaning have all contributed to this erosion, and that "the problems that plague our society are mirrored in the workplace." Leaders therefore need to be aware of these issues and understand their role within this context. Their solution to alienation and uncertainty is that "employees need values they can believe in"; otherwise our organizations can expect to decline in efficiency and effectiveness, ultimately eroding our country's competitive position.

THE RELEVANCE OF VALUE BASED MANAGEMENT
Value Based Management, contemporary a topic as it is, becomes important only when leadership inculcates it in all the precincts of management, and when its relevance does not vanish with the leader but persists even when he is no longer at the helm. This gives followers not alienation but significance, which in turn enhances organizational excellence: it moves values from alienation to significance and secures their long-term presence in the organization, protecting and preserving them. The leadership approach includes the three basic elements of Value Based Management in its strategy for sustaining excellence over a longer time period: creating values, managing values, and fortifying values.

These values are formed from the beliefs and the common ethos that people in the organization share and that are beneficial for the organization's growth. These values guide intentions, which shape the actual behavior of employees, and also guide them when a decision has to be made. Convergence on certain ideas and views facilitates organizational excellence in terms of better services and products and a better environment, helpful for the society in which the organization thrives, generating core values based on the acceptance of a certain belief system (Cohan, 2003). The concept of a five-level hierarchy consists of creating values among employees that have their general acceptance. These beliefs of the founder members develop the identity of the organization and help in creating its image.


Creating Value
The model here consists of a "Level 5 hierarchy":
1. Level 1 concerns the creation of capable individuals: you identify them, procure them, assimilate them, and orient them in sharing your organizational values.
2. Level 2 consists of making them valuable as team members by incorporating ideas whereby they collaborate, coordinate and coexist for the organizational objective.
3. Levels 3 and 4 consist of adding their value as managers and as assets to the organization.
4. Level 5 is the most important, as it is about creating the leaders of tomorrow: the ones who take up the mantle of the organization and steer it through turbulent times. It is the task of these few to ensure that no anomalies are created that disillusion employees and lead to their disengagement from the organization that provides them sustenance.


Managing For Values
The created values, in terms of valued employees, need to be managed and kept in the organization, because it is through these employees that the culture of the organization sustains itself over the longer duration. The value of the organization is thus kept intact by:
• Leadership
• Change Management
• Organizational Culture
• Communication
• Governance

The Need for the Managing Values Model
The need is to create an ambience conducive enough to maintain this atmosphere and to produce, in terms of output, enhanced performance, organizational excellence and loyal employees.

Invitational Leadership Approach (ILA): A Synthesis Model
Purkey and Siegel (2003) posited that ignorance of the law is rarely an acceptable excuse for unacceptable actions and behaviors. Likewise, in the organization, ignorance of the likely effects of executive decisions and relationships is no longer tolerable. How to act, in style, presence, appearance, behaviors and words, is expected to be a deliberate choice among known alternatives. Reverberations through a workgroup of an executive's "shadow side" are sometimes described using psychological labels such as dramatic, depressive, paranoid, compulsive, or schizoid. Firm dysfunctions associated with the dramatic leader are overextension of resources and early burnout. Depressive leaders spawn organizations that engage in misplaced cost-cutting measures and public scolding. Paranoid executives and businesses are full of suspiciousness and hyper-alertness, suspecting infiltrations that must be uncovered, thus wasting company time and talent. Compulsive groups reflect the perfectionist manager who is preoccupied with trivial details and micromanagement. The schizoid pattern is reflected in employees' detachment, their self-protection from double messages and unrealistic or contradictory objectives. The dark sides of talented second-generation leaders in corporations have led to numerous failures, restructurings, and new market-share leaders, because those leaders could not manage values well. Firm names, closet depressions, and compulsions to be first-or-nothing produced leaders debilitated by "recrudescent narcissism", a term Harry Levinson recently coined to describe a breakdown in decision and judgment abilities: a previously effective leader makes defective choices that would not have been made earlier in the management career.
There is a connection between such regressive decisions and corporate mismanagement, the implication being that the organization fails to maintain its effectiveness among the levels created in the first step of the hierarchy. ILA calls for a synthesis of these two broad, generalized zones of managerial activity. A recent melding of this well-established foundation for effective management work describes the


ubiquitous pair as "seemingly contradictory, yet in fact complementary leadership virtues": forceful and enabling behaviors. For the ILA model, I chose to label our double-barreled management scenario as charismatic and participatory. These terms emphasize the manager's need to inspire and develop co-workers while also rolling up one's own sleeves to get the mission accomplished, i.e., assuring appropriate group structure, mapping out coordination, and scouting the pathways to needed resources. The ILA manager is a mapmaker and a pathfinder, an individual with high hope. Rick Snyder says the high-hope manager is one who considers goals as challenges and enjoys figuring out the how-to part of reaching each objective. The ILA manager, however, also remains able to give up working on that which becomes obviously unattainable (Wasserman, 2008). Kaplan and Kaiser (2003) focused on a measurement method that lets managers think about what it is they are too forceful about, or not forceful enough about, and, on the other side of the coin, what it is they are too empowering about, or not empowering enough about. This allows managers to get a fix on what needs to be adjusted in current leadership practices. The fresh and clear benefit of their method is that it reflects what is overdone and what is underdone, when someone is doing too much or too little. The better information comes from multi-rater responses to multiple questions. Responses are then contrasted, showing the gaps between what others perceive and what a manager self-reported. The contrast in results is often a shocking experience. Not all managers will be willing to take a look at what others say about their management behavior. Not everyone is interested in hearing whether others think their way of managing is too charismatic or not charismatic enough, or too participatory or not participatory enough.
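The multi-rater gap analysis described above can be sketched in a few lines of code. This is an illustrative sketch only, not Kaplan and Kaiser's actual instrument: the dimension names ("forceful", "enabling") and the 1-5 rating scale are assumptions made for the example.

```python
# Illustrative multi-rater gap analysis in the spirit of Kaplan and
# Kaiser (2003). Dimensions and the 1-5 scale are assumed for the demo.
from statistics import mean

def rating_gaps(self_report, peer_ratings):
    """For each dimension, return the gap between the average peer
    rating and the manager's self-report (positive = peers rate higher)."""
    gaps = {}
    for dim, self_score in self_report.items():
        peer_avg = mean(rating[dim] for rating in peer_ratings)
        gaps[dim] = round(peer_avg - self_score, 2)
    return gaps

self_report = {"forceful": 4, "enabling": 3}
peers = [{"forceful": 5, "enabling": 2},
         {"forceful": 5, "enabling": 2},
         {"forceful": 4, "enabling": 3}]
print(rating_gaps(self_report, peers))
# → {'forceful': 0.67, 'enabling': -0.67}
```

A large positive gap on "forceful" suggests the manager is seen as more forceful than self-reported (possibly overdoing it), while a negative gap on "enabling" suggests empowerment is underdone: exactly the overdone/underdone contrast the method is meant to surface.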
Specific questions that address each broad behavioral pattern related to ILA provide a senior partner with a useful, comprehensive picture. For law firms committed to self-improvement, an in-house multi-rater survey provides an excellent way to move toward adopting the ILA philosophy of management. When administering a firm in ways that are empowering, charismatic, and transformational, a Managing Partner or Department Head schedules time for personal attention to members of the firm. The manager creates learning and experience opportunities for partners, associates, other professionals, and support staff to gain competence with collaborative partnering, with client-finding and client-satisfying, and with how to go about improving current operations and cultivating new business development. It also means taking time for planning. Short-term planning means assuring that tasks are fairly distributed, that assignments go to appropriately talented individuals, and that decision-making authority is distributed and resourced, ideally to those closest to the front line. The traditional debriefing of both successful and not-so-successful cases is an evaluation review that is too often neglected. Much planning can and should be done solo, especially when the partner has all the information necessary for a good decision. When others have important information to contribute, it is better to assemble a small group to assure effective decisions. Long-range and strategic planning are "future" challenges that fall naturally into a group decision-making realm. Working separately with professional staff and support staff is typically more efficient and broadens the range of productive goal setting. Thus, in managing values, the impact of leadership is the principal force; the Level 5 leaders are the main precursors in handling and sustaining values.


The rating scale given below assesses the leadership role and its success in communicating the values down the line. Each item is a bipolar scale; rate the organization between the left-hand pole and the right-hand pole:

1. Ambiguity, Low Clarity of Goals for Work Together ... Clear Goals and Consensus about Them
   (evidence of ambiguity ... evidence of clarity)
2. Unclear Mix of Roles to be Performed ... Clear Roles
   (evidence of confusion ... evidence of clarity)
3. Low Trust ... High Trust
   (evidence of low trust ... evidence of high trust)
4. Very High or Low Expectations of Help ... Realistic, Congruent Expectations
   (evidence of over-, under-, or counter-expectations ... evidence of realistic expectations)
5. Dependence or Counter-Dependence ... Inter-Dependence
   (evidence of over- or counter-dependence ... evidence of interdependence)

The more checks you can make toward the right end of the scales, the readier your law practice will be to be introduced, and to respond, to an Invitational Leadership Approach. The more checks on the left end, the more important it is to take a long, hard look at revising your firm's mission, structures, and staffing.
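The five bipolar scales above can be tallied programmatically. The sketch below is purely illustrative and rests on conventions not stated in the text: each dimension is scored 1 (left pole) through 5 (right pole), a score of 4 or more counts as a "check toward the right end", and readiness means a majority of such checks. The dimension labels are shortened paraphrases of the scale names.

```python
# Hypothetical scorer for the five readiness scales.
# Assumptions (not from the source): 1-5 scoring per dimension,
# >= 4 counts as a right-end check, majority of checks = ready.
DIMENSIONS = ["goal clarity", "role clarity", "trust",
              "realistic expectations", "interdependence"]

def ila_readiness(scores):
    """Given {dimension: 1..5}, count right-leaning dimensions and
    judge readiness for an Invitational Leadership Approach."""
    right_leaning = sum(1 for d in DIMENSIONS if scores[d] >= 4)
    return {"right_leaning": right_leaning,
            "ready": right_leaning > len(DIMENSIONS) // 2}

scores = {"goal clarity": 4, "role clarity": 5, "trust": 4,
          "realistic expectations": 2, "interdependence": 3}
print(ila_readiness(scores))
# → {'right_leaning': 3, 'ready': True}
```

A firm scoring low on most dimensions would get `ready: False`, signalling, as the text puts it, that it is time to revisit mission, structures, and staffing first.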

CHANGE MANAGEMENT
Change is a very prevalent occurrence in organizations. Through these changes the organization has to be persistent so that a reversal of values does not happen; in fact, the changes help it sail through, and facilitate, any transformation process that is occurring. This is brought about through the attitude of the people. Whether the change is large or small, the ability to manage it is a critical component of high performance. It helps organizations prepare for coming changes and manage the complex organizational and workforce transition to the desired end state. When the change comes from the leaders, it has to have the values embedded in it, so that the change is sustained for a longer duration and its impact is long-lasting. This helps in managing changes and in operating successfully once a business transformation is in place, so as to realize the greatest long-term value from business improvement efforts. The four phases of people transformation are:
1. Creating awareness: bringing a common understanding of the intended change, aligned with individual values, and ensuring that knowledge about the change has spread.
2. Building acceptance: creating environments conducive to a change in mindsets and a sense of ownership, to gain positive perception of the change initiative.
3. Accomplishing adoption: transferring ownership of the change program to the business through institutionalization and internalization.
4. Achieving assimilation: coaching, problem solving, and addressing persistent pockets of resistance.
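The four phases form an ordered progression. A minimal sketch of tracking that progression follows; the phase names come from the text, while the `ChangeProgram` class and its no-skipping rule are illustrative assumptions, not part of the model as stated.

```python
# Minimal sketch of the four-phase people-transformation sequence.
# Phase names are from the text; the tracker class is illustrative.
PHASES = ["awareness", "acceptance", "adoption", "assimilation"]

class ChangeProgram:
    def __init__(self):
        self._index = 0  # every program starts at "creating awareness"

    @property
    def phase(self):
        return PHASES[self._index]

    def advance(self):
        """Move to the next phase; phases cannot be skipped, and the
        program stays at assimilation once it is reached."""
        if self._index < len(PHASES) - 1:
            self._index += 1
        return self.phase

program = ChangeProgram()
program.advance()      # awareness -> acceptance
print(program.phase)   # → acceptance
```

The point of the ordering is the one the text makes: ownership cannot be transferred (adoption) before mindsets shift (acceptance), and assimilation work such as coaching only pays off once adoption has occurred.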

In fact, any change is effective only when it protects the core values. The total approach towards managing values is achieved only when we commit ourselves to the core values, balance performance using both qualitative and quantitative factors, and reward and appreciate employees with the rewards they value most, such as a blend of respect and high compensation (Harris, 1997). This whole process needs proper communication and governance in a proactive culture and climate.

VALUE FORTIFICATION
In a situation where organizations are continuously in turmoil due to constant change and transformation, the need of the hour is to be anchored to something substantial and long-lasting. The belief system that gets floated as the core value of the organization has to fortify the values. In moving employees to significance and addressing anomie within our organizations, leaders who lead from a context that focuses more on followers' needs and less on self-interest will go much further in connecting with followers and giving them values they can believe in. Authentic transformational leadership is one such approach; it is altruistic because it places the followers' needs before the needs of the leader. Another approach is servant leadership, where the leader serves as a steward of the organizational resources. Organizations are clearly in turmoil due to the anomie creeping into them, and leaders have an important role in addressing values erosion and its impact on the organization. So what can leaders do to address values erosion? The two solutions to overcoming this erosion are values preservation and values protection, which together form value fortification (Lomash, 1997).
1. In values preservation a leader intentionally engages in values modeling and values casting. In values modeling a leader basically lives and breathes the values; this "modeling the way" is considered one of the five practices of a good leader, and a leader needs to be "clear about their guiding principles" to succeed with it. The other piece of values preservation is values casting, in which the leader continually communicates the values of the organization. It is believed that a variety of methods, not just one communication source, will achieve this. Some effective methods, when used in combination, are written statements, informal and formal communications, storytelling, and presentations. Effective values casting happens not through one method but through a variety of communication methods.
2. The second solution to overcoming values erosion is values protection. Leaders can protect values by prevention: building processes to protect values and hiring employees who share the values of the organization. Another method is correction: it is believed leaders need to "challenge people who do not share the organizational values" and are not committed to them, which is similar to the leadership practice of challenging the process. A final method of values protection is value realigning, in which leaders determine where values are not aligned and then work with their employees to determine where realignments need to occur.
By focusing on an altruistic leadership approach, values preservation and values protection, leaders can begin to address this issue of anomie and move their followers to significance. Phil Downer, in Eternal Impact, refers to success as "the feeling you get by reaching your goals" while significance is "making a difference in the lives of people." By engaging in this, leaders can once again gain in employee commitment, high performance and productivity. As leaders, are we looking to reach our own goals, or have we been placed into leadership to help our followers attain theirs? The answer to this question is the difference between creating alienation or significance within our organizations.

CONCLUSION
Thus the truth of the concept of value based management as a leadership approach lies in the inception of the core values of the organization through the leader's vision amid the ever-present flux of the organization, and in maintaining those values through all the organization's changes for times to come. It calls for a synthesis of the values with any change from its inception. This starts with a leadership of empowerment from within and speaks to enhancing the performance excellence of the employee. Organizational excellence, however, also requires these values to feature in the Corporate Social Responsibility (CSR) of the organization. An incident like the Bhopal Gas Tragedy, in which Union Carbide Limited, India was involved, depicts what happens when CSR does not have values in it and the personal values of employees are not in alignment with those of the organization. The first step is to practice CSR, and then to institutionalize it. Institutionalizing CSR is imperative to make its practice go beyond profit, compliance with the laws, public relations, mere philanthropy and dole-outs. In any attempt to institutionalize CSR, it is also crucial that business leaders go beyond external regulations. The talking and walking of CSR must necessarily incorporate the talking and walking of corporate and personal values. When management is value based, CSR practitioners are empowered from within, intrinsically and fundamentally. The new millennium regards CSR as embracing an integrative approach to corporate social performance that includes, among other things, formal attempts from many quarters to institutionalize CSR, the regulatory approach, VBM, and VBL. The short-term benefits of CSR involve harm minimization and the traditional cost-benefit analysis with shareholder value in mind. Long-term benefits are reaped when CSR becomes strategic, that is, part of the core business, and therefore institutionalized.
Being strategic in one's approach to integrating CSR into the core business is an attempt to institutionalize CSR. However, even with CSR being a vogue terminology in today's organizations, it has yielded results in very few of them, because the leaders at the helm fail to synthesize the values with Corporate Social Responsibility. Thus it remains prevalent just as "name for name's sake".


Figure 1: Balancing short-term with long-term business benefits of CSR


Thus Value Based Management achieves sustainability only when it is focused on personal values, particularly moral values, and therefore non-material values. As seen, managers matter so much in doing and institutionalizing CSR because they are the ones who take the concept and the values to the shop-floor level, help institutionalize CSR, and create organizational excellence.

References
Cohan, P. S. (2003), Value Leadership: The Seven Principles That Drive Corporate Value in Any Economy, New Delhi: Jossey-Bass.
Goleman, D., Boyatzis, R. and McKee, A. (2002), Primal Leadership, Boston: Harvard Business School Press.
Harris, M. C. (1997), Value Leadership: Winning Competitive Advantage in the Information Age, New Delhi: Prentice Hall.
Kaplan, R. E. and Kaiser, R. B. (2003), "Rethinking a Classic Distinction in Leadership: Implications for the Assessment and Development of Executives", Journal of Consulting Psychology: Research and Practice, 55(1), 15-25.
Kuczmarski, S. S. and Kuczmarski, T. D. (1995), Values-Based Leadership, Englewood Cliffs: Prentice Hall.
Lomash, S. (1997), Value Management, New Delhi: Sterling Publishers.
Purkey, W. and Siegel, B. (2003), Becoming an Invitational Leader: A New Approach to Professional and Personal Success, Radford, VA: Radford University, Center for Invitational Education Clearinghouse.
Wasserman, N. (2008), "The Founder's Dilemma", Harvard Business Review, South Asia, February.

37

Managing Requirements Through Creative Process Model Satish Bansal KK Pandey

Requirements change continuously, either because their incomplete and non-deterministic character surfaces as defects during review or testing, or because the needs themselves change. When a change is needed, procedures for managing it should be flexible enough to allow improvements yet rigorous enough to prevent other product-development problems. A continuous stream of requirements changes creates major problems for the creative process in any system, but by selecting the process model best suited to the user's requirements, such problems can be minimized. Analogous to the models used in business to guide strategic planning, quality improvement, problem solving and other activities, there are models to guide creativity and innovation. This chapter explores the various models of creative thinking suggested in the literature, extracts the common themes running through them, and presents a composite model that integrates these themes.
Keywords: Requirements, Development Tools, Strategy, Research & Development, Tools & Techniques.

INTRODUCTION This chapter reviews models of the creative process, following Plsek (1997). When we start a business we also start observing and thinking, often without asking whether our approach is suitable. Many process models exist; by choosing one that fits our requirements we can reduce business risk and deliver good quality. In the context of process improvement, descriptive process models are useful for understanding the way things currently work in the organization and for communicating this understanding. Some of the basic questions a process model must answer are: What are the tasks in the process? What are the dependencies between these tasks? Who, or what, performs these tasks? The question, then, is how such an approach can


realistically be implemented in different organizations. This chapter is structured according to the phases of a process improvement activity as we see it.
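The three questions above can be captured in a tiny descriptive process model. The sketch below (in Python; the task names and roles are invented for illustration and do not come from the chapter) records the tasks, their dependencies, and who performs them, and checks that a workable execution order exists:

```python
from graphlib import TopologicalSorter

# task -> (performing role, prerequisite tasks); all names are invented examples
process_model = {
    "gather requirements": ("analyst",   set()),
    "review requirements": ("reviewer",  {"gather requirements"}),
    "design solution":     ("designer",  {"review requirements"}),
    "implement":           ("developer", {"design solution"}),
    "test":                ("tester",    {"implement"}),
}

# The three questions a descriptive process model must answer:
tasks = list(process_model)                                         # the tasks
dependencies = {t: deps for t, (_, deps) in process_model.items()}  # their dependencies
performers = {t: role for t, (role, _) in process_model.items()}    # who performs them

# A valid execution order exists only if the dependency graph is acyclic.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Representing the model as data makes it easy to communicate and to check for missing or circular dependencies before any improvement work begins.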

A REVIEW OF CREATIVE THINKING MODELS One of the earliest models of the creative process is attributed to Graham Wallas. Wallas (1926) proposed that creative thinking proceeds through four phases.

The Wallas Model for the Process of Creativity
1. Preparation (definition of issue, observation, and study)
2. Incubation (laying the issue aside for a time)
3. Illumination (the moment when a new idea finally emerges)
4. Verification (checking it out)

While some models make it appear that creativity is a somewhat magical process, the predominant models lean more toward the theory that novel ideas emerge from the conscious effort to balance analysis and imagination. Rossman (1931) examined the creative process via questionnaires completed by 710 inventors and expanded Wallas’ original four steps to seven.

Rossman's Creativity Model
1. Observation of a need or difficulty
2. Analysis of the need
3. A survey of all available information
4. A formulation of all objective solutions
5. A critical analysis of these solutions for their advantages and disadvantages
6. The birth of the new idea (the invention)
7. Experimentation to test the most promising solution, and the selection and perfection of the final embodiment

Note that while Rossman still shrouds the “birth of the new idea” in mystery, his steps leading up to and following this moment of illumination are clearly analytical. Alex Osborn (1953), the developer of brainstorming, embraced a similar theory of balance between analysis and imagination in his seven-step model for creative thinking.

Osborn's Seven-Step Model for Creative Thinking
1. Orientation: pointing up the problem
2. Preparation: gathering pertinent data
3. Analysis: breaking down the relevant material
4. Ideation: piling up alternatives by way of ideas
5. Incubation: letting up, to invite illumination
6. Synthesis: putting the pieces together
7. Evaluation: judging the resulting ideas


Note that Osborn implied purposeful ideation both in his notion of "piling up alternatives" and through his development of the rules of brainstorming as a tool for doing so. The systematic combination of techniques for directed creativity and techniques for analysis continues as a strong theme in several more recently proposed models. Parnes (1992) and Isaksen and Treffinger (1985) outline six steps in their popular creative problem solving (CPS) model. (Tens of thousands of people have learned the CPS model and its associated tools through the seminars conducted by the Creative Education Foundation in Buffalo, NY.)

The Creative Problem Solving (CPS) Model
1. Objective finding
2. Fact finding
3. Problem finding
4. Idea finding
5. Solution finding
6. Acceptance finding

Steps 3 and 4 (problem and idea finding) clearly require novel, creative thinking, while steps 1, 2, 5, and 6 require traditional skills and analytical thinking. Koberg and Bagnall (1981) propose a similar balanced model in their popular book, The Universal Traveler.

Koberg and Bagnall's Universal Traveler Model
1. Accept the situation (as a challenge)
2. Analyze (to discover the "world of the problem")
3. Define (the main issues and goals)
4. Ideate (to generate options)
5. Select (to choose among options)
6. Implement (to give physical form to the idea)
7. Evaluate (to review and plan again)

Again, notice that ideation, the traditional focus of creative thinking tools such as brainstorming, is preceded and followed by deliberate analytical and practical thinking. Also note the importance that Koberg and Bagnall place on accepting the situation as a personal challenge. This is consistent with the research into the lives of great creators that


illustrates the importance of focusing and caring deeply. Finally, note that the final steps of this model support the notion of continuous innovation. The theme of creative and analytical balance carries over into models proposed for specific applications, such as Bandrowski's (1985) process for creative strategic planning.

A Model for Creative Strategic Planning
Analysis: Standard Planning; Insight Development
Creativity: Creative Leaps; Strategic Connection
Judgment: Concept Building; Critical Judgment
Planning: Action Planning; Creative Contingency Planning

Notice the positive role of judgment in this model and the need for applying specific creative skills in insight development, creative leaps, and creative contingency planning.

COMMON THEMES BEHIND THE MODELS OF THE CREATIVE PROCESS While there are many models of the process of creative thinking, it is not difficult to see the consistent themes that span them all.
1. The creative process involves purposeful analysis, imaginative idea generation, and critical evaluation; the total creative process is a balance of imagination and analysis.
2. Older models tend to imply that creative ideas result from subconscious processes, largely outside the control of the thinker. Modern models tend to imply purposeful generation of new ideas, under the direct control of the thinker.
3. The total creative process requires a drive to action and the implementation of ideas. We must do more than simply imagine new things; we must work to make them concrete realities.
4. The process continues in a cycle until it fulfills our requirements.

These insights from the historical models of creative thinking are meant to challenge and encourage us. Serious business people often have strong skills in practical, scientific, concrete, and analytical thinking, and these skills will serve us well as we engage the creative process. Contrary to popular belief, the modern theory of creativity does not require that we discard them. What we do need to do, however, is to acquire some new thinking skills to support the generation of novel insights and ideas. Importantly, we also need to acquire the mental scripts to balance and direct these new thinking skills in concert with our traditional ones. If we can meet this challenge, we stand well equipped to help lead our organizations to competitive advantage through innovation.


THE DIRECTED CREATIVITY CYCLE: A SYNTHESIS MODEL OF THE CREATIVE PROCESS The Directed Creativity Cycle is a synthesis model of creative thinking that combines the concepts behind the various models proposed over the last 80+ years. The Directed Creativity Cycle

Let's walk through it, beginning at the 9:00 position on the circle. We live every day in the same world as everyone else, but creative thinking begins with careful observation of that world coupled with thoughtful analysis of how things work and fail. These mental processes create a store of concepts in our memories. Using this store, we generate novel ideas to meet specific needs by actively searching for associations among concepts. There are many specific techniques we can use to make these associations: analogies, branching out from a given concept, using a random word, classic brainstorming, and so on. The choice of technique is not so important; making the effort to actively search for associations is what is key. Seeking the balance between satisfying and premature judgment, we harvest and further enhance our ideas before we subject them to a final, practical evaluation. But it is not enough just to have creative thoughts; ideas have no value until we put in the work to implement them. Every new idea that is put into practice changes the world we live in, which restarts the cycle of observation and analysis. Directed creativity simply means that we make purposeful mental movements to avoid the pitfalls associated with our cognitive mechanisms at each step of this search for novel and useful ideas. We cannot stop changes in a system's requirements, but we can minimize their impact: the directed creativity process model is a cycle that continually controls and manages the problem of changing requirements.
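As a rough illustration (not part of Plsek's published model), the cyclical character of directed creativity can be sketched as a loop over phases that restarts after each implementation until a requirements check passes. The phase names and the stopping rule below are assumptions made for the sketch:

```python
from itertools import cycle

# Phase names paraphrase the walk-through above; they are an assumption.
PHASES = ["observation", "analysis", "generation",
          "harvesting", "evaluation", "implementation"]

def run_cycle(is_satisfied, max_steps=100):
    """Step through the phases repeatedly; each completed implementation
    changes the world and restarts observation, until the requirements
    check passes (or we give up after max_steps)."""
    completed_loops = 0
    for step, phase in enumerate(cycle(PHASES), start=1):
        if phase == "implementation":
            completed_loops += 1
            if is_satisfied(completed_loops):
                return completed_loops
        if step >= max_steps:
            return completed_loops

# Suppose the needs stabilise only after three full passes through the cycle.
print(run_cycle(lambda loops: loops >= 3))
```

The point of the loop is the one the text makes: implementation is not the end of the process but the event that restarts observation and analysis.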


CONCLUSION Many theories hold that creativity is inborn, and this is true; but it is equally true that creativity develops with continuous practice in an area, provided the person involved is genuinely interested in examining every corner of it. The capability to make suggestions in a particular area comes only with deep study. Creativity can be applied in two ways:
1. Creativity in an existing system (in terms of cost-benefit analysis):
   - replacing the existing system fully through creativity
   - replacing the existing system partially
2. Creating new systems

The model also purposefully avoids taking a stand on the controversy of whether imagination is a conscious or subconscious mental ability. While the researchers believe that imagination is a conscious, non-magical mental action, the activity of "generation" in the model welcomes creative ideas regardless of their source. Finally, notice that this model clearly supports the notion that innovation is a step beyond the simple generation of creative ideas: the Action phase makes it clear that creative ideas have value only when they are implemented in the real world. This chapter has discussed principles of change management in requirements and provided some guidelines for identifying, planning and conducting requirements management activities.

References
Bandrowski, J. F. (1985), Creative Planning Throughout the Organization, New York: American Management Association.
Isaksen, S. G. and Treffinger, D. J. (1985), Creative Problem Solving: The Basic Course, Buffalo, NY: Bearly Publishing.
Koberg, D. and Bagnall, J. (1981), The All New Universal Traveler: A Soft-Systems Guide to Creativity, Problem-Solving, and the Process of Reaching Goals, Los Altos, CA: William Kaufmann, Inc.
Osborn, A. (1953), Applied Imagination, New York: Charles Scribner.
Parnes, S. J. (1992), Sourcebook for Creative Problem Solving, Buffalo, NY: Creative Education Foundation Press.
Plsek, P. E. (1997), Creativity, Innovation and Quality, Milwaukee, WI: Quality Press.
Rossman, J. (1931), The Psychology of the Inventor, Washington, DC: Inventor's Publishing.
Wallas, G. (1926), The Art of Thought, New York: Harcourt Brace.

38

Knowledge Acquisition for Marketing Expert System Based on Problems of Marketing Domain Snehal Mistry

In the marketing domain it is of the utmost importance to obtain the right detail about the customer at the right time when designing and developing the organisation's marketing strategies. To help marketers with this task there is a set of techniques called 'knowledge acquisition', widely used in developed countries to meet marketing challenges, but these techniques have problems of their own. This chapter therefore provides a thorough literature review, with a prima facie conceptualization, to map a generic problem domain and thereby guide developers of expert systems in the field of marketing in their choice of knowledge-acquisition technique. It also suggests how worthwhile non-traditional knowledge-acquisition techniques can be for gaining more detailed and subjective insight into the customer. The chapter concludes that designers of expert systems for marketing should consider interviewing and card sorting as important means of knowledge acquisition, rather than the usual means such as protocol analysis.

INTRODUCTION In the marketing domain it is of the utmost importance to obtain the right detail about the customer at the right time when designing and developing the organization's marketing strategies. The application of expert systems technology to marketing problems has been steadily increasing within the industry to meet current challenges. The most commonly cited problems in developing these systems are the unavailability of both experts and knowledge engineers, and difficulties with the rule-extraction process. Within the field of artificial intelligence this has been called the "knowledge acquisition" problem and has been identified as the greatest bottleneck in the expert system development process. Simply stated, the problem is how to acquire the specific knowledge for a well-defined problem domain from one or more experts and represent it efficiently in the appropriate computer format. Given the "paradox of expertise" (Hoffman, 1987), the experts in question have often focused on procedures to the point that they have difficulty explaining exactly what they know and how they know it. However, new empirical research in the field of expert systems reveals that certain knowledge-acquisition techniques are significantly more efficient than


others in different domains and scenarios. To compare the effectiveness of different techniques objectively, five determinants of the quality of the resulting knowledge base have been identified:
1. domain experts;
2. knowledge engineers;
3. knowledge representation schemes;
4. knowledge elicitation methods; and
5. problem domains.

This chapter attempts to link the body of empirical studies to the different problem domains within the field of marketing, with the aim of drawing out implications for developers of marketing expert systems in their choice of knowledge-acquisition techniques.

A GENERIC PROBLEM DOMAIN TAXONOMY
Research in the field of knowledge acquisition has focused on several dimensions of the problem as determining factors. One primary determinant of the knowledge-acquisition technique used to develop an expert system is the problem domain. The most commonly used taxonomy divides problems into the general categories of analysis, synthesis, and those that combine analysis and synthesis, as shown in Table I.

Table I: Generic Problem Domains

Analysis problems:
Classification: categorizing based on observables
Debugging: prescribing remedies for malfunctions
Diagnosis: inferring system malfunctions from observables
Interpretation: inferring situation descriptions from sensor data

Synthesis problems:
Configuration: configuring collections of objects under constraints in relatively small search spaces
Design: configuring collections of objects under constraints in relatively large search spaces
Planning: designing actions
Scheduling: planning with strong time and/or space constraints

Problems combining analysis and synthesis:
Command and control: ordering and governing overall system control
Instruction: diagnosing, debugging, and repairing student behaviour
Monitoring: comparing observations to expected outcomes
Prediction: inferring likely consequences of given situations
Repair: executing plans to administer prescribed remedies

Source: Clancey's (1985) taxonomy

Table II shows how generic task domains can be mapped into specific marketing task domains, with selected examples of marketing expert systems. It is evident that a wide variety of problems have been addressed with expert systems, with varying levels of success, ranging from forecasting demand to analyzing advertising campaigns and promotions. The most common applications are in the domains of pricing, media planning and scheduling, with no clear example of a system in the "repair" domain. These tasks were then placed in the generic taxonomy based upon the generic task descriptions.


In addition, the process of mapping specific functions to the more abstract categories of analysis, synthesis, and the combination of the two reveals some interesting characteristics of marketing problems.

Table II: Marketing Task Domains and Expert Systems

Analysis (classification, debugging, diagnosis, interpretation)
Marketing task domains: sales prospect qualification, market targeting, discount evaluation, promotion evaluation, evaluating potential distributors
Examples: Ainscough and Leigh (1996); AMOS (Levin et al., 1995); Ebersold (1991); PROMOTER (Abraham and Lodish, 1987); Business Insights (McNeilly and Gessner, 1993)

Synthesis (configuration, design, planning, scheduling)
Marketing task domains: pricing, on-site price quotes, retail space allocation, advertising, process design, market segmentation, media planning, sales scheduling, ad-spot scheduling, order scheduling
Examples: PRICER (Bernstein, 1989); ADCAD (Burke et al., 1990); COMSTRAT (Moutinho et al., 1993); Expert Rule (Heichler, 1993); Logix (Mentzer and Gandhi, 1993)

Combination (command and control, instruction, monitoring, prediction, repair)
Marketing task domains: market entry, partner selection, marketing budget evaluation, consumer product advising, competitor pricing analysis, forecasting, customer retention, international negotiations, maint.
Examples: PARTNER (Cavusgil, 1995); Product Advisor (Bernstein, 1989); CompShop (Fox, 1992); NEGOTEX (Rangaswamy et al., 1989)

Looking at the marketing tasks that fall within the analytic category shows that all of these tasks involve taking a set of data inputs and identifying patterns in them. In contrast, the synthetic problems require that solutions be generated based upon the more general goals of the system and involve the search of a much larger set of potential solutions. Combinations of the two are typically the most ambitious types of expert systems in that they must perform in-depth analysis of large amounts of diverse input data, identify the problems and causes and design a possible solution. These categories are meant to serve as a guide to begin thinking about which knowledge acquisition technique might be the most appropriate for the different problem domains within marketing.
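The taxonomy of Tables I and II can be held as a simple lookup structure, so that a new marketing task, once described by its generic task type, can be placed in a broad category. A minimal sketch (the lookup function itself is an illustrative assumption, not something the chapter prescribes):

```python
# The three broad categories and their generic tasks, as in Table I.
TAXONOMY = {
    "analysis":    {"classification", "debugging", "diagnosis", "interpretation"},
    "synthesis":   {"configuration", "design", "planning", "scheduling"},
    "combination": {"command and control", "instruction", "monitoring",
                    "prediction", "repair"},
}

def broad_category(generic_task):
    """Place a generic task into its broad problem category."""
    for category, tasks in TAXONOMY.items():
        if generic_task in tasks:
            return category
    raise ValueError(f"unknown generic task: {generic_task!r}")

# Media planning is a planning task, hence a synthesis problem.
print(broad_category("planning"))
```

Such a mapping makes the first step of the chapter's guidance mechanical: identify the generic task, read off the broad category, and then reason about which elicitation technique suits that category.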

KNOWLEDGE ACQUISITION TECHNIQUES Many different techniques have been developed specifically for knowledge engineers in these different situations, or have been drawn from existing research in related fields. A brief overview of some of the most commonly used varieties is given here. A survey carried out by Cullen and Bryman in 1988 found that the most commonly used knowledge elicitation technique was the "unstructured interview", in which the knowledge engineer asks general questions and simply hopes for the best. However, each technique requires different abilities from the knowledge engineer and the knowledge source, and allows a different set of knowledge representations to be used. The knowledge acquisition techniques described here are certainly not without their problems. Not only do they require an enormous amount of time and labour on the part of


both the knowledge engineer and the domain expert, but they require the knowledge engineer to have an unusually wide variety of interviewing and knowledge representation skills in order to be successful. Recognizing that unstructured interviews are very inefficient, researchers in the area of psychotherapy have been developing structured interviewing techniques for many years. From this work, psychologists developed other interviewing techniques and tools designed to structure the interview process, which have in turn been applied to the knowledge elicitation problem. These techniques can often be applied where the expert is interviewed while actually performing a task, or where the task is simulated or reconstructed through case studies, scenarios, or the expert's own past experience. Elicitation techniques most commonly discussed in the literature include protocol analysis, repertory grids, prototyping, multidimensional scaling, cluster analysis, discourse analysis, card sorting and event recall. Protocol analysis is one of the most frequently mentioned elicitation techniques in the knowledge acquisition literature. Cullen and Bryman (1988) found it to be second only to unstructured interviews in actual usage. Protocol analysis has become popular as an elicitation tool because it forces the expert to focus on a specific task or problem without interruptions from the knowledge engineer. It forces the expert to consciously consider the problem-solving process and so may be a source of new self-understanding. It is also very flexible in that many different types of tasks (simulations, special cases, etc.) may serve as a basis for the protocol. Having a record encourages the knowledge engineer to identify specific topics and also missing steps in the process. On a practical level, protocol analysis requires little equipment or special training for the knowledge engineer.
The main disadvantage of protocol analysis is the very necessity of forcing the expert to express actions in words. It is often the case that experts have concentrated to such an extent on procedures that they are either unable to express their expertise or are completely unaware of it. This phenomenon is more commonly referred to as the paradox of expertise (Hoffman, 1987), and is one of the major motivations for research in the field of knowledge acquisition. Not only may they be unaware of their problem-solving methods, but they may actually verbalize them incorrectly and thus introduce error or bias into the resulting system. Thus, the appropriateness of protocol analysis may depend heavily on the type of task being studied and the personality and ability of the expert to be introspective and verbalize thought processes. Protocols can also be very time-consuming to generate and may result in more data than the knowledge engineer can efficiently handle. Card or concept sorting techniques are also used to help structure an expert’s knowledge. As the names imply, these procedures involve the knowledge engineer in writing the names of previously identified objects, experiences and rules on cards which the expert is asked to sort into groups. The expert describes for the knowledge engineer what each group has in common and the groups can then be organized to form a hierarchy. Some empirical research suggests that card sorting, like multidimensional scaling, may be a more efficient elicitation technique than some of the more traditional techniques such as protocol analysis or interviewing. It has also been suggested that it is a tool which could be easily implemented on a computer as an automated knowledge acquisition tool.
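One common way to analyze card-sort data, sketched here with invented cards and piles (the cards, sorts, and analysis choice are illustrative assumptions, not a method the chapter prescribes), is to count how often each pair of cards lands in the same pile across experts; the most frequently co-occurring pairs suggest the tightest concept groupings from which a hierarchy can be built:

```python
from collections import Counter
from itertools import combinations

# Each expert's sort is a list of piles; each pile is a set of card labels.
# Cards and piles below are invented purely for illustration.
sorts = [
    [{"price", "discount"}, {"ad copy", "media plan"}],
    [{"price", "discount", "media plan"}, {"ad copy"}],
    [{"price", "discount"}, {"ad copy", "media plan"}],
]

# Count how often each pair of cards lands in the same pile.
cooccurrence = Counter()
for piles in sorts:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            cooccurrence[pair] += 1

# The most frequent pairs suggest the tightest concept groupings.
print(cooccurrence.most_common(1))
```

This kind of aggregation is also why card sorting lends itself to automation: the expert only sorts cards, and the structure is recovered mechanically from the counts.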

EMPIRICAL RESEARCH ON KNOWLEDGE ACQUISITION TECHNIQUES Work on the knowledge acquisition problem currently follows three major directions, described as technique-oriented, empirical, and conceptual research. A brief review


of the research carried out in this field is presented in Table III, which shows that both conceptual and empirical research have lagged behind technique-oriented research. Experiments and case studies have focused on comparing and evaluating knowledge acquisition techniques. There have been a few recent efforts to test the usability of different knowledge acquisition tools and techniques empirically: the ability of various knowledge elicitation methods to elicit knowledge about classification problems has been tested, and the relative efficiency of several automated knowledge acquisition tools compared. Previous researchers have recognized the need for sound empirical research to compare the effectiveness and efficiency of knowledge-acquisition tools and methods. It was concluded that more research was needed to answer the following questions:
- Is there one best elicitation technique for knowledge acquisition?
- If not, what is the best combination of techniques?
- Which techniques are most suitable under which circumstances?
- What skills are required in order to utilize each of these techniques?

Table III: Summary of Empirical Studies of Knowledge Acquisition Techniques

Michalski and Chilausky (1980). KA techniques: interviewing, induction. Moderator variables: not considered. Problem domain: diagnosis. Dependent variables: percentage of correct diagnoses generated. Results: inductive learning performed better than interviewing.

Messier and Hansen (1987). KA techniques: interviews, protocol analysis. Moderator variables: human vs. reconstructed knowledge sources. Problem domain: interpretation. Dependent variables: knowledge engineer's opinion of the quality. Results: protocol analysis has limited usefulness for certain types of knowledge.

Holsapple and Raj (1994). KA techniques: interviewing, protocol analysis. Moderator variables: domain complexity. Problem domain: planning. Dependent variables: efficiency and quality of knowledge as measured by the number of nodes and arcs and their accuracy. Results: interviewing is more efficient and accurate for simple cases, but protocol analysis is more efficient for complex cases.

Burton et al. (1990). KA techniques: structured interviews, protocol analysis, card sorting, laddered grids. Moderator variables: expert vs. non-expert; two classification domains. Problem domain: classification. Dependent variables: efficiency of the process. Results: protocol analysis performed poorly in the classification domain; card sorting and laddered grids performed better than interviewing; external validation of experts is important.

Adelman (1989). KA techniques: top-down vs. bottom-up interviewing. Moderator variables: knowledge engineer and domain expert. Problem domain: command and control. Dependent variables: accuracy of elicited rules compared to a "golden mean" set. Results: found no significant variation except that due to the domain expert.

Source: Dhaliwal and Benbasat (1990)


In an experiment to discover the source of the greatest variation in the knowledge acquisition process, Adelman (1989) identified five determinants of knowledge base quality:
1. domain experts;
2. knowledge engineers;
3. knowledge representation schemes;
4. knowledge elicitation methods; and
5. problem domains.

The best-known experimental research on knowledge-acquisition methods is that of Burton et al. (1987). By varying the knowledge acquisition techniques among different groups of experts, each of whom was tested for cognitive style, they made several specific discoveries. Among their findings was that protocol analysis took the most time and elicited less knowledge than the other three techniques they tested (interviewing, card sorting, and goal decomposition). Not surprisingly, they also found that introverts needed longer interview sessions but generated more knowledge than extroverts. Interestingly, the rarely used techniques of goal decomposition and card sorting proved to be as efficient as the more common interviewing technique, and more efficient than the commonly used protocol analysis. One measure of technique efficiency was the time it took to code the transcripts into pseudo-rules, while the number of rules or clauses was taken as a measure of acquired knowledge. These experimental studies are symptomatic of a recognized need for empirical investigation of knowledge-acquisition phenomena. The few pioneering studies are typified by confusing terminology, conflicting operationalizations, and the proliferation of ad hoc taxonomies. In addition, results conflict and no clear pattern has emerged. There are problems in controlling for the effects of moderator variables and in operationalizing the measurement of dependent variables.
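The two measures just described, coding time and rule count, combine naturally into a rules-per-minute efficiency figure. A small sketch with invented session data (none of these numbers come from the studies cited) shows the arithmetic:

```python
# Invented session data: rules elicited and minutes spent coding transcripts.
sessions = {
    "protocol analysis": {"rules": 40, "coding_minutes": 200},
    "card sorting":      {"rules": 38, "coding_minutes": 95},
    "interviewing":      {"rules": 45, "coding_minutes": 120},
}

# Acquired knowledge = rule count; efficiency = rules per minute of coding.
efficiency = {
    technique: round(data["rules"] / data["coding_minutes"], 2)
    for technique, data in sessions.items()
}

best = max(efficiency, key=efficiency.get)
print(best, efficiency[best])
```

Note how a technique can yield fewer rules overall yet still rank as the most efficient once coding time is taken into account, which is precisely the pattern the experimental studies report for card sorting versus protocol analysis.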

CONCLUSIONS From this examination of the different knowledge acquisition techniques used in expert systems development, and of the results of recent empirical studies, we can begin to draw some more specific conclusions. First, although the problem domains studied are generally drawn from the classification or command-and-control types, it appears that protocol analysis does not perform as well as less traditional techniques such as card sorting. For data-driven tasks, inductive techniques seem more likely to perform well than interviewing techniques. Where induction cannot be used, techniques for organizing highly structured interviews, such as card sorting, seem to work better than plain interviewing. In either case, well-structured knowledge-acquisition techniques seem to work best in analytic problem domains, and protocol analysis performs poorly in all of the comparative studies. As we move into the more difficult-to-model synthetic domains, such as design and planning, techniques like protocol analysis may be more appropriate. The difficulty of modeling these less structured domains may be one reason there are relatively few comparative studies of knowledge acquisition in the synthetic and combined synthetic/analytic domains. The two studies in the command and control domain do not offer much

Knowledge Acquisition for Marketing Expert System

387

guidance as to which techniques work best. The fact that Adelman (1989) found no significant effect when he varied the technique may indicate that the choice may not matter as much for problem domains that combine both analytic and synthetic aspects.

IMPLICATIONS FOR DEVELOPING MARKETING EXPERT SYSTEMS

The application of empirical knowledge-acquisition research to the problem of choosing an appropriate technique for developing an expert system application in the field of marketing suggests several directions. First, if the task at hand is an analytic problem domain, such as evaluating a promotional campaign or qualifying potential sales prospects, techniques that provide a high degree of structure to the interviewing process seem to work best. Protocol analysis, though fairly commonly used, is relatively inefficient for analytic problems, while the most popular technique, unstructured interviewing, is one of the least efficient and least satisfying from the standpoint of the expert. So it may be worth exploring some of the non-traditional techniques when working on these types of applications. If a highly robust expert system for market entry or joint venture partner selection were to be developed, then we might suppose that protocol analysis would be more efficient than interviewing. The fact that interviewing is more efficient for simple domains may imply that it is best used for initial knowledge-acquisition sessions, when the complexity of the problem is not yet clearly understood. For those studies that did consider the effect of moderator variables, it seems clear that, no matter what the type of problem domain, developers of expert systems in the field of marketing should consider their potential impact. The cognitive style of the expert, the complexity of the domain, and other attributes of the domain expert all seem to be important factors in the quality of an expert system, regardless of the problem domain. It is clear that some guidance in choosing the appropriate knowledge-acquisition technique can have a significant impact on the quality of the resulting system and the efficiency of its development.

References

Abraham, M.M. and Lodish, L.M. (1987), Promoter: An Automated Promotion Evaluation System, Marketing Science, 6(2), 101-23.
Adelman, L. (1989), Measurement Issues in Knowledge Engineering, IEEE Transactions on Systems, Man and Cybernetics, 19, 483-8.
Bernstein, A. (1989), MCI Wins Marketing Game With 'Expert' IS Strategy, Computerworld, 23(37), 18-19.
Boose, J. (1989), A Survey of Knowledge Acquisition Techniques and Tools, Knowledge Acquisition, 1(1), 338.
Burke, R.R., Rangaswamy, A. and Wind, Y. (1990), A Knowledge-Based System for Advertising Design, Marketing Science, 9(3), 212-29.
Burton, A.M., Schweickert, R., Taylor, N.K., Corlet, E.N., Shadbolt, N.R. and Hedgecock, A.P. (1990), Comparing Knowledge Elicitation Techniques: A Case Study, Artificial Intelligence, 1(4), 245-54.
Campanelli, M. (1994), Sound the Alarm!, Sales and Marketing Management, 20-4.
Cavusgil, S.T., Yeoh, P. and Mitri, M. (1995), Selecting Foreign Distributors: An Expert Systems Approach, Industrial Marketing Management, 24(4), 297-304.
Clancy, W.J. (1986), Heuristic Classification, Artificial Intelligence, 27, 298-350.


Cullen, J. and Bryman, A. (1988), The Knowledge Acquisition Bottleneck: Time For a Reassessment?, Expert Systems, 3, 216-24.
Dhaliwal, J.S. and Benbasat, I. (1990), A Framework for the Comparative Evaluation of Knowledge Acquisition Tools and Techniques, Knowledge Acquisition, 2(2), 145-66.
Duan, Y. and Burrell, P. (1995), A Hybrid System for Strategic Marketing Planning, Marketing Intelligence & Planning, 13(11), 5-12.
Eom, S.B. (1996), A Survey of Operational Expert Systems in Business (1980-1993), Interfaces, 26(5), 50-70.
Forsythe, D. and Buchanon, J. (1989), Knowledge Engineer as Anthropologist, IEEE Transactions on Systems, Man, and Cybernetics, 3.
Gambon, J. (1995), A Database That 'Ad' Up, Information Week, 539, 68-70.
Girod, G., Orgeas, P. and Landry, P. (1989), Times: An Expert System for Media Planning, Innovative Applications of Artificial Intelligence, 239-49.
Grabowski, M. (1988), Knowledge Acquisition Methodologies: Survey and Empirical Assessment, Proceedings of ICIS.
Hoffman, R. (1987), The Problem of Extracting the Knowledge of Experts From the Perspective of Experimental Psychology, AI Magazine, 8(2), 53-67.
Holsapple, C. and Raj, V. (1994), An Exploratory Study of Two KA Methods, Expert Systems, 11(2), 77-87.
Kim, J. and Courtney, J. (1988), A Survey of Knowledge Acquisition Techniques and Their Relevance to Managerial Problem Domains, Decision Support Systems, 4(3).
McGraw, K.L. and Harbison-Briggs, K. (1989), Knowledge Acquisition: Principles and Guidelines, Prentice-Hall: New Delhi.
McNeilly, M. and Gessner, S. (1993), Business Insights: An Expert System for Strategic Analysis, Planning Review, 21(2), 32-3.
Mentzer, J.T. and Gandhi, N. (1993), Expert Systems in Industrial Marketing, Industrial Marketing Management, 22(2), 109-16.
Newell, A. and Simon, H. (1972), Human Problem Solving, Prentice-Hall: New Delhi.
Wagner, W.P., Chung, Q.B. and Najdawi, M.K. (2003), The Impact of Problem Domains and Knowledge Acquisition Techniques: A Content Analysis of P/OM Expert System Case Studies, Expert Systems with Applications, 24(1).
Waterman, D.A. (1986), A Guide to Expert Systems, Addison-Wesley: Reading, MA.

39

Organizational Culture and Climate

Prachi Singh

A fundamental shift is occurring across the business world. We are moving progressively further away from a world in which national businesses were relatively isolated from each other by barriers of distance, time zones and language, and by differences in national culture and business systems. How do organizations cope with this globalization of culture? The learned community maintains that a company is made of people, not machines: concentrate on your people, and they will look after your machines, saving much of your precious time. The climate of an organization is somewhat like the personality of a person. What makes organizational culture enduring is the socialization process of an organization. This process, which familiarizes a 'fresher' with the various characteristics of the culture and leads him to adjust to it, continues throughout one's entire career in the organization.

INTRODUCTION

A fundamental shift is occurring across the business world. We are moving progressively further away from a world in which national businesses were relatively isolated from each other by barriers of distance, time zones and language, and by differences in national culture and business systems. We are moving toward a world in which national cultures are merging into an interdependent global culture system. This shift rapidly raises a multitude of issues for businesses both large and small. It creates opportunities for business, but it also raises a question about the culture of the individual organization: how do organizations cope with this globalization of culture? The learned community maintains that a company is made of people, not machines: concentrate on your people, and they will look after your machines, saving much of your precious time.

Culture consists of patterned ways of thinking, feeling and reacting that are acquired through language and symbols and that create distinctiveness among human groups. The term organizational culture consists of two words, 'organization' and 'culture'. Organization is a broader term referring to the process of organizing, the structure of an organization and the processes that occur within it, whereas culture is a most commonly experienced phenomenon. The climate of an organization is somewhat like the personality of a person. What makes organizational culture enduring is the socialization process of an organization. This process, which familiarizes a 'fresher' with the various characteristics of the culture and leads him to adjust to it, continues throughout one's entire career in the organization.

There is no single definition of organizational culture. The topic has been studied from a variety of perspectives, ranging from disciplines such as anthropology and sociology to the applied disciplines of organizational behavior, management science, and organizational communication. Some of the definitions are listed below:

• A set of common understandings around which action is organized, . . . finding expression in language whose nuances are peculiar to the group (Becker and Geer, 1960).

• A set of understandings or meanings shared by a group of people that are largely tacit among members and are clearly relevant and distinctive to the particular group, and which are also passed on to new members (Louis, 1980).

• A system of knowledge, of standards for perceiving, believing, evaluating and acting . . . that serve to relate human communities to their environmental settings (Allaire and Firsirotu, 1984).

Organizational or corporate culture has been defined as the philosophies, ideologies, values, assumptions, beliefs, expectations, attitudes and norms that knit an organization together and are shared by its employees (Kelly, 2004). People are bound to be influenced by the culture they grow up and live in: a child of a middle-class family, for instance, is brought up with its values, ethics and morals, while a child of the upper class is taught the lessons of materialism from childhood. The same applies to an organization and its members, the only difference being that a society has a social culture while an organization has an organizational culture.

Organizational culture thus begins with a common way of life adopted by members in the form of shared learning, behaviours, values and interests. It provides employees with a clear understanding of the way things are done around them in their organization and thus guides their further actions. It is considered to be a common perception held by the employees regarding their organization. Culture operates at different levels in an organization: the dominant culture, which is shared and accepted by the majority of the organization's members, and the subcultures that exist at the micro level and reflect the common problems and situations faced by members within their own group or department. The culture adopted in an organization may well be based on the national culture or even on local societal and religious norms. In a nutshell, organizational culture is a social glue that helps to hold the organization together.

Knowledge of organizational culture is important because no organization can operate in isolation from its cultural environment; organizations are social systems that must inevitably operate to survive within the framework of a larger cultural system.
People in organizations come from different cultural backgrounds and have different beliefs, customs, understandings and preferences. What makes organizational culture enduring is the socialization process of an organization. This process, which familiarizes a 'fresher' with the various characteristics of the culture and leads him to adjust to it, continues throughout one's entire career in the organization. The socialization process has three stages: pre-arrival, encounter and metamorphosis. Selection of only the 'right type' of person, who fits the eligibility requirements (which are laid down in the


light of the prevailing organization culture), is an attempt to maintain and perpetuate the existing organization culture even before the outsider has joined the organization. If a 'wrong' person (whose individual characteristics do not match the prevailing organization culture) gains entry into the organization, his encounter with the new forces begins. These forces try to change him according to the organization culture. The person may decide either to surrender to these forces and be completely changed, to leave the organization if he finds their impact too strong, or to attempt to change the organization culture himself. This, of course, is not easy.

The various forces which a person has to encounter on entry into the organization, and which subsequently bring about his metamorphosis, are long-standing unwritten rules, rituals, taboos, jargon and the prevailing work culture. Every organization has some long-standing unwritten rules of conduct which its members meticulously follow. Rituals refer to the ceremonies which the organization performs on specific occasions. Taboos refer to the prohibitions imposed on certain forms of speech or acts, e.g., not calling superiors by their first names, not discussing each other's personal lives in public, not coming to the place of work in a drunken state, and so on. Jargon refers to the special language which only the members of the fraternity understand. This is sometimes referred to as 'code language', and may include nicknames for persons, events, processes, etc.

Based on their research, Collins and Porras (1995) have provided the following guidelines for developing a suitable organizational culture. These guidelines are the general mantras that firms must follow:

1. Preserve core ideologies while allowing for change;
2. Stimulate progress through challenging objectives, purposeful evolution, and continuous self-improvement;
3. Encourage experimentation and accept mistakes;
4. Accept paradox while rejecting 'either/or' thinking;
5. Create alignment by translating core values into goals, strategies, and practices; and
6. Grow new managers internally by promotion from within.

THE IMPACT OF CULTURE

Why is culture so important to an organization? Edgar Schein (1988), an MIT Professor of Management and author of Organizational Culture and Leadership: A Dynamic View, suggests that an organization's culture develops to help it cope with its environment. Today, organizational leaders are confronted with many complex issues in their attempts to generate organizational achievement in VUCA environments. A leader's success will depend, to a great extent, upon understanding organizational culture. Schein contends that many of the problems confronting leaders can be traced to their inability to analyze and evaluate organizational cultures. Many leaders, when trying to implement new strategies or a strategic plan leading to a new vision, will discover that their strategies will fail if they are inconsistent with the organization's culture. Man spends a major part of his life in the organizations within which


he works. When people join an organization, they bring with them the unique values and behaviours that they have been taught. In an organization with a firmly established organizational culture, they will be taught the values, beliefs and expected behaviours of that organization. Just as society moulds human behaviour, an organization also moulds human behaviour so that it is in tune with the prevalent set of norms and behaviour. In this process, certain basic attitudes and beliefs about people and their work situations are slowly but firmly accepted in the organization, and these become its 'organizational culture'.

A strong culture, which reflects healthy behaviour, is marked by keenness to work hard and a strong desire and willingness to contribute one's best. Behaviour towards work efficiency is largely controlled by internal ability and willingness to work hard. It is based on sincerity of participation, involvement, devotion to duty, earnest desire to work, and discharge of responsibilities with confidence and competence. Here, culture acts as a blueprint influencing all aspects of life. The culture around a workplace provides a comprehensive framework for understanding the various facets of work behaviour. Human behaviour is the outcome of frequent interaction between several value systems and patterns of interrelation of cultural traits; it is not a self-induced phenomenon.

Employees' attitudes are reasonably good predictors of human behaviour and the organizational culture. They provide clues to an employee's behavioural intentions and inclinations to act in a certain way. The culture of an organization is precipitated through the negative and positive attitudes of organizational members. A strong culture, which is widely held by the organizational members, indicates a favourable attitude, while a weak culture indicates an unfavourable attitude of members towards the beliefs and norms of the organization.
Employees' attitudes are the beliefs and feelings that largely determine how employees will perceive their work environment, commit themselves to intended actions and ultimately behave. A strong indicator of cultural variation in the work environment can therefore be observed through human behaviour, which is the precipitation of the dominant attitude. Attitudes comprise three elements: affect (feelings, emotions); cognitions (knowledge, beliefs, values); and behaviour. An integral and important component of an attitude concerns the values attributed to its contents. Values reflect how positively or negatively a person feels towards a specific object, event or relationship and, consequently, provide valuable insights into the nature of the employee-work relationship. Human attitude towards the prevailing value system is thus a determining factor in organizational growth, organizational development and success. According to Keith Davis (1977), the following values affect modern organizations:

• Security: people seek security of job and personal life.

• Opportunity: people expect many opportunities to climb the ladder in an organization.

• Equality: there should be justice in rewarding performance.

• Freedom: it represents a basic cultural value that affects work in modern organizations.

ORGANIZATIONAL CULTURE AND ETHICS

An organization's culture evolves from the values of its members. However, organizational culture and ethics are more than the sum of their parts. Organizations develop a self-sustaining and durable system of ethics that exerts a powerful influence on the actions, decisions, and behaviors of all employees. Ethics in organizations are influenced more by


the group ethics system (culture) than by the sum of the individual personal ethics systems. These 'group effects' can have a profound effect on the ethical behavior and overall culture of an organization. Ethics reflects the collection of values and behaviours which people feel are moral; a positive work ethic, accordingly, is the collection of values and actions that people feel are appropriate in the workplace. Ethics in the workplace is about the standards of proper conduct to be followed by employees and employers. Ethical values and conduct in the workplace include integrity, loyalty, respect, fairness, caring and citizenship. Managerial ethos is concerned with the character and values of managers as a professional group; by ethos we mean the habitual character and values of individual groups, races, etc. Strong values may provide ample opportunity for creativity, independence, challenge in work, service to others, earning money, and enjoying prestige and status.

WHY SHOULD A MANAGER UNDERSTAND THE ORGANIZATIONAL CULTURE?

Culture is an asset that can also be a liability. It is an asset in that it facilitates better cooperation and communication between management and employees. It is a liability when important shared beliefs and values interfere with the needs of the business, of the company and of the people who work for it. Hence:

• An understanding of organizational culture is important in the field of organizational behaviour because it gives leaders and management proper understanding, insight and feedback about the present cultural pattern, which may either facilitate or constrain organizational development.

• An understanding of organizational culture is important because no organization can operate in isolation from its cultural environment. In other words, organizations are social systems that must inevitably operate within it to survive.

• An understanding of organizational culture is important because it explores the ethos and managerial practices at work, which go a long way in developing positive attitudes, which in turn are likely to exert a positive influence on performance.

• An understanding of organizational culture is significant because it establishes the linkage between culture, leadership and work ethics in building human and social capital, and it is through human beings that organizations sustain high performance.

BEHAVIOR AND ARTIFACTS

We can also characterize culture as consisting of three levels (Schein, 1988). The most visible level is behavior and artifacts. This is the observable level of culture, and consists of behavior patterns and outward manifestations of culture: perquisites provided to executives, dress codes, the level of technology utilized (and where it is utilized), and the physical layout of work spaces. All may be visible indicators of culture, but they are difficult to interpret. Artifacts and behavior may tell us what a group is doing, but not why. At the next level of culture are values. Values underlie and to a large extent determine behavior, but they are not directly observable as behaviors are.


There may be a difference between stated and operating values. People will attribute their behavior to stated values. To really understand culture, we have to get to the deepest level: the level of assumptions and beliefs. Schein contends that underlying assumptions grow out of values until they become taken for granted and drop out of awareness. As the definition above states, people may be unaware of, or unable to articulate, the beliefs and assumptions forming their deepest level of culture. To understand culture, we must understand all three levels, a difficult task. One additional aspect complicates the study of culture: the group or cultural unit which "owns" the culture. An organization may have many different cultures or subcultures, or even no discernible dominant culture at the organizational level. Recognizing the cultural unit is essential to identifying and understanding the culture.

Organizational cultures are created, maintained, or transformed by people. An organization's culture is, in part, also created and maintained by the organization's leadership. Leaders at the executive level are the principal source for the generation and re-infusion of an organization's ideology, the articulation of core values and the specification of norms. Organizational values express preferences for certain behaviors or certain outcomes. Organizational norms express behaviors accepted by others; they are culturally acceptable ways of pursuing goals. Leaders also establish the parameters for formal lines of communication and message content: the formal interaction rules for the organization. Values and norms, once transmitted through the organization, establish the permanence of the organization's culture. Culture is deep-seated and difficult to change, but leaders can influence or manage an organization's culture. It isn't easy, and it cannot be done rapidly, but leaders can have an effect on culture.
Schein (1988) outlines some specific steps leaders can employ. Changing the culture of an organization takes the full commitment of every leader within the organization; you cannot just tell people, "From now on it's going to be done this way." Climate, on the other hand, is a feeling among employees about how they perceive things should be done at the moment. These feelings can normally be changed within perhaps a few hours, days or weeks. Workers get these feelings from both their leaders and their peers, formally and informally; feelings are transmitted to them by how their leaders act and model, and by what they praise and ignore.

GUIDELINES FOR THE LEADER

• Don't oversimplify culture or confuse it with climate, values, or corporate philosophy. Culture underlies and largely determines these other variables. Trying to change values or climate without getting at the underlying culture will be a futile effort.

• Don't label culture as solely a human resources (read "touchy-feely") aspect of an organization, affecting only its human side. The impact of culture goes far beyond the human side of the organization to affect and influence its basic mission and goals.

• Don't assume that the leader can manipulate culture as he or she can control many other aspects of the organization. Culture, because it is largely determined and controlled by the members of the organization, not the leaders, is different. Culture may end up controlling the leader rather than being controlled by him or her.


• Don't assume that there is a "correct" culture, or that a strong culture is better than a weak one. It should be apparent that different cultures may fit different organizations and their environments, and that the desirability of a strong culture depends on how well it supports the organization's strategic goals and objectives.

• Don't assume that all the aspects of an organization's culture are important, or will have a major impact on the functioning of the organization. Some elements of an organization's culture may have little impact on its functioning, and the leader must distinguish which elements are important, and focus on those.

ORGANIZATIONAL CLIMATE

Climate may be thought of as the perception of the characteristics of an organization (Kelly, 2004). It is the summary perception which people have about an organization, the global expression of what the organization is. It is a relatively enduring quality of the internal environment that is experienced by the organization's members, influences their behaviour, and can be described in terms of the values of a particular set of characteristics of the organization. It may be possible to have as many climates as there are people in the organization. The abstract concept of culture and the operational concept of climate both refer, basically, to the perceived personality of an organization, in very much the same sense as individuals have personality. The determinants of organizational climate are:

1. Organizational context: mission, goals, objectives, function, etc.

2. Organization structure: size, degree of centralization and operating procedures.

3. Process: leadership styles, communication, decision making and related processes.

4. Physical environment: employee safety, environmental stresses, physical space characteristics.

5. Organizational values and norms: conformity, loyalty, impersonality and reciprocity.

Improving Organizational Climate

• Change in policies.

• Participative decision making.

• Technological changes.

• Concern for people.

A healthy organizational culture rests on the eight strong pillars of 'OCTAPACE': openness, confrontation, trust, authenticity, proaction, autonomy, collaboration and experimentation. In the current scenario of cut-throat competition, made more intense by the emergence of a liberalized, globalized and privatized economy, domestic industries find it tough to face the competition posed by multinational companies from developed nations, whose superior technology yields better output at lower prices. Many people argue that technology is the field where most Indian companies lag, but I would strongly argue that even if one has superior technology, without the proper manpower to exploit and cultivate it, what is the use of holding such advanced technology? One might acquire the best manpower, but what is the guarantee that they will strive for the organizational goal? A healthy organizational culture provides this guarantee: an open environment filled with mutual trust and confidence, with the added flavour of authenticity, a sense of collaboration, freedom and autonomy attached to responsibilities, proactive measures, loyalty, personal interests surrendered to organizational interests and, above all, treatment of each employee with respect and humanitarian consideration.

References

Allaire, Y. and Firsirotu, M. (1984), Theories of Organizational Culture, Organization Studies, 5(3), 193-226.
Becker, H. and Geer, B. (1960), Participant Observation: The Analysis of Qualitative Field Data, in R. N. Adams and J. J. Preiss (eds), Human Organization Research: Field Relations and Techniques, Homewood, IL: Dorsey Press.
Brunning, Kelly (2004), Quality of Work-Life Issues: The Needs of the Dual-Career Couple Employee; Perceptions of Personnel Practices: A Study of Rural America, A Barometer for Human Resource Managers, Journal of Organizational Culture, Conflict and Communication, 8(1), 91-110.
Collins, James C. and Porras, Jerry I. (1995), Building a Visionary Company, California Management Review, 37(2), 80-102.
Davis, Keith (1977), Human Behavior at Work: Human Relations and Organizational Behavior (4th edition), McGraw-Hill: New Delhi.
Louis, M. R. (1980), Surprise and Sensemaking: What Newcomers Experience in Entering Unfamiliar Organizational Settings, Administrative Science Quarterly, 25, 226-251.
Schein, E. H. (1988), Organizational Culture, Sloan School of Management Working Paper (WP 2088-88), Massachusetts Institute of Technology.

40

Economic Strategies for HRM

P. Paramashivaiah, S. Aravind

Today every organization aims at achieving productivity by enhancing return on investment and achieving economies of scale. It therefore makes business sense to focus only on the organization's core competencies and to outsource non-critical business activities. The entire outlook of HR activities has taken a new road and is influencing organizations in corporate restructuring. HR has now proved that it is part of the core business area. In one way or another, HR is involved in the entire business process. It is linked directly with production and quality, being involved in the whole process, which in turn should reflect total quality. Likewise, it contributes to marketing, where it works as a watchdog to ensure customer satisfaction. In other areas, like R&D and services, HR has pulled up its socks and is out to explore and execute creativity and excellence. The most important area of business where experts have always treated HR as non-core is finance; but in recent years HR has come to be treated as an asset, its performance and contribution to the bottom line have begun to be evaluated, and the HR audit has done the certifying job. Some experts in HR predict that the number of people in HR will reduce while their value goes up considerably, and that organizations will become flat and virtual, an indirect indication that HR positions will become fewer while the significance of HR qualities enters into every functional area of the organization. Some of the basic reasons which hamper the growth of HR outsourcing in India are confidentiality and cost factors. The fear of losing jobs, losing control over confidential data, security breaches and overall confidence in the vendor deter many organizations. But today job security is treated as a myth; young and ambitious professionals look for jobs that give satisfaction and good pay.


Key Drivers of Organizational Excellence

INTRODUCTION

Today every organization aims at productivity: enhancing return on investment and achieving economies of scale. It therefore makes business sense to focus only on the organization's core competencies and outsource non-critical business activities. The entire outlook of HR has taken a new road and is influencing corporate restructuring, and HR has proved that it is part of the core business: in one way or another it is involved in the entire business process. It is linked directly with production and quality, since it is involved in the whole process, which in turn should reflect total quality. It likewise contributes to marketing, where it works as a watchdog to ensure customer satisfaction, while in R&D and services HR has pulled up its socks and is out to explore and execute creativity and excellence. The one area of business where experts have always treated HR as non-core is finance; but in recent years HR has come to be treated as an asset, its performance is evaluated against its contribution to the bottom line, and the HR audit does the certifying job. Some HR experts predict that the number of people in HR will fall while their value rises considerably; organizations are expected to become flat and virtual, an indirect indication that HR positions will shrink even as the significance of HR qualities enters every functional area of the organization.

OVERVIEW OF HR OUTSOURCING

Administering HR services has become an expensive and complicated activity for every company, and to compensate, companies have had to divert resources from other strategic corporate initiatives. The need for employment information and expert service at low cost has given rise to HR outsourcing, which has emerged as a new economic strategy for HR. To maintain their competitive edge, companies enter into a contract with an external expert agency that can assist them in one or all areas of a business process. This external partnering organization works as a vendor in the process, which is popularly known as outsourcing: when a company subcontracts to another supplier work previously done by the parent company's HR function, that is outsourcing. Outsourcing means externalizing production, services or both. Business process outsourcing has a history of more than two decades in fields such as production and information technology, whereas HR outsourcing has geared up only in recent years, and has grown to a great extent. Initially it covered a few discrete services such as recruitment and payroll; experts note that today HR outsourcing, in the literal sense, takes in complex transactions and every aspect of HR services. A major benefit is that the vendor can be held responsible and accountable for the services provided, and companies can track these services through penalty clauses in their contracts; this lets companies share risk by putting the legal responsibility on the vendor.


Further, the changing business scenario, with its many mergers, acquisitions and layoffs, has cleared the road for HR outsourcing. Companies take some time to develop this understanding, and companies branching out into different countries usually avoid an in-house HR function. Significant savings can be demonstrated by comparing the cost of an outsourcing program with the average salary and maintenance of in-house HR staff. In most cases, HR outsourcing brings a 60% increase in productivity, a 30% reduction in costs, and largely eradicates labour union problems. According to India Life Hewitt, the three most important drivers leading to outsourcing are:

1. Financial gains

2. Lesser administrative hassle

3. Focus on core areas
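The cost comparison mentioned above can be sketched in a few lines of Python. Every number, rate and function name below is hypothetical, chosen only to show the shape of the calculation, not drawn from any survey cited in this chapter:

```python
# Toy comparison of in-house HR cost versus outsourced HR service
# delivery. All figures are hypothetical and purely illustrative.

def annual_inhouse_cost(staff: int, avg_salary: int, overhead_per_head: int) -> int:
    """Average salary plus maintenance overhead for each in-house HR employee."""
    return staff * (avg_salary + overhead_per_head)

def annual_outsourced_cost(per_employee_fee: int, headcount: int, setup_fee: int) -> int:
    """Vendor fee charged per employee served, plus a one-time setup fee."""
    return per_employee_fee * headcount + setup_fee

inhouse = annual_inhouse_cost(staff=8, avg_salary=600_000, overhead_per_head=150_000)
outsourced = annual_outsourced_cost(per_employee_fee=3_000, headcount=1_000, setup_fee=500_000)
print(inhouse, outsourced, inhouse - outsourced)  # 6000000 3500000 2500000
```

With these invented inputs the outsourced arrangement comes out cheaper; in practice the comparison turns entirely on the real salary, overhead and vendor-fee figures.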

One more major factor attracting interest in outsourcing is the evolving HR function itself. Departing from its traditional image of a cost-consuming function, in today's highly competitive environment HR has seriously started enjoying the taste of business by contributing to the company's bottom line. The common opinion among Indian companies has been that outsourcing is economical: HR is making a clear transition from backroom to boardroom so as to concentrate more on productivity enhancement. The benefits are linked to direct cost savings and indirect savings, and HR outsourcing makes financial sense provided the company sees it as a long-term process. Financial gains can be direct and indirect, for example a gradual, stable increase in productivity, eradication of statutory obligations, and reduction of opportunity costs [2]. Many areas of HR attract companies to outsource because doing so reduces cost and increases the efficiency of services; HR outsourcing has made an impact on almost every functional area, such as payroll, employee records, recruitment and organizational structure. HR outsourcing otherwise makes no sense if it does not make economic sense: it directly reduces cost and indirectly increases revenue. Analysts predict that the market for HR outsourcing will continue to grow. The market research firm Dataquest estimated that the HR outsourcing industry would grow from $13.9 billion in 1999 to $37.7 billion in 2003, and according to another study the US outsourcing market was estimated to grow to $58.5 billion by 2005 [3]. Gartner Dataquest estimated the global BPO market at $544 billion in 2004, with India expected to cross the $20 billion mark by 2005. A survey by an HR association showed that 72% of Indian companies have outsourced at least one HR activity, and 50% of these activities are operational in nature.
A recent study by the Conference Board says that while two-thirds of Fortune 1000 companies outsource some HR functions, outsourcing has become virtually common among midsize companies with mediocre HR departments, helping them ascend to world-class HR levels. According to a recent survey by Gartner Inc., 89% of midsize companies outsource at least one HR function, and US research confirms that about three-fourths of all US companies


now outsource at least one major HR function. A Hewitt Associates survey shows that cost savings of as much as 30-35 percent can be achieved by outsourcing HR service delivery. The most critical issue in HR outsourcing is negotiating the agreement and entering into a formal contract. The company needs to make sure that the scope written into the contract is the scope it expects: all issues relating to the statement of work, service levels and general legal considerations should be made clear in the contract, or else they cost extra. HR outsourcing contracts usually run for a year, but the company can add a clause allowing it to give 30 days' notice to terminate the contract if it is dissatisfied with the services or no longer needs them.

HR BPO Projection 2004-05: HR outsourcing forecasts by process in Asia Pacific (in $ million)

Payroll services              761.20
Benefits administration       535.65
Education and training        555.99
Recruiting and staffing       347.98
Personnel administration      167.16
Other HR functions            191.97
Total                       2,560.00

Source: Gartner

PROCESS OF HR BUSINESS PROCESS OUTSOURCING

The original figure shows the outsourcing process as a flow: factors influencing outsourcing (1. Economic, 2. Technological, 3. Strategic) lead the corporate to recognize the requirements of outsourcing. The company then identifies the area of outsourcing, chooses between total or partial outsourcing, selects a suitable vendor, and enters a formal contract and agreement. After a transition period, the performance of the vendor is evaluated: satisfactory service continues the engagement, while dissatisfaction with the service leads to termination of the existing contract.


Reasons behind Outsourcing Human Resources

1. Economic factors: The market clearly reflects that rapid global economic transformation has changed business priorities; every organization is focused on improving its business performance through various drives.

2. Headcount restriction: One of the strongest reasons for outsourcing HR is undoubtedly headcount restriction. Companies prefer to keep people in revenue-contributing areas, and for most of them cost reduction is a further attraction, given the globally competitive marketplace.

3. Technological advancement: Technology has contributed in many ways, such as reducing paperwork and documentation; corporates using this advantage have gained extra time to learn and to strategize. Technology is also an alternative to outsourcing for strengthening a company's HR functions, but if a company is not in a position to invest in technology, the best alternative appears to be outsourcing.

4. Strategic business goals: When HR outsourcing started giving better results at lower cost, companies began to realize its importance. Unlike early adopters, who looked only for limited benefits such as headcount and cost reduction, companies now think in a new direction, contributing to the strategic business plan through qualitative services and enabling HR executives to concentrate on strategic functions.

5. Corporate restructuring: The present business scenario focuses on core business deliverables to retain competitive advantage. Corporates have realized that they need to be flat; to become virtual organizations, steps are taken to manage growth without adding infrastructure, employees or cost.

6. Quality of service: Quality of service and the expertise of the vendor are the most important factors determining HR outsourcing. As these dynamics operate in the market, corporates are concentrating on employee satisfaction and customer delight.

However, no single factor can be given full credit for influencing HR outsourcing, since every company sets its own priorities. Hewitt, a leading HR outsourcing service provider, has conducted several surveys in North America, Europe and Asia to explore the reasons behind HR outsourcing.

Hewitt Survey: The most frequently cited reason for outsourcing HR activities was that outsourcing "allows HR to focus on strategic business responsibilities", reported by 23% of employers, followed closely by "both cost effectiveness and maintaining or enhancing services to employees" (17%). For two of the objectives cited as reasons for outsourcing, a very high percentage of employers felt the objective had also been met: 82% of employers looking for cost effectiveness felt the objective had been met, as did 82% of those wanting to capitalize on technological advancement. 92% of employers said their HR outsourcing was effective in meeting strategic business goals, and 30% said outsourcing had been very effective. 93% of employers currently outsource some part of their HR activities, another 4% are considering outsourcing, and only 1% said they considered outsourcing but decided against it.


Drawbacks: There are several drawbacks to not having HR functions in-house.

1. No single outsourcing solution fits all; corporates need to give up control of some key issues if they favour full-time outsourcing.

2. Employees, the most valuable assets of any organization, may want someone in-house whom they can see, interact with every day and trust, and if necessary turn to for settling issues such as grievance negotiation, disputes, counselling, work-related problems and employee retention.

3. An in-house HR person can handle some issues better than a vendor, such as recognition for employees, group offerings, and building incentive programs and vacation policies.

4. The corporate may not want to let a few responsibilities out of its hands, such as the final say over hiring and firing employees and issues related to discipline.

5. There are many questions and doubts about the security and confidentiality of information that can determine existing and potential business.

Further serious tasks are involved in HR outsourcing, such as selecting a vendor that is ready to change, unlearn and learn, issues related to preparing the formal contract, and some common complaints.

CONCLUSION

Some basic reasons hampering the growth of HR outsourcing in India are confidentiality and cost. The fear of losing jobs, loss of control over confidential data, security breaches and overall confidence in the vendor deter many organizations; but today job security is treated as a myth, and young, ambitious professionals look for jobs that give satisfaction and good pay. Analysts hold differing opinions, some of them quite relevant to today's business scenario. Core OD interventions, soft skills, behavioural training, high-value decisions, strategic functions, culture building, employee satisfaction, organization design and business rules will always remain with the organization; experts say the in-house HR department can never wash its hands of these completely. HR professionals are taking over a strategic role and becoming change agents in the organization. They need to play the role of internal consultant in the ongoing design and redesign that lets organizations continually modify themselves to achieve shifting strategies, new capabilities and higher levels of performance. The relationship between employees and HR managers is expected to graduate from a transactional level to an involvement level. Although outsourcing has developed into one of HR's major economic strategies, HR outsourcing cannot be a substitute for the HR department.

References

Debi S. Saini, Soni A. Khan (2000), Human Resource Management: Perspectives for the New Era, New Delhi: Sage Publications Limited.
Purav Misra (2002), Making Smart Move, New Delhi: Human Capital, June 2006.


Websites

BuyerZone.com Inc. (1997-2003), HR Outsourcing (PEO, ASO) Buyer's Guide. (1, 3)
John Halvey (2002), Human Resources Outsourcing, Sourcing Interests Group, www.sourcinginterests.org
Martin Longlois, Partner, Stikeman Elliott (2002), Business Process Outsourcing: Contracting Wisely, www.sourcinginterests.org

IV. IT APPLICATIONS

41

Data Hiding in Identification and Offset IP Fields
B. K. Chaurasia, Kuldeep Singh Jadon

Steganography is defined as the art and science of hiding information: it takes one piece of information and hides it within another, and digital images are the media most commonly used for the purpose. In this chapter we present a way to use unused fields in the IP header of TCP/IP packets in order to send information between two nodes over the Internet.

INTRODUCTION

Steganography literally means "covered writing". In today's computer world it has come to mean hiding secret messages in digital multimedia signals. Steganography works by replacing bits of useless or unused data in regular computer files (such as graphics, sound, text, HTML, or even floppy disks) with bits of different, invisible information. This hidden information can be plain text, cipher text, or even images. Most of the scientific work focuses on hiding information inside images. The techniques aim to make it impossible to detect that there is anything hidden inside the innocent file, while the intended recipient must be able to obtain the hidden data without any problem. The most important feature of a steganographic system is that it allows communication between two authorized parties without an observer being aware that the communication is actually taking place, Kann (1996). TCP/IP is the protocol suite used on the Internet. It was developed by a Department of Defense (DoD) research project to connect a number of different networks, designed by different vendors, into a network of networks (the "Internet"). IP (Internet Protocol) is responsible for moving packets of data from node to node, and TCP (Transmission Control Protocol) is responsible for verifying the correct delivery of data from client to server. The IP protocol defines the basic unit of data transfer through the Internet as a packet: all data is partitioned into IP packets on the sending computer and reassembled on the receiving computer. Each packet begins with a header containing addressing and system control information. The IP packet header consists of 20 bytes of data divided into several fields, each with a special purpose depending on the type of data contained in the packet payload.


Much scientific work has gone into creating software and methods to hide information in digital images. Our approach instead takes advantage of the unused fields of the IP packet header: as mentioned earlier, not all fields of an IP packet are always used, and these fields can hide the information we want to send without raising any suspicion. This chapter is organized as follows. Section two presents an analysis of steganographic methods, followed by an overview of the Internet Protocol in section three. Previous work that takes a similar approach to ours is analyzed in section four. Our proposal is explained in section five, and the implementation and experiments are shown in section six. The last section presents our conclusions and the limitations and advantages of our work.

STEGANOGRAPHY OVERVIEW

Communication confidentiality can be accomplished using cryptography, which involves key administration, algorithm implementation and other management issues. Nevertheless, a listening eavesdropper will realize that a secret communication exists between the two entities. Steganography hides the presence of a message in such a way that an eavesdropper (who listens to all the communications) cannot tell that a secret message is being sent. Because its goal is to hide the presence of a message, steganography has been seen as the complement of cryptography, whose goal is to hide the content of a message. The first scientific study of steganography was presented by Simmons in 1983, who formulated it as the "prisoners' problem": two prisoners need to communicate, but all their messages pass through the warden, who can detect any encrypted message, so they must find some technique for hiding their message in an innocent-looking communication. The generic embedding and decoding process in steganography is as follows. The first step in embedding and hiding information is to pass both the secret message and the cover message into the encoder, where one or several protocols are applied to embed the secret information into the cover message. A key, public or private, is often needed in the embedding process. Having passed through the encoder, a stego object is produced: the original cover object with the secret information embedded inside. It is then sent off via some communications channel, such as email, to the intended recipient for decoding. The recipient must decode the stego object in order to view the secret information; the decoding process is simply the reverse of the encoding process, after which the embedded secret information can be extracted and viewed.
The most commonly used cover messages are digital images. Nelson and Jajodia give an introduction to steganography in digital images; according to them, most techniques use common approaches that include least-significant-bit insertion, masking and filtering, and transformations. The LSB method works by using the least significant bits of each pixel in one image to hide the most significant bits of another. Masking and filtering techniques hide information by marking an image, in a manner similar to paper watermarks. Transformations take advantage of algorithms and coefficients from processing the image or its components to hide information; one example of this technique is the discrete cosine transformation. In [10] the authors use digital imagery as a cover signal to hide information, and in a related work the authors propose to use random bit sequences generated by linear feedback shift registers (LFSRs) within the pixel byte instead of just the LSB. They established that such changes within any


given pixel of the image will result in better hiding of the data and hence secure data transmission. Other cover messages include audio signals or slack space on disks: one proposal uses autocorrelation modulation, with several variations, to hide information within audio signals, and an MP3-resistant oblivious data hiding technique has also been presented. Like many security tools, steganography can be used for a variety of reasons, some good, some not so good. Legitimate purposes include watermarking images for purposes such as copyright protection. Digital watermarks (also known as fingerprinting, significant especially for copyrighted material) are similar to steganography in that they are overlaid on files, appear to be part of the original file, and are thus not easily detectable by the average person. Attacks on steganographic systems exist and are called steganalysis; their goal is to determine whether or not a file carries an encoded payload and, if possible, to recover that payload. An interesting analysis of the limits of steganography has also been presented, in which the authors discuss the obstacles that lie in the way of a general theory of information-hiding systems.
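The LSB insertion approach described above can be sketched in a few lines of Python over a raw byte buffer standing in for pixel data; a real implementation would operate on decoded image samples, but the bit-level mechanics are the same:

```python
# Minimal least-significant-bit (LSB) steganography sketch: hide a short
# message in the low bit of each byte of a cover buffer, then recover it.

def embed_lsb(cover: bytearray, message: bytes) -> bytearray:
    # Expand the message into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit_index in range(8):              # reassemble MSB-first
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256)) * 2               # stand-in for pixel data
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"
```

Each cover byte changes by at most one, which is why LSB changes are visually imperceptible in an image yet trivially recoverable by the intended receiver.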

THE INTERNET PROTOCOL

The Internet uses the Internet Protocol (IP) as the standard way to transmit information, and currently almost the entire network is based on IP version 4. The header of this protocol contains some fields that are redundant or normally unused during transmission, and we can put these unused fields to our purposes; but first we will analyze how the IP header works. For the aim of our investigation we will focus on just the second and third 32-bit words of the header, that is, the Identification, Flags, Fragment Offset, Time to Live, Protocol and Checksum fields.

Figure 1: Fields of an IP header
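As a sketch of the layout in Figure 1, the relevant words of a raw IPv4 header can be unpacked with Python's struct module; the example header values below are arbitrary placeholders, not taken from any real capture:

```python
import struct

# Unpack a 20-byte IPv4 header and pull out the fields discussed above:
# identification, flags, fragment offset, TTL, protocol, and checksum.

def parse_ip_words(header: bytes):
    (ver_ihl, tos, total_len,
     identification, flags_frag,
     ttl, proto, checksum,
     src, dst) = struct.unpack("!BBHHHBBHII", header[:20])
    flags = flags_frag >> 13            # top 3 bits: reserved, DF, MF
    frag_offset = flags_frag & 0x1FFF   # low 13 bits, in 8-byte units
    return identification, flags, frag_offset, ttl, proto, checksum

# A hand-built example header (addresses and ID are placeholders):
hdr = struct.pack("!BBHHHBBHII",
                  0x45, 0, 20, 0xBEEF, 0x4000, 64, 6, 0,
                  0x0A000001, 0x0A000002)
print(parse_ip_words(hdr))  # (48879, 2, 0, 64, 6, 0)
```

Here flags = 2 (binary 010) means DF is set and MF is clear, and a fragment offset of 0 means the datagram is not a fragment, which are exactly the conditions the later sections exploit.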

When transmission over the Internet occurs, the information is wrapped by different protocols at the different layers of the TCP/IP network model. Two of these layers are the link layer and the network layer. Communication at the network layer is standardized by the IP protocol, but at the link layer several different technologies and implementations exist, which implies that each technology has a maximum size of data it can carry per transmission, its Maximum Transfer Unit (MTU).


Figure 2: MTU example

The network layer solves this problem with the fields located in the second and third 32-bit words. During the transmission of an IP packet, if the MTU of the next network is at least as large as the packet there is no problem; if not, the router needs to fragment the IP packet. When fragmentation occurs, the router splits the IP datagram into pieces no larger than the new MTU. The new headers carry the same information, but now the More Fragments bit is turned on and the Fragment Offset indicates the offset of the data. If fragmentation does not occur, both the Flags field and the Fragment Offset are set to zero, Liato (2001). Finally, when the packets arrive at their destination, the receiving device must be able to join them again, and IP must be sure that each piece being joined corresponds to the original packet; to assure this, IP uses the Identification field. Other modifications occur to the IP header every time a packet passes through a router. When a packet reaches a router, the Time to Live field in the header, initialised by the sender (commonly to a value such as 64), is decreased by one; if a packet reaches a router with a value of zero in the Time to Live field, it is dropped, because IP needs to assure that a packet will not travel forever over the network without reaching its destination. Lastly, because of all the modifications that occur to the IP header in transit, the Checksum field is recomputed every time the packet passes through a router.
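The checksum that every router must recompute is the standard RFC 791 header checksum, which can be sketched and verified in Python; the example header below is a hand-built placeholder:

```python
import struct

# RFC 791 IPv4 header checksum: the 16-bit one's-complement of the
# one's-complement sum of all 16-bit header words, computed with the
# checksum field itself set to zero.

def ip_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                         # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Build a header with the checksum field (bytes 10-11) zeroed, fill it
# in, and verify the classic property: a header whose checksum is
# correct sums to zero when re-checksummed.
hdr = bytearray(struct.pack("!BBHHHBBHII",
                            0x45, 0, 20, 0x1C46, 0x4000, 64, 6, 0,
                            0x0A000001, 0x0A000002))
struct.pack_into("!H", hdr, 10, ip_checksum(bytes(hdr)))
assert ip_checksum(bytes(hdr)) == 0
```

This is why a router's TTL decrement is cheap: it subtracts one from a single byte and reruns exactly this sum over 20 bytes.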

PREVIOUS WORK ON STEGANOGRAPHY IN IP

Our proposal is to use some of the fields described in the previous section, and similar approaches have been published. In one, the author's idea resides in manipulating the IP Identification field. The Identification field of the IP packet is assigned by the original sender, essentially as a random number generated while the packet is being constructed, and it is only used when fragmentation occurs. Therefore, if we ensure that no fragmentation will occur because of the size of the packet, it is possible to hide data in this field without any consequence for the transmission, Wang (2004). The advantage of this work is that it can be used to send information from point to point; the limitation is the quantity of information that can be sent. Furthermore, if for any reason the datagram is fragmented, the receiver will hear noise in the transmission, because it will receive the same information more than once, with every new fragment of the datagram.
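A minimal Python sketch of this identification-field channel follows; all field values and addresses are illustrative placeholders rather than values from the cited work, and the checksum is left at zero for brevity:

```python
import struct

# Sketch of the identification-field technique described above: two
# covert bytes ride in the 16-bit Identification field of an otherwise
# ordinary header, with DF set so the field is never needed for
# reassembly.

def header_with_hidden_id(two_bytes: bytes, src: int, dst: int) -> bytes:
    ident = struct.unpack("!H", two_bytes)[0]
    return struct.pack("!BBHHHBBHII",
                       0x45, 0, 20,   # version/IHL, TOS, total length
                       ident,         # covert data rides here
                       0x4000,        # DF=1: forbid fragmentation
                       64, 6, 0,      # TTL, protocol=TCP, checksum (left 0)
                       src, dst)

hdr = header_with_hidden_id(b"hi", 0x0A000001, 0x0A000002)
recovered = hdr[4:6]   # the receiver simply reads the ID field back
print(recovered)       # b'hi'
```

The capacity limitation mentioned above is visible in the code: only 16 bits of covert payload fit per datagram, however large the datagram itself is.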


A second work focuses on manipulating the Do Not Fragment bit, with which it is possible to indicate that we do not want our packet to be fragmented by the routers along the way. Again, if we ensure that our packet will not be fragmented because of its size, we can hide information in the Do Not Fragment bit of the Flags field. In this work the data-size problem is worse than with the Identification field, because here we can transmit only one bit per datagram. Imagining that the datagram carries no data but the header, the ratio of useful information to total data is 1:160; this means that to transmit the phrase "hello world" you would need to transmit 88 datagrams, producing an overhead of almost 2 KB for just 11 bytes.
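The overhead quoted above checks out with simple arithmetic, sketched here in Python:

```python
# Checking the overhead figures for the one-bit-per-datagram
# (Do Not Fragment bit) covert channel described above.

message = "hello world"            # 11 bytes of covert payload
datagrams = len(message) * 8       # one datagram per hidden bit
header_bits = 20 * 8               # minimum IPv4 header, no payload

print(datagrams)                   # 88 datagrams
print(datagrams * 20)              # 1760 bytes on the wire, almost 2 KB
print(f"useful:total = 1:{header_bits}")  # 1:160
```

One hidden bit per 160-bit header is what makes this channel so inefficient compared with the 16-bit Identification field.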

OUR PROPOSAL

Our idea is not really to hide information but to use the unused bits to send messages and information, node to node, related to router performance and best routes, or even to update routes between gateways without generating more traffic. For this purpose we analyze two fields that are not often used in an IP datagram transmission. As mentioned, the Fragment Offset field is always set to zero if fragmentation does not occur, and the Identification field also becomes useless in that case. Unfortunately, we cannot count on this, because we do not know whether the source MTU is the smallest along the path the packet will take; indeed, we do not even know which path the packet will take. In consequence we cannot use the Fragment Offset field or the Identification field to transmit information point to point, but we can node to node.

PACKET FRAGMENTATION

There are two scenarios when an IP packet crosses from one network to another. In the first, it is fragmented because the MTU of the second network is smaller than that of the first, in which case the More Fragments bit of the Flags field is set to 1 and the Fragment Offset comes into use. In the second scenario fragmentation is not necessary: the Fragment Offset remains zero, and the only two modifications the datagram receives are the decrement of the TTL field and the recalculation of the header checksum, Jessica (2008). It is in this second scenario that it becomes possible to substitute some data for the fragment offset without any consequence.

DATAGRAM SELECTION TO CARRY INFORMATION

The problem now is how to identify whether a datagram is carrying fragmentation information in the Offset field or carrying our information. This cannot be decided from the More Fragments bit alone, because it is set to one in every fragment except the last: in the last fragment we have a More Fragments bit of zero and a nonzero Fragment Offset. Furthermore, we cannot track whether a datagram is part of a fragmented one, because each fragment may take a different path; and even if we could, it would require storing the ID of the fragment together with the destination IP address. The solution is to use an unused bit, which can be any reserved bit of the header that is not actually used: either of the two least significant bits of the TOS field, or the


most significant bit of the Flags field. For convenience we use the bit of the Flags field, because it is only necessary to AND a 32-bit word to determine whether the datagram carries our information; another advantage of this approach is that by then we have already extracted the information. We also have to discern when a datagram can be used as a carrier; it can be used only under two circumstances. The first is when the datagram has the chosen reserved bit on, meaning it carries information from the previous gateway. The second occurs when the More Fragments bit is off and the Fragment Offset is zero, which indicates that the datagram has not been fragmented, Gang (2001). After the gateway extracts the information we embedded in the datagram, these fields are replaced, with a random value in the Identification field and zero in the Offset field if there is no information to transmit to the next router, or with the new information otherwise.
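The selection rule above can be sketched in Python; the masks assume the standard IPv4 layout, with the reserved bit as the most significant bit of the 16-bit flags/offset word, and the helper that builds test headers uses placeholder values:

```python
import struct

# Gateway-side test described above: masking the flags/fragment-offset
# word decides whether a datagram already carries covert data (reserved
# bit set) or is an unfragmented datagram we may load (MF=0, offset=0).

RESERVED = 0x8000   # most significant bit of the flags/offset word
MF       = 0x2000   # More Fragments bit
OFFSET   = 0x1FFF   # 13-bit fragment offset mask

def classify(header: bytes) -> str:
    flags_frag = struct.unpack("!H", header[6:8])[0]
    if flags_frag & RESERVED:
        return "carries covert data"
    if not (flags_frag & MF) and (flags_frag & OFFSET) == 0:
        return "usable carrier"
    return "fragment: leave alone"

def with_flags(flags_frag: int) -> bytes:
    """Build a placeholder 20-byte header with the given flags/offset word."""
    return struct.pack("!BBHHHBBHII", 0x45, 0, 20, 1, flags_frag,
                       64, 6, 0, 0x0A000001, 0x0A000002)

print(classify(with_flags(0x8000)))  # carries covert data
print(classify(with_flags(0x0000)))  # usable carrier
print(classify(with_flags(0x2000)))  # fragment: leave alone
```

Note the ambiguity the text identifies: a last fragment has MF=0 but a nonzero offset, which is why the offset itself must also be checked before a datagram is loaded.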

IMPLEMENTATION AND TESTS

The code that implements our proposal uses the Libnet library for the construction of the packets. Our test environment consisted of two Pentium computers running OpenBSD 2.x that worked as the gateways. One of them (R1) ran the program that injects the information into the datagram. The principal role of this program was to read the datagram from the internal interface and check whether it was fragmented; if not, and if there was information to send to R2, it set the reserved bit of the Flags field to one and wrote the information into the ID and Offset fields. After the decrement of the TTL and the recalculation of the checksum, the datagram was sent through the external interface to the next network.

Figure 3: Implementation architecture

The information that R1 sent was written in a text file on R1, and it was sent as long as there was traffic in the network from C1 to C2 and the EOF of the text file had not been reached, Bender (1996). The second gateway (R2) ran the program that takes out the information and re-establishes the packet. This program read the Flags field of the datagram; if the reserved bit was on and the fragmentation bit off, it took the information out of the ID and Fragment Offset fields, reset the Offset field to zero, copied the Checksum into the ID field (because any number will do there), decreased the TTL and recalculated the Checksum. The information R2 received was displayed on its screen. C1 and C2 were two laptops running Windows 2000 that were doing ping and


telnet. Also both of them were running ethereal to check the structure and the data that the datagrams were carrying. Additionally we connected and sniffer between R1 and R2 in order to maintain a tracking of the packets and the information that they were carrying.

CONCLUSIONS
We have presented another technique to hide information over a valid communication channel. The covert messages were carried in the Identification and Offset IP fields of the TCP/IP packets used in a communication between two valid entities. The experiments showed some limitations, but they also revealed some advantages over similar steganographic techniques. As mentioned above, it is not possible to send information point to point because we cannot guarantee that the IP datagram will not be fragmented. Furthermore, we do not know exactly which route a packet will take, so we cannot be sure that our information will arrive at its destination; the datagram may take another path that never passes through it. This is because the datagram does not belong to us. In practice there are two ways to ensure that the packet will cross a given gateway. The first is to route the datagram through a known interface toward the known MAC address of a known gateway on the same network segment. The second is to put a static route in the options field of the datagram, but this implies extra work that overloads the gateway and adds overhead to the network, which is exactly what we are trying to avoid. Another limitation is that, in the presence of an Intrusion Detection System (IDS), and depending on its configuration, the datagram may be flagged as malicious. On the other hand, our work presents several advantages. The first is that we have an effective 12-bit word available in every datagram that is not fragmented, and the only extra work the gateway must do is replace the data carried in the Offset field with zeros. There is no additional cost beyond that, because every time a datagram crosses a gateway the TTL field is decreased and the checksum must be recalculated anyway.
Furthermore, with this methodology two gateways that are back to back can share routes, the load on each route, its quality of service and its throughput, without generating extra load on the network.

References
Ahsan, K. and Kundur, D. (2002), Practical Data Hiding in TCP/IP, Proc. ACM Workshop on Multimedia Security, 2002.
Anderson, R. J. and Petitcolas, F. A. P. (1998), On the Limits of Steganography, IEEE Journal on Communications, 16(4), pp. 474-481.
Bender, W., Gruhl, D., Morimoto, N. and Lu, A. (1996), Techniques for Data Hiding, IBM Systems Journal, 35(3&4), pp. 313-336.
Chandramouli, R. and Subbalakshmi, K. P. (2003), Active Steganalysis of Spread Spectrum Image Steganography, Proceedings of the International Symposium on Circuits and Systems (ISCAS '03), Vol. 3, May 25-28.
Rowland, Craig H. (2003), Covert Channels in the TCP/IP Protocol Suite, First Monday, 2003.
Wang, Huaiqing and Wang, Shuozhong (2004), Cyber Warfare: Steganography vs. Steganalysis, Communications of the ACM, 47(10), October, pp. 76-82.
Jamil, N. T. and Ahmad, A. (2002), An Investigation into the Application of Linear Feedback Shift Registers for Steganography, SoutheastCon 2002, Proceedings IEEE, 5-7 April, pp. 239-244.
Fridrich, Jessica and Goljan, Miroslav (2002), Practical Steganalysis of Digital Images: State of the Art, Proceedings of SPIE, Vol. 4675, Security and Watermarking of Multimedia Contents IV, April 2002, pp. 1-13.
Johnson, N. F. and Jajodia, Sushil (1998), Steganalysis: The Investigation of Hidden Information, IEEE Information Technology Conference, Syracuse, New York, USA, September 1-3, 1998, pp. 113-116.
Johnson, Neil F. and Jajodia, Sushil (1998), Exploring Steganography: Seeing the Unseen, IEEE Computer, 31(2), February 1998, pp. 26-34.
Kahn, D. (1996), The History of Steganography, Proceedings: Information Hiding, First International Workshop, Cambridge, UK, pp. 1-5.
Gang, Litao, Akansu, A. N. and Ramkumar, M. (2001), MP3 Resistant Oblivious Steganography, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), Vol. 3, 7-11 May 2001, pp. 1365-1368.
Marvel, L. M., Boncelet, C. G. Jr. and Retter, C. T. (1999), Spread Spectrum Image Steganography, IEEE Transactions on Image Processing, Vol. 8, August 1999.
Marvel, L. M., Retter, C. T. and Boncelet, C. G. Jr. (1998), Hiding Information in Images, International Conference on Image Processing (ICIP '98), Vol. 2, October 4-7, 1998, Chicago, Illinois.
Johnson, Neil F. and Jajodia, Sushil (1998), Steganalysis of Images Created Using Current Steganography Software, Proceedings of the Second Information Hiding Workshop, Portland, Oregon, USA, April 15-17, Lecture Notes in Computer Science, Vol. 1525, pp. 273-289.
Petrovic, R., Winograd, J. M., Jemili, K. and Metois, E. (1999), Data Hiding Within Audio Signals, 4th International Conference on Telecommunications in Modern Satellite, Cable and Broadcasting Services, Vol. 1, 13-15 October 1999.
RFC 791 (1981), Internet Protocol, DARPA Internet Program, Protocol Specification, September 1981.
RFC 793 (1981), Transmission Control Protocol, DARPA Internet Program, Protocol Specification, September 1981.
Rowland, Craig H. (1997), Covert Channels in the TCP/IP Protocol Suite, DOIS Documents in Information Science, May 1997.
Simmons, G. J. (1983), The Prisoners' Problem and the Subliminal Channel, Advances in Cryptology: Proceedings of CRYPTO 83, D. Chaum, ed., Plenum: New York, pp. 51-67.

42

Computer Security: A Security Model to Secure an Organization Just Adequately Kanak Saxena Binod Kumar

Various surveys indicate that over the past several years computer security has risen in importance and is increasingly becoming a focal point for many organizations, Payne (2006). Almost every day, around the world, computer networks and hosts are broken into. The attacks are sophisticated and vary widely, and they use ever more advanced techniques to break in. Their varying nature makes them much harder to detect, because less is known about the newer types of break-ins, Farmer and Venema (2002). Security has become a pressing business issue at the highest levels of most organizations. Organizations seek a security system that protects their technical information and assets adequately. Existing security systems do not satisfy organizations completely because they do not meet this requirement just adequately: some impose an extra security burden that slows down the organization's performance, while others provide such weak security measures that the organization risks losing its technical information and assets. Hence there is a need for an effective security system that protects corporate information and technology assets just adequately. The aim of this chapter is to enable the developer to build a security system that protects corporate information and technology assets adequately without imposing any extra security burden; it must keep security and performance in balance. The model developed here preserves the security factors of Authentication, Confidentiality, Integrity and Non-repudiation while minimizing the potential for extra security burden.
Keywords: Layered approach, firewall, dial-in, encryption.


INTRODUCTION
A security model is the basis upon which a security system is developed. A good model preserves the confidentiality, integrity and availability of information by minimizing the potential for unauthorized addition, alteration, destruction, denial or disclosure of data, and it reduces the overhead burden of security, Binod Kumar and Kanak Saxena (2007). This chapter covers the basic aspects of security metrics. It enables the developer to develop a security system that can secure an enterprise adequately with the best balance between system security and performance. The chapter has four parts: the first is this introduction; the second describes the design of the security model; the third describes the working of the security system; and the fourth concludes and discusses future aspects of security.

MODEL DESIGN (SECURITY-LEVEL BY LEVEL)
Like AT&T and Hewlett-Packard, we used the Multi Level Security (MLS) mechanism to design this security system. The system controls what data various users can see and what data they can modify. We applied the following security measures in this model:
• Accept UDP packets only for the needed UDP services and deny all other UDP packets.
• Accept SYN packets only for the needed TCP services and deny all other TCP packets.
• Open only the ports that are required and disable all other ports.
• Run only the services that are needed and stop all other services.
• Allow only the protocols that are of concern and bring down all others.
• Allow only the clients that need to communicate and reject all others.
• Remove all anonymous users/groups and then create the required users/groups.
• Delete the default policies for users, groups and data and define the required policies for them.
• Grant access only to the required users, with only the required rights.
• Install antivirus software and keep it updated.
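Taken together, these measures amount to a default-deny policy: everything not explicitly needed is refused. As a rough sketch of that decision rule (the port sets and function below are hypothetical, not the authors' configuration, and an established-connection check is deliberately omitted):

```python
# Default-deny packet filter sketch. The allowed sets are illustrative only.
ALLOWED_TCP = {22, 80, 443}   # needed TCP services (new connections via SYN)
ALLOWED_UDP = {53}            # needed UDP services

def permit(proto: str, dst_port: int, syn: bool = False) -> bool:
    """Accept only traffic for explicitly needed services; deny everything
    else, including unknown protocols."""
    if proto == 'udp':
        return dst_port in ALLOWED_UDP
    if proto == 'tcp' and syn:
        return dst_port in ALLOWED_TCP
    return False
```

In a real deployment the same logic would be expressed as firewall rules rather than application code, but the ordering principle is the same: enumerate what is needed, then deny the rest.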

It is said that "Defense is in Depth", and we have tried to follow this rule while designing and implementing this security model. The model consists of five layers of security, each described briefly in this chapter (see Figure 1 below). The five key layers of the security model are:
Layer-1: Perimeter Defense
Layer-2: Host Protection
Layer-3: Moving Data Protection
Layer-4: Virus Protection
Layer-5: Database Data Protection


Figure 1: Security Model

MODEL WORKING
This model facilitates secure connectivity and access by anyone, anywhere, anyhow, at any time, to anything, with the best balance between system security and performance, Vedaraman (2004). The model handles the IT security issues of network operations with remote hosts. As an example (see Figure 1), it uses the Linux platform, Linux Dial-In, a gateway, firewall and Apache services, a MySQL database and a Cisco router, and presents a sample of the various steps. The example can easily be modified to accommodate other network devices and standards as required.

Layer-1: Perimeter Defense: This layer provides the defensive perimeter by not allowing anyone to log in locally or remotely without a valid user id and password. Its file- and folder-level access permissions and roles restrict a user to the allowed operations (read, write or execute) and to the information he is allowed to access. The layer not only protects system data but also protects the server from sophisticated attacks and provides methods to reliably detect attacks on the server. It uses SSL and VPN mechanisms to encrypt and decrypt data in transit. It also records the activities of running services, logged-in users, open ports and enabled protocols, and it provides system commands and tools with which the user can minimize host load by running only the required applications.

Layer-2: Host Protection: This layer consists of the firewall and gateway. A firewall can be a hardware appliance, but we preferred to develop our own, because an application firewall gateway is much stronger and can be configured from various angles and for various roles. It protects the host by not allowing people on the Internet to scan open ports, protocols and running services; it disallows outsiders even to ping the system. Apart from this, it also controls inbound and outbound access to services and data, and Network Address Translation for a specific subnet.


This firewall can be configured to open/close ports, bring protocols up/down and permit/deny inbound/outbound access.

Layer-3: Moving Data Protection: The Internet is prone to the tracing of moving data and the discovery of its source and destination; the less we use the Internet to move data, the safer we are. Although layer-1 already uses SSL and VPN mechanisms to encrypt and decrypt data in transit, this layer runs Dial-In services to avoid using the Internet as the data carrier: it allows remote users to access data without the Internet in the path.

Layer-4: Virus Protection: Even though Linux is less prone to viruses, we have used an antivirus. This layer runs the AVG Free antivirus software, updated daily, so that it can counter computer viruses.
Layer-5: Database Data Protection: This layer runs the MySQL server. Its different locking mechanisms allow only valid database users to access only the allowed database data and to perform only the allowed operations on those data. Before being validated by the MySQL database, these users are also validated by the Linux operating system (OS).

CONCLUSION
Various surveys indicate that over the past several years computer security has risen in importance and is increasingly becoming a focal point for many organizations, Payne (2006). Detecting attacks of varying nature and advanced break-ins is much harder, and security system developers find it difficult to build systems that keep up with the capabilities organizations need. To assist with this, the developer can design and implement an effective system that addresses the secure management, exchange and transmission of information.

References
Binod Kumar and Kanak Saxena (2007), Computer Security: A Model for Handling E-Security Issues of the Enterprise, Thirteenth Annual Conference and First International Conference on Mathematical Modeling in Engineering and Biosciences, Agra, India, 10-13 January 2007.
Dan Farmer and Wietse Venema (2002), Improving the Security of Your Site by Breaking Into It, Sun Microsystems, 2002.
Shirley C. Payne (2006), A Guide to Security Metrics, SANS Security Essentials GSEC Practical Assignment, Version 1.2e, June 19, 2006.
Radhika Vedaraman (2004), Host Assessment and Risk Rating, SANS Institute, Version 1.4b, Option 1.

43

Mobile Agent Technology in Intrusion Detection B. K. Chaurasia Robin Singh Bhadoria

The increasing number of network security related incidents makes it necessary for organizations to actively protect their sensitive data with the installation of intrusion detection systems (IDS). Autonomous software agents, especially when equipped with mobility, promise an interesting design approach for such applications. We evaluate the implications of applying mobile agent technology to the field of intrusion detection and present a taxonomy to classify different architectures. Sparta, an actual implementation of a mobile agent based system developed by our group, is described as well.
Keywords: Intrusion Detection, Network Security, Mobile Agents

INTRODUCTION
After the concept of intrusion detection (ID) was first established by Anderson (1980) and later refined by Denning (1986), two major variants of intrusion detection systems (IDS) emerged, namely host based and network based approaches. Host based systems collect local data from sources internal to a computer, usually at the OS level. This has the advantage of collecting high quality data directly at the source (e.g. the kernel). Unfortunately, some attacks cannot be detected at a single location. Distributed intrusions may leave innocent-looking marks at each single host and can only be identified when data from a number of different machines is combined. In addition, worms or telnet chains (i.e. successive logins used by an attacker to hide his tracks) are easier to spot when data from several sources is considered. Network based variants monitor packets on the wire by setting the network interface to promiscuous mode and analyzing network traffic. They therefore have some ability to correlate activities that occur at different hosts, but suffer from scalability problems under high network load and have problems when encrypted communication is used. A newer approach is the development of distributed architectures, where sensors (host and network based) collect data, preprocess it and send it to a centralized analyzing station which is able to relate this input. Such client-server architectures suffer from the following deficits.


1. A central analyzer is a single point of failure. When an intruder manages to put it out of action (e.g. by a denial-of-service attack), the whole network loses its protection.
2. When all information is processed at a single location, the system is not scalable.
3. The processing capacity of the analyzer unit limits the monitored network size, and distributed data collection can lead to excessive data traffic over the network.
4. It is difficult to apply reconfigurations to the sensor stations. Usually, the whole system has to be restarted after a modification.

The classic solution to combat these shortcomings is the introduction of several hierarchical layers and redundant components; state-of-the-art ID systems like Emerald or AAfID use that approach. A peer-to-peer intrusion detection system without a central processing station has also been proposed: all hosts run a local ID system and a security manager that can process input from the local host as well as from other security managers, and these security managers cooperate by message passing to detect distributed attacks. A different and interesting approach is taken by systems which utilize mobile agents to perform distributed intrusion detection. The aim of this chapter is to discuss the advantages and possible drawbacks of applying mobile agents to intrusion detection systems. In addition, we introduce a taxonomy which can be used to classify such ID systems and briefly present our implementation of an ID system called Sparta, which relies heavily on mobile agents.

MOBILE AGENTS
The development of distributed ID systems and the introduction of software agents to perform intrusion detection lead to the idea of using mobile agents. Mobile agents offer several potential advantages when used in ID systems (see also Jansen et al., 2000) that may overcome the limitations of IDS that employ only static, centralized components (as discussed above).

Reducing Network Load
Instead of sending huge amounts of data (e.g. audit files) to the data processing unit, it might be simpler to move the processing algorithm (i.e. the agent) to the data.

Overcoming Network Latency
When agents operate directly on the host where an action has to be initiated, they can respond faster than a hierarchical IDS that has to communicate with a central coordinator located elsewhere on the network.

Autonomous Execution
When portions of the IDS get destroyed or separated, it is important for the other components to remain functional. Independent mobile agents can still act and do useful work when their creating platform is unreachable, which increases the fault-tolerance of the overall system.


Platform Independence
The agent platform allows agents to travel in a heterogeneous environment and inserts an OS independent layer between the hosts and the agent based IDS. This allows different IDS to share data (and build a common knowledge base), as agents from one organization may visit other organizations and collect data there (if allowed).

Dynamic Adaptation
The mobility of the agents can be used to reconfigure the system at run-time by having special agents move to a location where an attack is currently taking place in order to collect additional data.

Static Adaptation (Upgradability)
It is important for an IDS (especially a misuse based one) that its attack signature database and detection algorithms are up-to-date. Instead of upgrading and restarting all sensors when new signatures are available, it is simpler to write updated agents and send them on duty while the IDS keeps running.

Scalability
When a central processing unit is replaced by distributed mobile agents, the computational load is divided between different machines and the network load is reduced. This enhances scalability and additionally supports fault-resistant behavior. Unfortunately, the introduction of agents and agent platforms may also cause the following problems.

Security
Introducing agents into an IDS has several security implications that must be considered. On the one hand, the host (and the agent platform) where an agent gets executed must be protected against malicious code. This can be done by signing the agent's code and providing a valid certificate that can be checked by the platform. On the other hand, agents can be modified or subjected to an eavesdropping attack while they move over the network; this might be prevented by encrypting agents in transit. Additionally, mobile agents can be attacked by a malicious agent platform itself. This threat is extremely difficult to counter when agents need unrestricted movement around the network.
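A minimal sketch of the code-verification step just described, using an HMAC as a stand-in for a full certificate scheme (a real platform would verify a public-key signature against a certificate chain; the function names and key handling here are illustrative only):

```python
# Sketch: a platform refuses to execute agent code whose tag fails verification.
import hashlib
import hmac

def sign_agent(code: bytes, key: bytes) -> bytes:
    """Producer attaches an authentication tag over the agent's code
    before dispatching it onto the network."""
    return hmac.new(key, code, hashlib.sha256).digest()

def verify_agent(code: bytes, tag: bytes, key: bytes) -> bool:
    """Receiving platform recomputes the tag and compares in constant time;
    tampered code (or a wrong key) is rejected."""
    expected = hmac.new(key, code, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The same check also guards against in-transit modification of the code, though it does not by itself prevent eavesdropping; that requires encrypting the agent as the text notes.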

Code Size
An IDS is a complex piece of software, and agents that implement its functionality might get rather large. Transferring an agent's code over the network may take some time, but it is needed only once when each host stores agent code locally. Jansen et al. (1999) claim that agents get especially large when they encode operating system dependent parts, but one might consider putting these routines into the agent platform and offering a generic interface to agents (effectively overcoming this drawback).

Performance
Agents are often written in scripting or interpreted languages so that they can easily be ported between different platforms. This mode of execution is very slow compared to native code. As an IDS has to process a large amount of data under very demanding timing constraints (near real-time), the use of MAs could degrade its performance.
An important aspect of an IDS is its capability to detect multi-point attacks (Lugmayer, 2000). It is a question under research how agents can efficiently collect and analyze only parts of the data. Instead of moving huge amounts of data to a central point, the idea is to utilize mobile agents: they can work in a collective fashion without simply carrying all the data with them (which would in fact increase network load). Our proposed solution to this problem (which has been implemented in Sparta) is described in Section 4. Additionally, agent systems could be used to create more attack-resistant architectures. When a couple of aggregating processing units are replaced by a fully distributed agent system, the number of weak points is reduced; agents act like a colony of insects working towards a common goal. Nevertheless, it remains an open question how a global system view can be established by independent agents without a central coordinator. Agents can also be seen as guards which protect a network by moving from host to host and performing random sampling. Instead of monitoring each host at all times, agents visit machines only from time to time to conduct their examinations; when an anomaly is detected, a more comprehensive search is initiated. Although the metaphor of patrolling guards seems appealing at first, this approach has the disadvantage of leaving hosts vulnerable while no agents are present. On the other hand, random sampling definitely reduces the average computational load at each machine.

CLASSIFICATIONS
A number of taxonomies exist for intrusion detection systems as well as for agent systems. Nevertheless, the application of agents in the context of intrusion detection raises issues which are not sufficiently dealt with in either classification. Therefore, we have created a new taxonomy model for ID systems that utilize agents. The following points are of interest for our model.

Agent Tasks
Agent based intrusion detection systems utilize agents for different tasks. One can imagine a number of different designs, where agents fulfill part or all of the functionality of the ID system. The basic functional parts of an IDS are data gathering, data processing, data storage and response components. Data gathering is needed to collect information from various sensors to get a picture of the system state. Data processing is necessary to extract potential attacks from the often enormous amounts of raw data delivered by the data gathering unit, and the response component is used to react to an intrusion (e.g. notify an administrator, reconfigure the firewall). The data storage serves as persistent storage of collected data for later analysis and correlation. All of these parts can be implemented by agents; here, most of the time, at least data gathering and processing are realized directly by them. It is therefore interesting to know for each system whether it realizes its data gathering, data processing, data storage or response components as agents.
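The four functional parts can be illustrated by a toy pipeline; every name, record field and threshold below is hypothetical, and each stage is exactly the kind of unit that could be realized as an agent:

```python
# Toy IDS pipeline: gathering -> processing -> storage -> response.
def gather(host):
    """Data gathering: collect raw sensor records from one host
    (hypothetical record shape)."""
    return [{'host': host, 'failed_logins': 7}]

def process(records, threshold=5):
    """Data processing: reduce raw records to suspected intrusions."""
    return [r for r in records if r['failed_logins'] > threshold]

storage = []   # data storage: keeps alerts for later analysis and correlation

def respond(alerts):
    """Response: store each alert and emit a notification for the admin."""
    storage.extend(alerts)
    return [f"notify admin: brute force on {a['host']}" for a in alerts]
```

In an agent based design, `gather` and `process` would travel to the monitored host, while `storage` and `respond` could remain local or be agents themselves.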

Attack Description
An interesting characteristic of agent based IDS is the way in which users may specify the intrusion scenarios that should be discovered by agents. Three possible ways can be identified. A common way is to describe attacks implicitly by providing code that directly operates on data structures delivered by the data gathering components. The code itself determines whether an intrusion has occurred by processing the input and calling appropriate response functions. Another possible way, which separates ID systems into components, is the specification of scenarios in an application-specific scripting language. Usually, one is supported by predefined data types (e.g. IP packets) or a rudimentary way of expressing timing constraints. The last approach is a special language which allows the security officer to define attack patterns consisting of a set of events that can be spatially and temporally related. The description of the attack is translated into rules and code which can be processed directly by agents. This has the advantage of an intuitive description of the attack scenario. These description methods are listed in increasing order of expressiveness. For each system, the kind of language used is an important design decision: a simple language might be easier to implement, but it can fail when new, distributed scenarios should be described, while choosing an extensive language makes the system design more complicated.

Data Relation
A challenging issue when building an IDS is its mechanism for relating information from different sources. A common way is to rely on a client-server architecture, where client nodes forward their sensor information to a central analyzer component. This approach suffers from scalability problems when the number of clients grows and introduces a single point of failure into the system. Standard approaches like replication or additional hierarchical layers mitigate the negative impacts but do not attack the core problem itself. An alternative is a fully distributed design, where all nodes are equal and form temporary groups to detect distributed attacks. This approach offers improved fault-tolerance and scalability, but suffers from difficulties in coordinating the cooperating hosts. Systems which use central entities have to address totally different issues than distributed ones. A design with a central processing unit has to deal with the weaknesses of a single point of failure and faces potential performance and scalability problems. Distributed systems, on the other hand, have to solve the problems of ordering events from different sources (time synchronization) and of coordinating and locating agent activities. Another issue is the cooperation and communication between the agents themselves. When agents can work together and in parallel to solve a common task, a system scales better than one where a single agent has to visit every interesting host sequentially.

Persistency
Two different types of agents are possible. One type (called transient) is launched by a central station or by another agent to perform a single, specific task. After the results have been delivered, the agent itself vanishes. Such agents are mainly used for data gathering tasks. The other type (called persistent) comprises agents that remain active for a longer period of time. Such agents have the ability to accumulate knowledge over time and to react differently to identical stimuli. They roam the network on their own initiative and usually follow a broader range of activities; they can be considered permanent guards that hop from host to host to perform checks.


It is important to distinguish between these agent types because persistent agents have the ability to modify their reactions throughout their lifetime. Systems utilizing persistent agents can adapt to certain threat scenarios, while transient agents usually cannot.

INTRUSION DETECTION SYSTEMS
Only a few research projects have so far attempted to incorporate ideas from mobile agent technology into intrusion detection systems. One, called Michael, aims to realize the entire system functionality with mobile agents; although the architectural description is interesting, no implementation has been provided so far. Recently, another system based on mobile agents has been introduced; unfortunately, only a design overview is presented, while the actual detection and correlation mechanism is dealt with superficially. IDA is a classic host based system which uses agents to track attackers and perform active response, i.e. counterattacks (Asaka et al., 1999). A few other systems which claim to use agents in some way do not really fit our area, as those agents are static. The best known one is AAfID, an architecture where distributed sensors are called autonomous agents. These agents cooperate in a client-server fashion by sending data to central stations where it is further processed. Although mobility is not an issue, AAfID is frequently referenced in the descriptions of the systems mentioned above (Balasubramaniyan et al., 1998). Several other agent based systems without mobility also exist. The following subsection briefly describes Sparta, the system currently developed by our group. It should act as a proof of concept demonstrating that the potential advantages of mobile agents for ID systems can actually be realized.

SPARTA Sparta (which is an acronym for Security Policy Adaptation Reinforced Through Agents) is the name of a project sponsored by the European Union. It is a system whose primary aim is to detect security violations in a heterogeneous, networked environment. Nevertheless, the architecture which has been designed for Sparta targets a broader range of applications, ranging from network management to intrusion detection. Sparta is an architectural framework which helps to identify and relate interesting events that may occur at different hosts of a network. In addition to the detection of interesting patterns, Sparta can also be utilized to collect statistical data (i.e. extreme value or sum of attribute values) of certain events. The system monitors local events at a number of hosts connected by a network. In order to deal with complex patterns, it is not sufficient to select events based on content alone. It is necessary to consider multiple events at the same time and deduce knowledge that is beyond the scope of an individual event (a process called correlation). Each host has at least a local event generator, a storage component and the mobile agent platform installed. The local event generation is done by sensors which monitor interesting occurrences on the network or at the host itself. The exact types of events and their attributes are determined by the application’s needs. In the current setup, we use Snort to extract interesting events from network traffic. The events are stored in local databases for later retrieval by agents. The mobile agent subsystem is responsible for providing a communication

Mobile Agent Technology in Intrusion Detection


system to move the state and the code of agents between different hosts, and for providing an execution environment for them. Additionally, the system has to provide protection against the security risks involved in utilizing mobile code. Most of the components are written in Java, and the agent platform itself rests on Gypsy, a Java-based system which has been developed at the Technical University of Vienna (Krügel and Toth, 2000). The goal of Sparta is the design of a mobile-agent-based IDS that identifies and improves upon potential shortcomings of other intrusion detection system designs. The following three issues are addressed (Krügel and Toth, 2001). To support our detection algorithm, and to address the problem of systems which only offer an implicit way of specifying attack scenarios, we have designed an attack pattern language (called EQL). This language allows us to express offending event correlations in a declarative manner, where one can specify what to detect instead of how to detect it. The primary language design objective is the reduction of the amount of data that needs to be transferred, while still retaining enough expressiveness to be usable for most situations. When a system uses mobile code (i.e., mobile agents), it should aim at performing flexible computation remotely, at the location where the interesting data is stored. This resulted in the limitation of restricting patterns to tree-like structures when events spanning several nodes are involved; for events that occur at a single node, virtually arbitrary correlations are allowed. In addition to the specification of patterns, EQL also allows us to define simple statistical queries. These can deliver the number of authentication failures of a certain user, or identify the maximum number of processes running at a single machine of the monitored network. We realized a correlation mechanism which does not rely on (one or more) central server locations where data is gathered and related. Instead, it follows a fully distributed approach.
Mobile agents locally select interesting information and only move parts of the data across the network. When detecting patterns, agents first try to find actual events which are specified by (and match) the root node of a given tree pattern. When the root node is located, the agents follow the branches of the tree to detect events that match the root's predecessors. This process is applied recursively until the whole tree has been matched. With this algorithm, only very little data has to be carried by agents during each hop (see the references for a complete description of the detection process). The detection algorithm is performed by multiple agents in parallel, which improves the scalability, fault tolerance and performance of the system when compared to a client-server variant. While many agent systems use some form of encryption and authentication when agents are sent over the network, most of them lack a public key infrastructure (PKI). We address this issue in Sparta by providing such a PKI to manage our cryptosystem. Sparta utilizes an asymmetric (public/private key pair) cryptosystem to exchange the private keys which are needed to secure agents when they are transferred over the network. The agent code is signed and can be authenticated before it is executed (to protect the platform). The signature is also used to determine the set of permissions an agent is granted when executing on a platform. In our opinion, the contribution of Sparta is the description of a system architecture to collect and correlate distributed data in an efficient way by using mobile agents. It can be used for intrusion detection, but other network applications are possible as well.
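The recursive matching procedure just described can be sketched in a few lines. This is an illustrative reconstruction, not Sparta's implementation: the pattern representation, the per-host event stores and the predicate functions are hypothetical simplifications of what EQL patterns and agent hops would actually provide.

```python
# Sketch of the tree-pattern matching described above. A pattern is a
# tree; each node names a host, a predicate over that host's local
# events, and child patterns that must also match. A real agent would
# hop from host to host; here the event stores are plain dicts.

def match_pattern(node, event_stores):
    """Return True if the pattern tree rooted at `node` matches."""
    local_events = event_stores.get(node["host"], [])
    # First locate an event matching this node of the tree pattern ...
    if not any(node["predicate"](event) for event in local_events):
        return False
    # ... then follow each branch recursively, as the agents do per hop.
    return all(match_pattern(child, event_stores)
               for child in node.get("children", []))

stores = {
    "hostA": [{"type": "login_failure", "user": "root"}],
    "hostB": [{"type": "port_scan", "src": "hostA"}],
}
pattern = {
    "host": "hostB",
    "predicate": lambda e: e["type"] == "port_scan",
    "children": [{
        "host": "hostA",
        "predicate": lambda e: e["type"] == "login_failure",
        "children": [],
    }],
}
print(match_pattern(pattern, stores))  # prints True for this scenario
```

Because each recursive call only needs the subtree below it, an agent carrying the pattern can discard matched branches as it hops, keeping the data moved per hop small, which is the property the text emphasizes.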


Key Drives of Organizational Excellence

CONCLUSION

Although the possible advantages of mobile agents seem impressive at first, only a few systems use them to perform security-related tasks. This stems from the fact that the benefits are not introduced automatically, and often the disadvantages outweigh the intended improvements. IDA employs mobile agents mainly for tracing purposes, while Micael and Sparta have more ambitious aims: in these systems, mobile agents actually carry out the event correlation. Our goal in Sparta is to demonstrate that it is possible to beneficially apply agents to intrusion detection systems. We have recently finished a first prototype system to support our claims. Scalability, performance and fault tolerance can be improved when mobile agents perform distributed detection and do not need a central location where data is gathered. Nevertheless, the designer has to be careful that the amount of transferred data is not increased.

References

Anderson, J. P. (1980), Computer Security Threat Monitoring and Surveillance, Technical Report, James P. Anderson Co., Box 42, Fort Washington, February 1980.
Asaka, M., Taguchi, A., and Goto, S. (1999), The Implementation of IDA: An Intrusion Detection Agent System, in Proceedings of the 11th FIRST Conference, June 1999.
Balasubramaniyan, J. S., Garcia-Fernandez, J. O., Isacoff, D., Spafford, E., and Zamboni, D. (1998), An Architecture for Intrusion Detection using Autonomous Agents, in 14th IEEE Computer Security Applications Conference, December 1998.
Bernardes, M. C. and dos Santos Moreira, E. (2000), Implementation of an Intrusion Detection System Based on Mobile Agents, in International Symposium on Software Engineering for Parallel and Distributed Systems, pp. 158-164.
de Queiroz, D., da Costa Carmo, L. F. R., and Pirmez, L. (1999), Micael: An Autonomous Mobile Agent System to Protect New Generation Networked Applications, in 2nd Annual Workshop on Recent Advances in Intrusion Detection, Rio de Janeiro, Brazil, September 1999.
Denning, D. (1986), An Intrusion-Detection Model, in IEEE Symposium on Security and Privacy, pp. 118-131, Oakland, USA.
Jansen, W. and Karygiannis, T. (1999), Mobile Agents and Security, Special Publication 800-19, NIST, September 1999.
Jansen, W., Mell, P., Karygiannis, T., and Marks, D. (1999), Applying Mobile Agents to Intrusion Detection and Response, Interim Report (IR) 6416, NIST, October 1999.
Jansen, W., Mell, P., Karygiannis, T., and Marks, D. (2000), Mobile Agents in Intrusion Detection and Response, in 12th Annual Canadian Information Technology Security Symposium, Ottawa, Canada, June 2000.
Krügel, C. and Toth, T. (2000), A Survey on Intrusion Detection Systems, Technical Report TUV-1841-00-11, Technical University of Vienna.
Krügel, C. and Toth, T. (2001), Sparta - A Security Policy Reinforcement Tool for Large Networks, submitted to I-NetSec 01.
Lugmayer, W. (2000), Gypsy: A Component-Based Mobile Agent System, in 8th Euromicro Workshop on Parallel and Distributed Processing (PDP 2000), Rhodos, Greece, January 2000.
Porras, P. A. and Neumann, P. G. (1997), Emerald: Event Monitoring Enabling Responses to Anomalous Live Disturbances, in Proceedings of the 20th National Information Systems Security Conference, October 1997.
Roesch, M. (1999), Snort - Lightweight Intrusion Detection for Networks, in USENIX LISA 99.
White, G. B., Fisch, E. A., and Pooch, U. W. (1996), Cooperating Security Managers: A Peer-Based Intrusion Detection System, IEEE Network, pp. 20-23, January/February 1996.

44

Implementation of Personalization Service Based on Mobile Web Service Platform Neeharika Sengar Anil Singh Jainendra Jain

In this chapter, a candidate personalization service architecture for future mobile services has been designed. A framework has also been designed to identify the software modules required for a mobile personalization service, which deals with context-aware processing and user-preference information. The chapter also describes the implementation and testing of a prototype system, and explores better solutions for deploying personalization on mobile terminals.

INTRODUCTION

Figure 1: Proposed Framework for mobile internet personalization service


Many kinds of wireless internet services are available to mobile users. The system consists of a mobile terminal, gateway, web server and network. From current wireless internet services, new kinds of mobile web services are emerging, with intelligent semantic web servers as well as web services. Complying with these kinds of services demands a new service architecture. Semantic and web service technologies make it possible to provide personalized and customized services to mobile users. Fig. 1 shows the architecture framework for next-generation mobile personalization services. This framework is our test bed environment.

CONSIDERATIONS OF RESEARCH

To provide a personalization service, many technical aspects need to be reviewed. First, because the user moves, a personalization service needs to consider location-based information and the user's context, i.e., mobile terminal, time schedule, etc. Second, a mechanism is needed for understanding the user context; for this, a database allowing access to user information is required. Third, a model must be built of the knowledge reflecting user requirements. Fourth, current systems take a thin-client approach to terminal capabilities, in which the mobile client has minimal functional responsibilities. Last, there are integration conflicts between server systems, and server agents are not yet standardized in the market. Even though many of these considerations are not yet resolved, mobile web service applications are being studied to achieve the goals of service automation, intelligence and personalization. Among them, the focus here is on the personalization area, and the approach taken is context-aware. Context-aware optimization and modeling is a growing issue for mobile web services (Abowd et al., 1997; Lum and Lau, 2002; Pascoe et al., 2000), and this research needs context-based optimization technology to support intelligent mobile web services. Current web-based services are rapidly converging with mobile services. Both personalized and agile services, especially mobile web services, demand promptness of service through the collaboration of context optimization. Therefore, context-aware optimization research based on multiple ontologies may be applied to mobile customization.

MOBILE PERSONALIZATION SERVICE SYSTEM

The proposed framework for context-aware mobile service is shown in Fig. 2. The first two architectures are similar in concept to those adopted by standardization bodies. The last is the new architecture, currently being implemented at the laboratory level, which would be a candidate system for the next-generation mobile framework supporting context-aware services. As shown in Fig. 2, the traditional architectures in the first and second figures have six layers: an application layer, content layer, service layer, platform layer, communication layer and operating system layer. To these we add new layers: a personalized application layer, a semantic content processing layer and a context-aware service layer. These provide new functions supporting personalized services beyond those of traditional services.


[Figure 2 compares three frameworks: (1) an OMA-architecture-based mobile framework with an application layer (browser), content layer (XHTML, XML), service layer (LBS, MMS), WAP/ME (no platform) and communication layer (CDMA); (2) a platform-based framework that adds a mobile middleware platform (WIPI, BREW, J2ME); and (3) the proposed service-based framework, which adds a personalized application layer (collaborative applications), a semantic content layer (DAML+OIL) and a context-aware service layer (agent platform). All three run on the REX operating system.]

Figure 2: Comparison of the Framework Methods

[Figure 3 shows the architecture of the proposed system: the user connects over HTML to a mobile user agent, which communicates via SOAP with the mobile task-specific agent (containing the decision engine), web service ontologies (service, user, product), an agent community hosting the multi-agent system (negotiator, coordinator, seller agents), and the context model and database (model base management system, DBMS).]

Figure 3: Architecture of proposed System


SYSTEM FRAMEWORK

As shown in Fig. 3, our framework has five primary components: ontology, web service, mobile optimizing user agent (UA), mobile optimizing task-specific agent (TSA) and context-aware multi-agent system (MAS). A customer connects to a server by using a mobile device to access the UA. The UA provides information to the customer from the user ontology, which supplies the user's personal information such as profile and preferences, and sends a request message to the TSA. The TSA performs the requested action to support the corresponding service, according to optimization models executed by the Model Base Management System (MBMS). These optimization models are located in the model base, and are created by a context-model matching rule base along with the model base. A web service needs three kinds of server systems for context-aware optimization: a general web server, a MAS-embedded application server, and SOAP-protocol-based semantic web servers.
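The request flow between these components can be sketched roughly as follows. The class names, the preference fields and the model-selection rule below are hypothetical stand-ins for the actual UA, TSA and MBMS interfaces; the real system uses agent messaging and ontology lookups rather than direct method calls.

```python
# Rough sketch of the UA -> TSA -> MBMS request flow described above.
# All names, fields and models are illustrative assumptions.

class ModelBaseManagementSystem:
    """Holds named optimization models and picks one via a matching rule."""
    def __init__(self):
        self.models = {"price": lambda offers: min(offers),
                       "speed": lambda offers: max(offers)}

    def select_model(self, preference):
        # Context-model matching rule base, reduced here to a dict lookup.
        return self.models[preference]

class TaskSpecificAgent:
    """Executes the requested action using a model chosen by the MBMS."""
    def __init__(self, mbms):
        self.mbms = mbms

    def handle_request(self, preference, offers):
        model = self.mbms.select_model(preference)
        return model(offers)

class UserAgent:
    """Reads user preferences (standing in for the user ontology) and asks the TSA."""
    def __init__(self, tsa, profile):
        self.tsa = tsa
        self.profile = profile

    def request_service(self, offers):
        return self.tsa.handle_request(self.profile["preference"], offers)

ua = UserAgent(TaskSpecificAgent(ModelBaseManagementSystem()),
               {"user": "alice", "preference": "price"})
print(ua.request_service([340, 290, 410]))  # prints 290, the cheapest offer
```

The point of the sketch is the separation of concerns the text describes: the UA knows only the user's preferences, the TSA only dispatches, and the MBMS owns the optimization models.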


The web server plays an important role in the CAMA-my-Opt system. CAMA-my-Opt contains a decision engine in the M-optimization task-specific agent (MSA) to provide accurate decision results to the customer. It also connects with MAS subsystems in agent communities. If the UA retrieves the user preferences, the decision-making engine will send the information to a negotiator acting on behalf of the UA, which is located in the agent community.

TEST BED ENVIRONMENTS

Our test bed environment is shown in Figure 4. The user ontology, product ontology and service ontology are implemented with DAML+OIL, based on XML. The ontology is accessed and interpreted with the Jena API. Agents are implemented with JATLite on top of JDK version 1.4.1. Communication between the UA and the client is implemented with JavaServer Pages (JSP).


Through the agent communication language (ACL) KQML, CAMA-my-Opt implements the negotiator's communication with both the seller agents and the coordinator. The TSA's active environment is implemented through an Apache SOAP server, a web service communication channel that exists outside of the server and agent community. Information repositories such as the model base and database are implemented with Microsoft Access 2000.

Figure 4: Test bed System for mobile personalization service
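To illustrate the ACL traffic mentioned above: a KQML performative is an s-expression consisting of a performative name and keyword parameters. The helper below is a hypothetical sketch for composing such a message as a plain string; the field values are invented, and real JATLite agents use their own message classes rather than this helper.

```python
# Hypothetical sketch of composing a KQML performative string.

def kqml_message(performative, **params):
    """Compose a KQML string such as (ask-one :sender ... :receiver ...)."""
    fields = " ".join(f":{key.replace('_', '-')} {value}"
                      for key, value in params.items())
    return f"({performative} {fields})"

# Invented example: the negotiator asking a seller agent for a price.
msg = kqml_message("ask-one",
                   sender="negotiator",
                   receiver="seller-agent-1",
                   language="KQML",
                   ontology="product",
                   content='"(price notebook-pc ?p)"')
print(msg)
```

The `:content` field carries the actual query in whatever language the agents share; the surrounding performative only describes the communicative act (ask, tell, reply and so on).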

References

Abowd, G.D., Atkeson, C.G., Hong, J., Long, S., Kooper, R., and Pinkerton, M. (1997), Cyberguide: A Mobile Context-Aware Tour Guide, ACM Wireless Networks, Vol. 3, pp. 421-433.
Lum, W.Y. and Lau, F.C.M. (2002), A Context-Aware Decision Engine for Content Adaptation, IEEE Pervasive Computing, Vol. 1, No. 3, pp. 41-49.
Pascoe, J., Morse, D.R., and Ryan, N.S. (2000), HCI Issues in Fieldwork Environments, ACM TOCHI, Vol. 7, No. 3, pp. 417-437.


45

Finite-state Automata based Classification of News Segments Nitin Shrivastava

The goal of Content-Based Retrieval (CBR) is to provide quick access to relevant content stored in multimedia digital libraries that contain enormous amounts of video data. Most video CBR systems retrieve shots, or a collection of shots, based on user input. Thus, tools for retrieving segments of a video program are not explored fully, though they form a meaningful utility for a CBR user. Parsing video programs into program segments is useful in the retrieval of individual segments and in video summarization. Many video classes show structure that can be effectively modeled using Finite-State Automata (FSA). In this paper, we present an FSA-based system that extracts contextual structure from a news video database. Each video segment, such as a newscaster sequence, weather sequence, etc., becomes a node in the FSA. A transition is fired from one node to another based on arc conditions, which can be easily obtained by employing statistical methods on classified data. Modeling with FSA avoids the use of a complex rule-based system. Experimental results presented with the FSA approach for more than 8 hours of video data show an accuracy of 88% in recognizing the components of news video. Keywords: Video indexing, Content-based retrieval, Shot transition, Parsing, Segment retrieval, Finite-State Automata (FSA)

INTRODUCTION

With the advancement of technology, the amount of video data has increased enormously. Unlike text data, video data is unstructured, and searching for a desired segment (a segment is a shot or a group of shots that are relevant) is not straightforward. Techniques are, therefore, being sought for automatically classifying video data, for summarizing video data, and for recognizing important parts of a program. Parsing video programs into meaningful components hence becomes an important tool required in many applications. Consider, for example, parsing broadcast news into different sections, and providing the user with a facility to browse any one of them. Examples of such queries could be "Show me


the sport clip which came in the news" and "go to the weather report". If a video can be segmented into its scene units, the user can more conveniently browse through that video on a scene basis rather than on a shot-by-shot basis, as is commonly done in practice. This allows a significant reduction of the information to be conveyed or presented to the user. Wolf (1997) used the Hidden Markov Model (HMM) to parse a sequence into dialog scenes, master scenes and establishing scenes, using dynamic programming to determine the most likely sequence of states which generated the token sequence. Yeung et al. (1995) developed a scene transition graph model. However, they employ clustering, which is used to create the nodes of the scene transition graph; clustering does not provide the lexical information required to recognize many cinematic structures. Zhang et al. (1995) developed a hierarchical browser, which temporally samples the video but does not provide insight into the segment structure of the program. We employ Finite-State Automata (FSA) (refer to Lewis and Papadimitriou, 1998 for an introduction to FSA) to model video programs. FSA modeling is quite generic in nature and has been employed in applications encoding contextual information, such as speech recognition (Jurafsky and Martin, 2000). Though video programs change every day, they still have a similar structure, especially news programs. For example, the weather report always follows the newscaster shot and begins with a dissolve. The variations could be in the duration of the weather report, or in the number of shots depicting it. Thus, using a simple rule-based approach would be quite complex. Modeling with FSA, on the other hand, is effective as there is some uncertainty at each node; finally, segments are obtained by the construction of a parsing tree.
The contribution of the article lies in presenting an approach where structured domains like news, educational videos, etc., can be parsed such that a user can easily retrieve segments of the classified videos. However, the approach would fail if the underlying video domain does not have such a unique structure, or if the structure is highly variable with respect to time. The commercial application of this framework lies in video-on-demand, video indexing and classification. For the purpose of parsing in the FSA model, we selected features based on shot transition. A 'shot transition' is the boundary between two camera shots, usually generated with a post-production editing operation. When presented with a succession of shots, the viewer instinctively seeks to establish an association between them. The reason behind selecting shot transition features is that various previous studies have highlighted the importance of editing elements in film production in bringing forth desired effects. Nack and Parkes (1997) proposed an automated video editing model based on the actual practices of editors. Butler and Parkes (1996) use filmic time-space representations to enable interactive editing, and to assemble congruous sequences with spatial and temporal continuity. The works by Nack and Parkes (1997) and Butler and Parkes (1996) give a strong indication that editing follows well-established (though sometimes complex) rules. In our work, the shot transition rules and shot lengths dictated by news structure play a crucial role in determining news segments.

PARSING SYSTEM

Figure 1 shows the steps executed in the parsing system. The digitized media is kept in the database, and the shot transition module segments the raw video into shots. For detection of a shot transition and its type, we employed our shot detection algorithm (Mittal et al., 2002).


The sequence composer groups a number of shots (say 17) based on the similarity values of the shots, depending on the similarity of color, texture, motion, etc., of the individual shots (using techniques such as those given in Jain et al., 1999). The output of the sequence composer is a segment that contains media of a particular type. In video Content-Based Retrieval (CBR) applications, it is found that certain parts of video segments can be classified using learning tools such as Bayesian networks and labeled data (Mittal and Cheong, 2003). In the case of news programs, these are newscaster segments and segments with the begin logo. Once these segments are recognized, assuming context information is valid, certain other types of segments that occur in between newscaster segments and show particular characteristics can be identified; for example, static picture segments in news video can be recognized. During the learning of the FSA model, other segments of news are classified by an expert. The statistics for the variables are obtained and used to calculate Conditional Probability Tables (CPTs) for the transition variables. During the parsing phase, one begins at the start node of the FSA, and transitions are fired appropriately depending on the outgoing arcs as well as the probabilities associated with the arcs.

Figure 1: Steps in Parsing System
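The parsing phase described above, starting at the FSA's start node and firing transitions according to the observed segment features, can be sketched as below. The states, observation labels and probability values are invented for illustration; real CPT values would be estimated from the expert-labelled training data mentioned in the text.

```python
# Minimal FSA sketch for labelling news segments. Each arc maps an
# observed feature to (next_state, probability); the probabilities are
# invented placeholders for learned CPT values and are not used in
# this simplified deterministic walk.

TRANSITIONS = {
    "start":      {"logo": ("begin_logo", 0.95)},
    "begin_logo": {"anchor_shot": ("newscaster", 0.90)},
    "newscaster": {"dissolve": ("weather", 0.60),
                   "fast_cuts": ("sports", 0.30)},
    "weather":    {"anchor_shot": ("newscaster", 0.85)},
    "sports":     {"anchor_shot": ("newscaster", 0.85)},
}

def parse(observations):
    """Walk the FSA over an observation sequence; return the visited states."""
    state, path = "start", []
    for obs in observations:
        arcs = TRANSITIONS.get(state, {})
        if obs not in arcs:
            break  # no matching arc: stop (a real parser might backtrack)
        state, _prob = arcs[obs]
        path.append(state)
    return path

print(parse(["logo", "anchor_shot", "dissolve", "anchor_shot"]))
# -> ['begin_logo', 'newscaster', 'weather', 'newscaster']
```

A probabilistic parser would keep several candidate paths alive and score them by the arc probabilities; the deterministic walk above only shows how the news structure itself constrains which segment labels can follow which.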

SHOT-CUT AND SHOT-LENGTH DESCRIPTION Shot transitions can be analyzed along several dimensions. We outline descriptors that are included in our shot transition module.

Shot Lengths: These are the lengths of shots in the temporal domain. The length of any individual shot influences the length of the shots before and after it, and helps define a certain rhythm in the scene. The special relevance in news video is that the newscaster shot generally has a longer shot length than, for example, the sports segment.

Transition Type: When a transition from one shot to another occurs, the viewer becomes aware of several issues (Millerson, 1967), like a retained memory of the previous shot, the new shot's initial impact, and a comparison between the two shots. Shot transitions can be


divided into two types, based on the video production procedure and the duration of the change: abrupt transition (also referred to as cut, discontinuous transition) and gradual transition (also referred to as continuous transition), which includes fade in, fade out, dissolve, and wipe transitions.

Transition Lengths: Transition length is the number of frames over which the transition occurs. For a flat cut, it is 1, and for others it varies generally in the range of 10 to 100. Dissolves occurring in commercials are very short (e.g., 10 frames), while for sports, the dissolves are of longer duration (e.g., 50 frames).
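The three descriptors above can be bundled into a small feature record. The heuristic below turns the dissolve-length observation (short dissolves in commercials, long dissolves in sports) into code; the threshold and labels are chosen only for illustration and are not the system's actual classification rule.

```python
from dataclasses import dataclass

@dataclass
class ShotTransition:
    kind: str              # "cut", "dissolve", "fade", or "wipe"
    length_frames: int     # 1 for a flat cut, roughly 10-100 for gradual ones
    prev_shot_frames: int  # length of the shot preceding the transition

def guess_context(t: ShotTransition) -> str:
    """Toy heuristic: dissolve duration hints at the segment type."""
    if t.kind == "dissolve":
        # Illustrative threshold between commercial-style and sports-style
        # dissolves; a real system would learn this from labelled data.
        return "commercial-like" if t.length_frames <= 15 else "sports-like"
    return "unknown"

print(guess_context(ShotTransition("dissolve", 10, 120)))  # commercial-like
print(guess_context(ShotTransition("dissolve", 50, 300)))  # sports-like
```

In the full system such features feed the FSA arc conditions rather than a single if-statement, but the record shows exactly which measurements the shot transition module must emit.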

CONCLUSION

We presented an FSA-based system that extracts contextual structure from a news video database. Each video segment, such as a newscaster sequence, weather sequence, etc., becomes a node in the FSA. A transition is fired from one node to another based on arc conditions, which can be easily obtained by employing statistical methods on classified data.

References

Butler, S. and Parkes, A. P. (1996), Filmic Time-Space Diagrams for Video Structure Representation, Signal Processing: Image Communication, Vol. 8, pp. 269-280.
Jain, K., Vailaya, A., and Wei, X. (1999), Query by Video Clip, Multimedia Systems, pp. 369-384.
Jurafsky, D. and Martin, J. (2000), Speech and Language Processing, Prentice-Hall, Englewood Cliffs, NJ.
Lewis, H.R. and Papadimitriou, C.H. (1998), Elements of the Theory of Computation, Prentice Hall International Editions, Singapore.
Millerson, Gerald (1967), The Technique of Television Production, Hastings House, New York.
Mittal, A. and Cheong, L. F. (2003), Framework for Synthesizing Semantic-Level Indices, Multimedia Tools and Applications, pp. 135-158.
Mittal, A., Cheong, L.-F., and Leung, T.-S. (2002), Robust Identification of Gradual Shot-Transition Types, in IEEE International Conference on Image Processing (ICIP), pp. II-413-II-416.
Nack, F. and Parkes, A. (1997), The Application of Video Semantics and Theme Representation in Automated Video Editing, Multimedia Tools and Applications, pp. 57-83.
Wolf, W. (1997), Hidden Markov Model Parsing of Video Programs, in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, Munich, Germany, April, pp. 2609-2611.
Yeung, M., Yeo, B.-L., Wolf, W., and Liu, B. (1995), Video Browsing using Clustering and Scene Transitions on Compressed Sequences, in Multimedia Computing and Networking, San Jose, February.
Zhang, H. J., Smoliar, S. W., and Wu, J. H. (1995), Content-Based Video Browsing Tools, in Proc. IS&T/SPIE Conf. on Multimedia Computing and Networking 1995, San Jose, CA.


46

Unified Communication: Enabling the Knowledge Worker through Simplicity Somil Mishra*

Unified communications encompasses all forms of call and multimedia/cross-media message-management functions controlled by an individual user for both business and social purposes. Business and technology decision makers already place a high priority on providing optimized communication between remotely located knowledge workers and their teams. Indeed, the modern organization is already awash in communications devices. Unified communications has repeatedly been at the center of many discussions involving the future of communications. It encompasses a broad range of technologies and many potential applications. It is important to note that it is still in its infancy, and many definitions have been used by the messaging industry. Unified communications can be used as a business tool as well: it can provide efficient business communication or act as an interface to a business organization, and people can use the phone to get information or to make transactions. Looking at its business implications, it is important for a business's internal communications, for a business speaking to its customers, and for service providers as a business opportunity. None of the parties involved would like to miss this technological train. Then there are regulatory issues, and almost all vendors agree that the full benefits of unified communication (or any other IP technology) can only be had if convergence of the PSTN with data networks is allowed. This paper will attempt to explain the importance, functionality, usability and hurdles in the Indian context.

INTRODUCTION In 1970, the late management visionary Peter Drucker invented the term knowledge worker. The most important role in business, he predicted, would be "to make knowledge more productive." We, the individuals of a Technology and Knowledge Society, live in multiple networks and multiple electronic communities and have an ever increasing number of *

Somil Mishra, Assistant Professor (Information Technology), Dr. Gaur Hari Singhania Institute of Management & Research (J. K. Group Institution), Jaykaylon Colony, Kamla Nagar, Kanpur (U.P.).


innovative communication devices to choose from, to mention a few: PDAs (Personal Digital Assistants), mobile phones, pagers, handheld computers and WAP (Wireless Application Protocol) enabled devices. With the wide range of services and devices at our disposal, greater demands are being imposed on subscribers in the way we manage our communications. Today's busy consumer wants an intuitive, easy-to-use solution for unifying his communication. That goal remains at the forefront of the modern organization, where knowledge workers, from executive and middle management to contact center agents, travel constantly or work in mobile or distributed environments. According to a study conducted by Sage Research for Cisco Systems, 27 percent of the workforce of IP-enabled companies travels at least once per month. Of those companies, 58 percent say that one-fifth of their workforce travels out of the office monthly, and almost a quarter report that 40 percent or more of their employees work from the road at least once each month. The prevalence of the distributed workforce, located in multiple campuses or smaller offices, often around the world, represents another challenge. Nemertes Research reported in 2005 that 90 percent of employees work in locations other than headquarters, with between 40 and 70 percent of employees working in different locations from their supervisors. Nemertes also noted that the number of virtual workers (people who work in offices geographically separated from their supervisors) has increased by 800 percent since 2000. Indeed, millions of knowledge workers telecommute from home. According to published data from In-Stat, 44 million Americans telecommuted on at least a part-time basis in 2004, and 51 million are expected to do so in 2008; the same study estimated that there were 14 million full-time telecommuters in 2004.
Finally, the rise of outsourcing and of the networked virtual organization, in which companies concentrate on their core responsibilities and partner with other entities to complete the "value chain", makes it imperative that project team members be able to communicate transparently across organizations. Business and technology decision makers already place a high priority on providing optimized communication between remotely located knowledge workers and their teams. Indeed, the modern organization is already awash in communications devices. The Sage Research study revealed that those businesses average more than six communications devices and almost five communications applications per employee. At the same time, "quality of interaction" expectations have increased as the use of web chats and web conferences, videoconferences, and multimedia contact centers has grown. But the essential problem, how to create and sustain an effective communication environment for traveling personnel and distributed workgroups, remains (Haag et al., 2006). The essence of communication is breaking down barriers. In its simplest form, the telephone breaks distance and time barriers so that people can communicate in real time or near real time when they are not together. There are now many other barriers to be overcome. For example, people use many different devices to communicate (wireless phones, personal digital assistants [PDA], personal computers [PC], thin clients, etc.), and there are now new forms of communication as well, such as instant messaging. The unified communications concept involves breaking down these barriers so that people using different modes of communication, different media, and different devices can still communicate with anyone, anywhere, at any time (Szakonyi, 2006).


Key Drivers of Organizational Excellence

In other words, consider an employee who tries to reach a colleague about an office problem and, because of human limitations or communication complexity, is not able to contact or track that person. The resulting communication delay accumulates into task delay, schedule delay, and project delay, and ultimately converts into financial and other losses. This variety of communication methods and channels often leads to confusion and ambiguity and can result in four kinds of business losses and penalties:
• Communication-caused delay and disruption is a pervasive business problem
• Communications complexity affects long-term productivity, business process reform, and financial performance
• Decision support outcomes suffer from the inability to access and collaborate effectively with primary players
• Resources are underused or misallocated because of the complexity of communication

COMMUNICATIONS SYSTEMS AND MODELS IN UC
Unified communications encompasses several communication systems or models, including unified messaging; collaboration and interaction systems; real-time and near real-time communications; and transactional applications. Unified messaging focuses on allowing users to access voice, e-mail, fax and other mixed media from a single mailbox, independent of the access device. Multimedia services include messages of mixed media types such as video, sound clips, and pictures, and include communication via short message service (SMS). Collaboration and interaction systems focus on applications such as calendaring, scheduling, workflow, integrated voice response (IVR), and other enterprise applications that help individuals and workgroups communicate efficiently. Real-time and near real-time communications systems focus on fundamental communication between individuals using applications or systems such as conferencing, instant messaging, traditional and next-generation private branch exchanges (PBX), and paging. Transactional and informational systems focus on providing access to m-commerce, e-commerce, voice Web-browsing, weather, stock information, and other enterprise applications. Summarizing these needs and technologies (solutions), a Unified Communication platform attempts to marry a number of key technologies:
• Unified Messaging (E-mail, Voicemail, and Fax)
• Calendar
• Instant Messaging (Chat and Voice over IP)
• Web Conferencing
• Content Sharing
• Content Management
• Security
• Policy Management
• Access from everywhere (Web and Mobile platforms)
• Presence
• Identity Management
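The "single mailbox" idea at the core of unified messaging can be pictured as a simple aggregator. The class and method names below are illustrative assumptions for the sketch, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Message:
    medium: str   # "voice", "email", "fax", "sms", ...
    sender: str
    body: str
    read: bool = False

class UnifiedInbox:
    """A single mailbox for all media types, checked from one interface."""

    def __init__(self):
        self._messages = []

    def deliver(self, msg):
        self._messages.append(msg)

    def unread(self, medium=None):
        """All unread messages, optionally filtered by medium."""
        return [m for m in self._messages
                if not m.read and (medium is None or m.medium == medium)]

# The subscriber checks voice, e-mail and fax from a single place:
inbox = UnifiedInbox()
inbox.deliver(Message("voice", "alice", "call me back"))
inbox.deliver(Message("email", "bob", "status report attached"))
inbox.deliver(Message("fax", "carol", "signed contract"))
print(len(inbox.unread()), len(inbox.unread("voice")))  # 3 1
```

The point of the sketch is that the access device queries one store, regardless of the medium in which a message arrived.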


Unified communications provides control for the individual user. It can help to send and receive messages, whether they are voice, e-mail, or fax, and it will notify the user whenever mail arrives. The concept of notification is becoming a large part of messaging. Some people want to be reached at all costs, anywhere, at any time: whether they are at home or on vacation, they want to be notified of messages. Others are more protective of their privacy; they do not want to be reached, for example, when they are sleeping or having dinner. Unified communications technology provides the power to reach people almost anywhere, at any time, and the flexibility to allow people to control when they can be reached. Subscribers can interact with messages how and when they want. With unified communications, subscribers reduce the number of places they must check for incoming voice, fax, e-mail messages, and other media types: from a single interface, they can check for all messages. Like evolution in any other area, unified communication has followed a smooth path, from basic voice messaging systems through advanced voice messaging and unified messaging to unified communications. Today, technologies like voice recognition, voice-to-text, and text-to-voice are gaining momentum, and the day is not far off when one hundred percent accuracy in voice-to-text conversion will be achieved. Natural language processing is another paradigm. We may thus think of a commuting user calling from a railway station or a bus stand, listening to e-mails and replying to them, taking important decisions well in time and without the commuting delay. A well-developed speech recognition system will also aid easy navigation of menu items.
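The notification control described above, reaching some subscribers at all costs while protecting others' privacy, amounts to a per-subscriber policy. A minimal sketch, with a hypothetical quiet-hours rule that is purely illustrative:

```python
def should_notify(hour, quiet_hours, urgent=False):
    """Decide whether to push a notification to the subscriber.

    quiet_hours is the set of hours (0-23) during which the subscriber
    does not want to be reached; urgent messages override it. The policy
    is an assumed sketch, not a feature of any product named here.
    """
    if urgent:
        return True
    return hour not in quiet_hours

# Protect sleeping hours, 22:00 through 06:59:
quiet = set(range(22, 24)) | set(range(0, 7))
print(should_notify(23, quiet))               # False: privacy wins
print(should_notify(23, quiet, urgent=True))  # True: reached at all costs
print(should_notify(10, quiet))               # True: normal working hour
```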


A steep downward trend in telephone communication costs will ensure that this use is cost-effective and reliable. Unified communication can and will become a very effective business tool. It may enable an automated, round-the-clock front desk available on any phone. Unified Communication is the direct result of convergence in communication networks and applications. Differing forms of communication have historically been developed, marketed and sold as individual applications. The convergence of all communications on IP networks and on open software platforms is allowing a new paradigm for Unified Communication and its impact on how individuals, groups and organizations communicate. Unified Communication products are used by employees for their own communications as well as by organizations to support workgroup and collaborative communications (Bhaskar, 2007). These products also extend UC outside the boundaries of a company to enhance communications between organizations, as well as to support interaction among both very large public audiences and specific individuals. Five core communication product areas are converging in the current generation of products. As these areas converge, each is also evolving individually:
1. IP telephony and soft-phones are replacing the PBX;
2. Unified messaging is integrating voice mail with e-mail;
3. E-mail itself is evolving toward a more powerful desktop knowledge and contact management tool;
4. Separate voice, video and Web-conferencing capabilities will converge; and
5. IM is expanding its capabilities to incorporate presence for multiple communication methods (sometimes called rich presence) and has become an effective way to initiate differing forms of live conversation.
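Rich presence, the fifth of these areas, is what lets software pick the best way to initiate a live conversation. The sketch below assumes a hypothetical preference order and presence map; it is not drawn from any vendor's product:

```python
# Assumed preference order for initiating a live conversation,
# richest medium first; purely illustrative.
PREFERENCE = ["video", "voice", "im"]

def best_channel(presence):
    """presence maps channel -> "available" / "busy" / "offline"."""
    for channel in PREFERENCE:
        if presence.get(channel) == "available":
            return channel
    return "email"  # fall back to store-and-forward messaging

print(best_channel({"video": "busy", "voice": "available", "im": "available"}))
# "voice": the richest channel the callee is actually available on
print(best_channel({}))  # "email": nothing live is reachable
```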

VENDORS PROVIDING UC PRODUCTS
Let us gather information about the vendors who are working in this area and have offered hardware and software products with Unified Communication capabilities. Their offerings vary in flexibility as far as the core concept of UC is concerned. Some of the vendors and their products are:
Alcatel calls its UC product "OmniTouch Unified Communication" (OTUC). This software suite is based on a Java framework that provides common directory, Web Services and presence information. This is used by four applications: MyPhone provides softphone capabilities for any device. MyMessaging provides unified messaging integration with leading e-mail systems, including speech access; it also integrates with Alcatel voice mail products. MyAssistant provides flexible personal call routing. MyTeamwork provides triple-play conferencing, IM and presence capabilities. OTUC also offers Web services for business process integration.
The Avaya solution comprises many individual parts, some of which are excellent. However, Avaya's approach to UC remains fragmented and needs further integration. Mobility is offered through the One-X family, while individual Multi-vantage applications offer other functions. These include Avaya Modular Messaging, Avaya Meeting Exchange, Avaya Softphone, and Avaya Video Telephony. A promising part of the solution set is the Converged Communications Server, which will offer Session Initiation Protocol (SIP) and Application Enablement Services; however, these are at an early stage and have seen very limited deployment. Other key components, such as rich presence and consolidated reporting, are not yet defined. Avaya has strong individual communication capabilities, a broad set of partnerships and a large client base. However, it must increase the level of integration across its products and define a specific architecture before it can be considered to offer a UC solution.
AVST's CallXpress product offers vendor-independent voice mail and unified messaging. The speech assistant product provides real-time voice control and access to live calls, as well as access to calendaring, directory and notification. CallXpress supports Exchange, Notes and other IMAP-based e-mail environments. This product may be considered by those looking for a platform-independent migration path from voice mail to unified messaging and third-party UC solutions in time division multiplexing (TDM) and Internet Protocol telephony (IPT) environments that scale to 20,000 subscribers per server. Multisite environments are also supported.
Cisco Systems significantly advanced its UC portfolio in the past year. It re-branded many of its products with the Cisco Unified (CU) prefix; at the same time, it rolled out what promises to be comprehensive SIP support across its product line. Established products include CU CallManager for call-processing functionality, Cisco Unity for unified messaging, and CU MeetingPlace for voice, Web and videoconferencing. It also offers a full set of contact center and mobility products.
Currently in the testing phase are CU Presence Server, which will provide aggregated presence and contact preference information, and CU Personal Communicator, which will provide a desktop user interface to communication functions. Cisco has advanced its strong established product base by developing a next-generation communications road map based on open standards and convergence.
IBM's primary UC solution is based on capabilities delivered via Lotus Notes/Domino and Lotus Sametime. However, IBM also has communication services planned as part of WebSphere Application Server 6.1 (WAS). Lotus Notes/Domino offers messaging and unified messaging (in conjunction with partners such as Cisco and Avaya). Lotus Sametime offers IM, presence awareness and location presence, as well as click-to-call, click-to-talk, click-to-conference and Web conferencing with integrated audio-conferencing. IBM is also releasing a real-time collaboration gateway to allow interoperation with public IM networks, including Google Talk, and other protocols, including SIP. WAS will support SIP services, including proxy and registrar, providing a platform for third-party communication applications. It will also offer a rich presence server later in the year.
Microsoft's UC solution is based on Live Communications Server (LCS) and its client, Office Communicator (OC). Together, these offer presence, IM functionality, call control, a general-purpose client interface and integrations to other live communications. Unified messaging will be offered through the Exchange Server 2007 product, as well as through partnerships. Live Meeting offers conferencing and collaboration but, unfortunately, is currently offered only as a service, not a premises solution. Microsoft has partnerships to enable live voice integration, including PBXs and IP-PBXs, and also has native SIP-based voice solutions. Although many of the functions are not yet in general release, or are at an early stage, together they represent a strong emerging UC portfolio. These functions can also be integrated with Microsoft Speech Server, Active Directory and Microsoft's various mobility solutions, and will be offered as a premises solution through channels or as a service through partners.
Oracle's Collaboration Suite provides a set of UC functions. Real Time Collaboration provides Web conferencing, presence and IM. Oracle UM provides e-mail, voice mail and threaded discussions, and Oracle Workspaces provides a shared teamwork environment. Voice communication is offered as a point-to-point solution with Real Time Collaboration, which can also be integrated with third-party conferencing bridges. Additional voice functionality, such as SIP proxy and registrar, is planned as part of future releases of Oracle's Fusion Middleware, which is also the method of application integration. Organizations with existing Oracle deployments should review these solutions.
Siemens' HiPath OpenScape product is the most mature and open UC product in the market today. HiPath OpenScape offers desktop and speech communications interfaces with presence and conferencing, and works in multiple PBX environments. Related modules are HiPath Xpressions for unified messaging; HiPath ComAssistant for computer-telephony integration (CTI), routing and application integration; and HiPath CorporateConnect to support mobile and remote users. The solution integrates with Microsoft's Live Communications Server (LCS) and with IBM Sametime. Of particular interest is the approach that Siemens is taking toward offering complete integrations with vertical industry applications. This gives organizations the ability to communication-enable key business processes without having to make intrusive across-the-board infrastructure changes.
Looking at the industry trend, where everyone is trying to enrich their UC portfolio and take advantage of it before the technology life cycle phases it out, it may be said that many other players are yet to enter this segment of communication. Right now, business organizations and enterprises are adopting the concept of UC to enhance employee productivity and hence gain a higher degree of operating profit; we may assume that mobile and other telephony service providers will also sense its practicality, and in coming times we may observe operators launching UC concepts for retail and individual customers. Because no data is available from mobile service providers, which is natural since no one would float this idea before launching it as a commercial product, we are not able to comment on it, but our sense is that they may move in this direction in the near future. The UC products available at the enterprise level require a substantially high initial investment. For Indian companies that operate at the district or state level, this is a large investment to incur, and I am sure that they would not like to go for it in the first go. Secondly, they need to understand the real benefits of UC concepts and their applicability before planning to invest. So, other than the Blue Chip companies of India, firms will need to wait until UC is available as a small, integrated product. For example, a Private Branch Exchange (PBX) that integrates IP and PSTN communications and is available in the market at a moderate price would open the floodgates for everyone to practically use UC in the Indian context. So, UC in India, for non-Blue-Chip companies, is still a concept, and we need to wait until UC products and services are available to retail and individual consumers, or are offered by mobile companies at affordable prices.


References
Bhaskar, Bharat (2007), "Electronic Commerce", 2nd ed., Tata McGraw-Hill Publishing Co. Ltd., New Delhi.
Haag, Stephen; Baltzan, Paige; Phillips, Amy (2006), "Business Driven Technology", Tata McGraw-Hill Publishing Co. Ltd., New Delhi.
Szakonyi, Robert (2006), "Handbook of Technology Management", Viva Books Pvt. Ltd., New Delhi.


47. Self-monitoring and Self-adapting Operating Systems
Krishan Kant Yadav, Ghanshyam Yadav, Anil Singh

Extensible operating systems allow applications to modify kernel behavior by providing mechanisms for application code to run in the kernel address space. Extensibility enables a system to efficiently support a broader class of applications than is currently supported. This chapter discusses the key challenge in making extensible systems practical: determining which parts of the system need to be extended, and how. Determining which parts of the system need to be extended requires self-monitoring: capturing a significant quantity of data about the performance of the system. Determining how to extend the system requires self-adaptation. In this chapter, we describe how an extensible operating system (VINO) can use in situ simulation to explore the efficacy of policy changes. This automatic exploration is applicable to other extensible operating systems and can make these systems self-adapting to workload demands.

INTRODUCTION
Today's extensible operating systems allow applications to modify kernel behavior by providing mechanisms for application code to run in the kernel address space. The advantage of this approach is that it provides improved application flexibility and performance; the disadvantages are that buggy or malicious code can jeopardize the integrity of the kernel, and a great deal of work is left to the application or application designer. It has been demonstrated that it is feasible to use a few simple mechanisms, such as software fault isolation and transactions, to protect the kernel from errant extensions (Seltzer et al, 1996). However, it is not well understood how to identify those modules most critical to an application's performance, or how to replace or modify them to better meet the application's needs. The ability for applications to modify the kernel is the key to extensible operating systems, but it is also their critical drawback. The power derived from enabling applications to control their own resource allocation and kernel policy can lead to improved performance, increased functionality, or better system integration, but it imposes a tremendous burden on the application developer. The application designer must determine which kernel modules are critical to an application's performance and what modifications to those modules are required. Determining which parts of the kernel are critical to an application's performance requires an in-depth understanding of the demands that the application places on the operating system. In some application areas, such as database management, these demands are well understood (Stonebraker, 1981). However, in other application domains or in the case of emerging applications, these demands are not well understood, and there is no convenient method of obtaining this information.

THE NEED FOR DATA
One of the lessons learned from every major software development project is that there is never enough data about how the system is operating and what is going wrong. Saltzer reminds designers that measurement is more trustworthy than intuition and can reveal problems that users have not yet detected. He encourages designers to focus on understanding the inner workings of the system, rather than relying on the response to changes in workload (Saltzer and Gintell, 1970). Lucas (1971) conveys a similar message, encouraging system designers to regularly run benchmarks and build in as much instrumentation as possible. While many systems of the 1970s heeded these words of wisdom and built in significant performance evaluation tools (Ferrari, 1978), today's systems show a surprising dearth of native measurement tools. The 1980s produced a noticeable absence of well-instrumented systems. Today's common research platform, UNIX, initially had very little in the way of performance measurement tools, but now has a standard set of utilities that can provide constant monitoring of system state (e.g., netstat, rstat, and nfsstat (Baker et al, 1991); iostat, vmstat, and pstat (Ferrari, 1978); and systat (Computer Systems Research Group, 1994a, 1994b)). These utilities were designed for their output to be read by humans, not processed automatically. However, any system with such a collection of performance monitoring tools could benefit from the off-line analysis of regularly captured output from them. A second source of readily available data comes from the hardware itself. Several of today's microprocessors contain one or more instrumentation counters that provide invaluable data for performance analysis.
For example, the Pentium processor family contains a 64-bit cycle counter and two 40-bit counters that can be configured to count any one of a number of hardware events such as TLB misses, cache misses, and segment register loads (Intel Corp, 1994). The Sparc (Sun Inc, 1994) and Alpha (Digital Semiconductor, 1995) microprocessors also have performance counters, although neither has as extensive a set as those available on Pentium processors. Regular monitoring and collection from these hardware counters can also provide a source of information and insight into system behavior.
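Raw values from such free-running hardware counters are only meaningful as differences between snapshots. A sketch of turning two snapshots into per-cycle event rates; the counter names and values are illustrative, not a real MSR interface:

```python
def event_rates(snap_a, snap_b):
    """Convert two snapshots of free-running counters into events per cycle.

    Each snapshot maps a counter name to its value; "cycles" must be
    present. Counter names here are assumed for the example.
    """
    cycles = snap_b["cycles"] - snap_a["cycles"]
    return {name: (snap_b[name] - snap_a[name]) / cycles
            for name in snap_a if name != "cycles"}

before = {"cycles": 1_000_000, "tlb_misses": 200, "cache_misses": 5_000}
after = {"cycles": 3_000_000, "tlb_misses": 1_200, "cache_misses": 45_000}
rates = event_rates(before, after)
print(rates["cache_misses"])  # 0.02 cache misses per cycle over the interval
```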

SELF-MONITORING
Here, methods for making an operating system self-monitoring are introduced. The details are presented in the context of the VINO operating system, on which a prototype self-adapting system is being built. However, the principles and approaches are applicable to any number of systems that support extensibility [e.g., SPIN (Bershad et al, 1995)].


VINO is an extensible operating system designed to give resource-intensive applications greater control over resource management. VINO supports the downloading of kernel extensions (grafts), which are written in C++ and protected using software fault isolation. To facilitate graceful recovery from an extension failure, VINO runs each invocation of an extension in the context of a transaction. If the invocation fails or must be aborted (e.g., because it is monopolizing resources), the transaction mechanism undoes all actions taken by the invocation of the extension (Seltzer et al, 1996). The VINO kernel is constructed from a collection of objects and consists of an inner kernel and a set of resources. VINO provides two different modes of extensibility. First, a process can replace the implementation of a member function (method) on an object; this type of extension is used to override default policies, such as cache replacement or read-ahead. Second, a process can register a handler for a given event in the kernel (e.g., the establishment of a connection on a particular TCP port). Extensions of this type are used to construct new kernel-based services such as HTTP and NFS servers. The VINO approach to self-monitoring and adaptability takes advantage of the extensible architecture of the VINO system, providing the following features:
1. Continuous monitoring of the system to construct a database of performance statistics.
2. Correlation of the database by process, process type, and process group.
3. Collection of traces and logs of process activity.
4. Derivation of heuristics and algorithms to improve performance for the observed patterns.
5. In situ simulation of new algorithms using logs and traces.
6. Adaptation of the system according to the results of simulation.
These steps are described in more detail ahead.

Monitoring
Each VINO subsystem includes a statistics module that maintains counts of all the important events handled by the subsystem. For example, the transaction system records the number of transactions begun, committed, and aborted, as well as the number of nested transactions begun, committed, and aborted. The locking system maintains statistics about the number of lock requests and the time to obtain and release locks. Each module in the system records the statistics relevant to the module and provides interfaces to access these statistics. The first step in making VINO self-monitoring is to periodically record the statistics for each of the kernel's modules and accumulate a database of performance activity. In VINO, this mechanism can be provided by constructing an event graft (one that responds to a timer event) that polls the kernel modules and records the statistics it collects. We can factor out the overhead of the measurement thread itself by running the measurement graft in its own thread and using our normal thread accounting procedures.
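One tick of the polling just described might look like the following sketch. The Subsystem class and its counter names are stand-ins, since VINO's actual statistics interfaces are not reproduced here:

```python
class Subsystem:
    """Stand-in for a VINO module exposing its event counters."""

    def __init__(self, name):
        self.name = name
        self.counters = {"begun": 0, "committed": 0, "aborted": 0}

    def stats(self):
        """The interface every module provides to access its statistics."""
        return dict(self.counters)

def poll(modules, database, now):
    """One tick of the measurement graft: snapshot every module's counters."""
    for m in modules:
        database.append({"time": now, "module": m.name, **m.stats()})

# Accumulate a database of performance activity over time:
txn = Subsystem("transactions")
txn.counters["begun"] = 42
db = []
poll([txn], db, now=0.0)
print(db[0]["module"], db[0]["begun"])  # transactions 42
```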

Compiler Profile Output
The second source of system performance information comes from the compiler. Harvard's HUBE project has produced a version of the SUIF compilation system (Hall et al, 1996) that collects detailed profiling information. By compiling VINO using the SUIF compiler, we can collect detailed statistics concerning code-path coverage and branch prediction accuracy in the kernel. These statistics augment those collected with the measurement thread described in the previous section.

Tracing and Logging
The measurement thread output and compiler profile output represent static data that characterizes the behavior of the system. The next task is to capture dynamic data about the behavior of the system. We use VINO's grafting architecture to facilitate the collection of this dynamic data. We attach simple grafts to the inputs and outputs of all modules in the system. On input methods, these grafts record the incoming request stream and then pass the requests to the original destination module. This record of incoming requests, called a trace, captures the workload to a given module and is used to drive that module or its replacement during simulation. Similarly, grafts placed on output methods of a module record the outgoing messages or data before passing them along to the next module. This record of outgoing results, called a log, captures the results of a particular module and is used to compare the efficacy of a number of different modules or policy decisions within the module. The set of traces and logs are then available for simulation.
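The input and output grafts can be pictured as a wrapper around a module's entry point. The interface below is a hypothetical stand-in for VINO's grafting mechanism, not its real API:

```python
def with_trace_and_log(module_fn, trace, log):
    """Attach input/output grafts to a module's entry point.

    module_fn stands in for the original module; trace records the
    incoming request stream, and log records the outgoing results.
    """
    def wrapped(request):
        trace.append(request)       # input graft: record, then pass on
        result = module_fn(request)
        log.append(result)          # output graft: record, then pass on
        return result
    return wrapped

def cache_lookup(key):
    """Toy module under study: is the key currently cached?"""
    return key in {"a", "b"}

trace, log = [], []
instrumented = with_trace_and_log(cache_lookup, trace, log)
instrumented("a")
instrumented("c")
print(trace, log)  # ['a', 'c'] [True, False]
```

The trace can later drive a replacement module, and the log gives the baseline results to compare against.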

Simulation
The combination of statically gathered data, traces, and logs creates a complete picture of what the system is being asked to do. The next step is to evaluate the currently implemented policy and determine its efficacy and the potential for other possible policies. The VINO grafting mechanism provides the ability to perform in situ simulation. This is a significant improvement over conventional simulation methodology, where entirely separate simulation systems are typically used to evaluate design decisions. Since VINO provides the mechanism to replace a kernel module on a per-process basis, a simulation process can simply replace the module under investigation with an alternate implementation, rerun a trace, and record the result log that would be produced if the system used the alternate implementation. The result logs from multiple simulation runs are then compared to determine which of the simulations produced the best results. Most modules inside the VINO kernel can be instantiated as simulation modules. Simulation modules are identical to real modules except that they do not modify the global state. Therefore, simulations can run without affecting the rest of the system. Since the simulators and real modules share much of the code, we do not increase code size substantially and our results are realistic. Similar technology has been employed in the design and analysis of file systems, with encouraging results (Bosch and Mullender, 1996). Modules that support simulation must consist of two logical sets of states: the first set is writable by both the real and simulation instances of the module and is duplicated for each instance of such modules; the second set is writable only by the real instance of the module because the states are shared system-wide. Returning to the example of a buffer cache module, meta-data such as buffer headers falls into the first category, while the actual data falls in the second.
The simulation modules run without affecting the rest of the system and are not affected by other activities in the system. This allows the simulations to be run reproducibly under many different configurations.
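In situ simulation then amounts to replaying a recorded trace against an alternate module and comparing the result logs. A toy sketch, with stand-in cache policies and a deliberately simple comparison metric:

```python
def replay(module_fn, trace):
    """Drive a candidate module with a recorded trace; return its result log."""
    return [module_fn(request) for request in trace]

def candidate_wins(baseline_log, candidate_log):
    """Compare result logs; here, more cache hits wins (a toy metric)."""
    return sum(candidate_log) > sum(baseline_log)

trace = ["a", "b", "a", "c", "a"]             # recorded request stream
current_policy = lambda k: k == "a"           # stand-in for the installed module
alternate_policy = lambda k: k in ("a", "b")  # stand-in replacement module

print(candidate_wins(replay(current_policy, trace),
                     replay(alternate_policy, trace)))  # True
```

The replayed runs are reproducible because they read only the trace, mirroring how simulation modules avoid touching global state.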


Self-Adaptation
The four components described previously provide the framework for building a self-monitoring and adaptable operating system. The statistics gathered through self-monitoring provide invaluable feedback for identifying performance-critical portions of the kernel. More importantly, this data is gathered in the context of actual workloads, so it reflects the demands of the workload in practice, not the demands under some artificial benchmarking workload. Because data is collected via grafts, we have access to a low-level interface that provides more detailed data than is typically available to user-level utilities. The first step in making a system self-modifying is to endow it with the proper analytic tools to allow it to detect unusual or problematic occurrences. We use two different types of analysis to make system changes. Online analysis takes advantage of the data as it is being collected, while off-line analysis consists of a post-processing phase. In general, the off-line system is responsible for monitoring long-term behavior of the system, identifying common and uncommon profiles, suggesting thresholds to the online system, and evaluating the feasibility of system changes. The online system is responsible for monitoring the current state of the system, posing "questions" to the off-line system, and identifying trouble spots as they arise. As we will show in the following sections, the online and off-line systems work together, each gathering data or performing an analysis task best suited to that system.

OFF-LINE ANALYSIS
As described earlier, one part of our measurement system is a kernel thread that polls each of the subsystems at regular intervals and records the state of the performance counters. After each collection, this data is written to a system-wide database of statistics for later processing. The off-line analysis phase uses this database and specific queries from the online system as its input. The off-line analysis system consists of a collection of user-level processes that process system data to accomplish two goals: construct a characterization of the system under normal behavior, and detect anomalous behavior and use it to suggest performance thresholds to the online system. The latter goal depends significantly on the former: in order to identify anomalous behavior, we must have an accurate view of normal behavior. Let us consider the selection of the measurement interval as an example of how the off-line system works and how it can provide useful information to the online system to change how the system behaves. Our normal system characterization is based on time series analyses of resource utilizations. For each resource (e.g., memory, disk, network, processor) we maintain utilization statistics. Initially, the in-kernel thread collects utilization statistics every 100 ms. Periodically (at least once per hour), the data are collected every millisecond. All the data are written to the system-wide database. Each night, the off-line system performs variance analysis. First, it examines the one-millisecond data and determines whether the default measurement interval is sufficiently short (i.e., that it does not fail to capture important utilization patterns). If the current measurement interval is determined to be appropriate, the off-line system performs variance analysis on the normal intervals to determine whether the measurement interval can be increased without loss of information.
For each resource, the off-line system feeds this interval information back to the measurement thread, which modifies its behavior according to the recommendation of the off-line system. In this way, the online and off-line systems

Self-monitoring and Self-adapting Operating Systems

449

interact to allow the measurement thread to run as infrequently as possible without loss of important information.

The next task of the off-line system is to examine the day's resource usage profile, identifying any periods of anomalous behavior and informing the online system of suggested thresholds for resource utilization. We have analyzed the behavior of the systems that comprise our central computing facility and found that, in general, system usage follows a regular pattern over the course of a week (Park, 1997). Therefore, in analyzing our system profiles, we compare a day's profile to reference profiles constructed from profiles of the previous day, the same day of the previous week, and the same day of the previous month. We use these three profiles to generate a reference profile for the day and then compare the daily profile to the reference profile, looking for periods of abnormal utilization. If we detect abnormally high utilization, we trigger detailed analysis.

In order to perform detailed profile analysis, we decompose the daily profile into per-process profiles. Using these per-process profiles, we determine if the anomaly is caused by a single process, a small group of processes, or by an overall increase in system load. All the results of the profile analysis are then fed back into the off-line system, which derives expected loads and thresholds for the next day. The expected loads are based on the analysis of the current day's profile and the previous week's and month's profiles. The thresholds are based on the observed variances in the reference profiles. Finally, the next day's thresholds and expected profiles are fed back into the online system.
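The reference-profile comparison admits a compact sketch. This is hypothetical Python; the equal weighting of the three historical profiles and the two-standard-deviation threshold are our assumptions, not details given in the text:

```python
def reference_profile(prev_day, last_week, last_month):
    """Point-by-point average of the three historical daily profiles."""
    return [sum(vals) / 3.0 for vals in zip(prev_day, last_week, last_month)]

def abnormal_periods(daily, reference, variances, k=2.0):
    """Indices of periods whose utilization exceeds the reference by more
    than k standard deviations; these would trigger detailed analysis."""
    return [i for i, (d, r, v) in enumerate(zip(daily, reference, variances))
            if d > r + k * v ** 0.5]
```

A period flagged by `abnormal_periods` would be decomposed into per-process profiles, as described above.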

ONLINE ANALYSIS

The online system is responsible for monitoring the instantaneous resource utilization and the rate of change in utilization. In addition, it maintains efficiency statistics such as hit rates for all the cached objects in the system, contention rates in the locking system, disk queue lengths, and voluntary and involuntary context switch counts. Using the expected behavior and thresholds presented by the off-line system, the online system is responsible for detecting "red flag" conditions: cases where resource utilization is outside the expected variance and still climbing. When this condition is detected, the online system has two options: it can select an adaptation heuristic, or it can install the appropriate trace-generating graft for the resource in question, producing data for the off-line system. The former approach is effective when, as system designers, we have an a priori understanding of the types of algorithmic changes that might be beneficial. These techniques are similar to other dynamic operating system algorithms, such as multi-level feedback scheduling and working-set virtual memory management. We discuss these heuristics below. The latter approach is the method of last resort, in which we make an effort to develop new algorithms to improve the performance of a particular workload. When the system detects a red-flag condition and has no heuristic to improve it, it generates a trace of the poorly performing module and then signals the off-line system to analyze the trace. The off-line system computes the optimal behavior of the system under the imposed load and compares the optimal behavior to the actual behavior. If the actual behavior is within an acceptable margin of the optimal, then the off-line system concludes that the module in question cannot significantly improve performance alone, and it invokes detailed system analysis to identify other suspect modules.
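A red-flag check of this sort can be illustrated with a few lines of Python. This is a hedged sketch; the three-sample climb test and the names are ours, not the system's:

```python
def red_flag(samples, expected, tolerance):
    """Flag a resource whose latest utilization sample is outside the
    expected band and whose recent samples are still climbing."""
    outside = samples[-1] > expected + tolerance
    climbing = all(a < b for a, b in zip(samples[-3:], samples[-2:]))
    return outside and climbing
```

A steadily rising utilization that has crossed the expected band is flagged; a high but oscillating utilization is not, since it is not "still climbing".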

450

Key Drivers of Organizational Excellence

If, however, the current algorithm is significantly less effective than is optimally possible, our goal is for the off-line system to suggest new heuristics to the online system. Not surprisingly, this is one of the key areas for future research in the development of this system.

ADAPTATION HEURISTICS

The typical goal of adaptation is to decrease the latency of an application. As a rule, latency is caused by an application waiting for the availability of some resource. Waiting can be caused by the application blocking on user or other input, which is outside the control of the system, or it can be caused by the application blocking on a resource that is unavailable due to an operating system policy decision. We call the former form of blocking compulsory and the latter form needless. The goal of adaptation heuristics is to reduce needless blocking. In the rest of this section we discuss some causes of needless blocking, heuristics for identifying the causes, and methods for decreasing the amount of needless blocking in the system.

PAGING

Techniques for dealing with high paging overhead have been known for decades. In general, if an application is paging, it is assumed that the working set of the application is larger than the number of physical pages assigned to it and that it should be given more physical memory. When allocating additional pages to one application reduces the number of pages for some other application below its working set size, some application is swapped out in order to increase the amount of physical memory available. Similar techniques can be used in our environment. If an application is paging heavily, we generate a trace of the pages faulted in by the application and the value of the program counter for each page fault. We can collect more detailed traces by unmapping all pages in the address space, generating a full trace of page accesses. Although this technique adds considerable overhead, it provides the complete page access history. Once we have page access traces, we look for simple, well-known patterns: linear memory traversal or correlation between function calls and page references. In the former case, we perform simple prefetching, while in the latter case, we perform slightly more complex prefetching, faulting in appropriate data pages when the application enters the function for which the data will be referenced.
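As an illustration, detecting a linear traversal from a page-fault trace and choosing prefetch candidates might look like the following. This is a hypothetical sketch; the run-length threshold and prefetch depth are arbitrary choices, not values from the chapter:

```python
def is_linear_traversal(fault_trace, min_run=4):
    """True if the trace of faulted page numbers is dominated by +1 steps."""
    steps = [b - a for a, b in zip(fault_trace, fault_trace[1:])]
    return len(steps) > 0 and steps.count(1) >= min_run

def prefetch_candidates(last_page, depth=4):
    """Under a linear pattern, the next `depth` pages are worth prefetching."""
    return list(range(last_page + 1, last_page + 1 + depth))
```

The correlation-based case (function calls to page references) would need the program-counter values from the trace as well, which this sketch omits.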

DISK WAIT

We can make similar system modifications to alleviate disk waiting. When we detect an application that is spending time waiting for disk I/O, we generate the trace of disk block requests, capturing both those that were satisfied in the cache and those that required I/O operations. As in the case of page faults, we then look for common patterns. If the application is performing a linear pass over a file (which is the most common case (Baker et al, 1991)), our normal read-ahead policy should already be performing aggressive read-ahead. If the application is spending less time processing each page than it takes the system to read the page from disk, read-ahead can reduce, but not eliminate, the amount of time the application spends waiting for the disk. In this case, we have found a compulsory component of the disk waiting time that cannot be removed.
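The compulsory component identified here is simple arithmetic: even with perfect read-ahead, the disk and the CPU can at best overlap, so the per-page wait is bounded below by the gap between disk read time and per-page processing time. A minimal sketch with illustrative figures:

```python
def compulsory_wait_per_page(compute_ms, disk_read_ms):
    """With fully overlapped read-ahead, the application still waits for
    whatever part of each disk read the computation cannot hide."""
    return max(0.0, disk_read_ms - compute_ms)
```

For example, if reading a page takes 8 ms but the application processes it in 2 ms, at least 6 ms of waiting per page remains compulsory; if processing takes 10 ms, read-ahead can hide the disk entirely.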


If the application appears to be randomly accessing the file, we compare the traces of multiple runs of the application to determine if the blocks of the file are accessed in a repeatable order. If so, we graft an application-specific read-ahead policy that knows this ordering. Other patterns for which we can easily construct application-specific read-ahead policies include strided reads, common for scientific applications, and clustered reads, where a seek is followed by a fixed-size sequential read.
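Strided access is particularly easy to detect from a block trace. The sketch below is our own illustration; a real detector would need tolerance for noise in the trace:

```python
def detect_stride(block_trace):
    """Return the constant stride between successive block requests,
    or None if the trace is not strided."""
    steps = {b - a for a, b in zip(block_trace, block_trace[1:])}
    return steps.pop() if len(steps) == 1 else None
```

A detected stride could then parameterize an application-specific read-ahead graft of the kind described above.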

CPU-BOUND PROCESSES

Even if an application is simply CPU bound, there are techniques we can use to improve its performance. Using SUIF, we can gather information about branch mispredictions. With on-chip counters, such as those on the Pentium, we can gather information about L1 cache misses, code and data TLB misses, and branch target buffer hits and misses. If we find that the application is suffering from a large number of branch mispredictions or poor code layout, we request recompilation of the poorly performing kernel functions in the context of the particular application (Saltzer and Gintell, 1970). We then install this recompiled kernel segment as an application-specific graft.

INTERRUPT LATENCY

While the time spent waiting for an interrupt to take place is compulsory, the time between the occurrence of the interrupt and when the application runs is needless. Previous studies of latency point out that slow systems irritate users (Endo et al, 1996). By reducing this latency we can improve not only overall application performance, but also overall user satisfaction. In order to measure this latency, we time-stamp interrupts as they arrive and compute the difference between this time and when the process to which the interrupt is delivered is scheduled. Our goal is to detect interrupt handling latencies, or variance across latencies, that are perceptible to users. When we detect such cases, we try to determine the cause and either modify the system behavior appropriately or notify the application of potential areas for improvement. For example, if a process blocks too long behind higher priority processes, we recommend raising the application's priority. If we find that the process typically faults immediately upon being scheduled, this is a sign that the event handling code is being paged out; in this case, we pin the code pages of the event handler into memory. If the process is not yet ready for the event (i.e., a mouse interrupt arrives before the process performs a select() on the mouse device), we recommend that the application be restructured to check for events more frequently.
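Measuring this needless latency amounts to differencing two timestamps per event and summarizing the distribution. An illustrative Python sketch; the units and any perceptibility threshold would be system-specific assumptions:

```python
import statistics

def delivery_latency_stats(events):
    """events: (interrupt_timestamp_ms, process_scheduled_timestamp_ms) pairs.
    Returns the mean and standard deviation of interrupt-to-schedule latency."""
    latencies = [sched - irq for irq, sched in events]
    return statistics.mean(latencies), statistics.pstdev(latencies)
```

A large mean suggests scheduling or paging problems of the kinds listed above; a large deviation suggests jitter that users may perceive even when the mean is acceptable.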

LOCK CONTENTION

If a lock is highly contended (i.e., the queue of processes waiting for the lock is long), there is a problem with the structure of the applications using the lock. When we detect processes waiting on unusually long lock queues, we decrease the granularity of the locked resource (if possible) to reduce contention. Note that this reduction in granularity is frequently possible when the resource in question is a kernel resource whose structure we understand, but may not be possible if the resource is an application resource. In the latter case, we signal the application that contention on the particular resource is abnormally high.
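A simple contention trigger might look like this (hypothetical sketch; the queue-length threshold and the fraction of samples that counts as "highly contended" are tunable assumptions of ours):

```python
def recommend_finer_locks(queue_samples, max_queue=4, fraction=0.5):
    """True when sampled lock-queue lengths are long often enough that
    splitting the locked resource into finer-grained locks is worthwhile."""
    long_queues = sum(1 for q in queue_samples if q > max_queue)
    return long_queues / len(queue_samples) > fraction
```

When the trigger fires on an application-owned resource, the system can only signal the application rather than split the lock itself, as noted above.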


SUMMARY

In summary, we can construct a framework in which a system becomes self-monitoring, so that it may adapt to changes in the workload it supports, providing improved service and functionality. We can accomplish this by grafting the necessary components into the VINO system to make it self-monitoring; designing and developing a system to collect and analyze performance data; developing heuristics and algorithms for adapting the system to changes in workload; and using in situ simulation to compare competing policy implementations.

References

Baker, M., Hartman, J., Kupfer, M., Shirriff, K. and Ousterhout, J. (1991), Measurements of a Distributed File System, Proceedings of the 13th SOSP, Pacific Grove, CA, October, pp. 198-212.

Bershad, B., Savage, S., Pardyak, P., Sirer, E. G., Fiuczynski, M., Becker, D., Eggers, S. and Chambers, C. (1995), Extensibility, Safety, and Performance in the SPIN Operating System, Proceedings of the 15th Symposium on Operating System Principles, Copper Mountain, CO, December, pp. 267-284.

Bosch, P. and Mullender, S. (1996), Cut-and-Paste File-Systems: Integrating Simulators and File Systems, Proceedings of the 1996 USENIX Technical Conference, January, pp. 307-318.

Computer Systems Research Group, University of California, Berkeley (1994a), 4.4BSD System Manager's Manual, O'Reilly and Associates, ISBN 1-56592-080-5.

Computer Systems Research Group, University of California, Berkeley (1994b), 4.4BSD User's Reference Manual, O'Reilly and Associates, ISBN 1-56592-075-9.

Digital Semiconductor (1995), Alpha 21164 Microprocessor Hardware Reference Manual, Order Number EC-QAEQBTE, Digital Equipment Corporation, Maynard, MA.

Endo, Y., Wang, Z., Chen, J. and Seltzer, M. (1996), Using Latency to Evaluate Interactive System Performance, Proceedings of the 2nd OSDI, Seattle, WA, October.

Ferrari, D. (1978), Computer Systems Performance Evaluation, Prentice Hall, Englewood Cliffs, NJ, pp. 44-64.

Hall, M. W., Anderson, J. M., Amarasinghe, S. P., Murphy, B. R., Liao, S. W., Bugnion, E. and Lam, M. S. (1996), Getting Performance out of Multiprocessors with the SUIF Compiler, IEEE Computer, December.

Intel Corporation (1995), Pentium Processor Family Developer's Manual, Volume 3: Architecture and Programming Manual.

Lucas, Henry C. (1971), Performance Evaluation and Monitoring, ACM Computing Surveys, Vol. 3, No. 3, September, pp. 79-91.

Park, L. (1997), Development of a Systematic Approach to Bottleneck Identification in UNIX Systems, Harvard University Computer Systems Technical Report TR-05-97, April.

Saltzer, J. H. and Gintell, J. W. (1970), The Instrumentation of Multics, Communications of the ACM, Vol. 13, No. 8, August, pp. 495-500.

Seltzer, M., Small, C. and Smith, M. (1996), Symbiotic Systems Software, Proceedings of the Workshop on Compiler Support for Systems Software (WCSSS '96).

Seltzer, M., Endo, Y., Small, C. and Smith, K. (1996), Dealing with Disaster: Surviving Misbehaving Kernel Extensions, Proceedings of the Second Symposium on Operating System Design and Implementation, Seattle, WA, October.

Stonebraker, M. (1981), Operating System Support for Database Management, Communications of the ACM, Vol. 24, No. 7, July, pp. 412-418.

Sun Microsystems (1994), STP1021 SuperSPARC II Addendum for use with SuperSPARC and MultiCache Controller User's Manual, Revision 1.2.1, Sparc Technology Business, Mountain View, CA.

48

Selection of Software Process Model
Satish Bansal, K. K. Pandey

Software process improvement (SPI) aims to understand the software process as it is used within an organization and thus drive the implementation of changes to that process to achieve specific goals, such as increasing development speed, achieving higher product quality or reducing costs. Because requirements are dynamic in nature, managing change throughout the software development lifecycle has an important impact on the success of a project. The objective of this chapter is to describe both the selection and usage of grounded theory in this study and to evaluate its effectiveness as a research methodology for software process researchers. Accordingly, this chapter focuses on the selection of a software process model according to the software project. Keywords: Software engineering; Software process improvement; Qualitative research methods; SDLC; Software development

INTRODUCTION

The system development life cycle is the process, involving multiple stages (from establishing feasibility to carrying out post-implementation reviews), used to convert a management need into an application system that is custom-developed, purchased, or a combination of both. The systems development life cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application. Various SDLC methodologies have been developed to guide the processes involved, including the waterfall model (which was the original SDLC method); rapid application development (RAD); joint application development (JAD); the fountain model; the spiral model; build and fix; and synchronize-and-stabilize. Frequently, several models are combined into some sort of hybrid methodology. Documentation is crucial regardless of the type of model chosen or devised for any application, and is usually done in parallel with the development process. Some methods work better for specific types of projects, but in the final analysis, the most important factor for the success of a project may be how closely the particular plan was followed.


In general, an SDLC methodology follows several steps. The existing system is evaluated and deficiencies are identified; this can be done by interviewing users of the system and consulting with support personnel. The new system requirements are defined; in particular, the deficiencies in the existing system must be addressed with specific proposals for improvement. The proposed system is designed: plans are laid out concerning the physical construction, hardware, operating systems, programming, communications, and security issues. The new system is developed: the new components and programs must be obtained and installed, users of the system must be trained in its use, and all aspects of performance must be tested; if necessary, adjustments must be made at this stage. The system is put into use. This can be done in various ways: the new system can be phased in, according to application or location, with the old system gradually replaced, or, in some cases, it may be more cost-effective to shut down the old system and implement the new system all at once. Once the new system has been up and running for a while, it should be exhaustively evaluated. Maintenance must be kept up rigorously at all times, and users of the system should be kept up-to-date concerning the latest modifications and procedures.

The SDLC for an application system depends on the chosen acquisition/development mode. Application systems can be acquired or developed through various modes, including custom development using internal resources; custom development using fully or partly outsourced resources located onsite or offsite (locally or in an offshore location); vendor software packages implemented as-is with no customization; and vendor software packages customized to meet specific requirements.
During the SDLC of an application system, various risks may be encountered, including adoption of an inappropriate SDLC for the application system; inadequate controls in the SDLC process; user requirements and objectives not being met by the application system; inadequate stakeholder (including internal audit) involvement; lack of management support; inadequate project management; and inappropriate technology and architecture. Further, scope variations, time over-runs, and cost over-runs may afflict the system. Jawadekar Waman S (2004) proposes that inadequate quality of the application system, insufficient attention to security and controls (including validations and audit trails), performance criteria not being met, an inappropriate resourcing/staffing model, inadequate staffing skills, insufficient documentation, and inadequate contractual protection may affect the SDLC adversely. Inadequate adherence to the chosen SDLC and/or development methodologies, insufficient attention to interdependencies on other applications and processes, inadequate configuration management, insufficient planning for data conversion/migration and cutover, and post-cutover disruption to business are also common problems in the SDLC.

According to Basili and R. Selby (1993) and V. Basili and S. Green (1994), the software developer should consider the following while planning the review of the SDLC of an application system:

1.
- The acquisition/development mode, technology, size, objectives and intended usage of the application system
- Project structure for the acquisition and implementation
- Skill and experience profile of the project team
- The SDLC model chosen
- The formal SDLC methodology and customized process design adopted, if any
- Risks that are likely to affect the SDLC

2.
- Any concerns or issues perceived by appropriate management
- The current SDLC stage
- Any prior review of the earlier SDLC stages of the application system
- Any prior SDLC reviews of similar application systems
- Any other risk assessments/reviews by the IS auditor or others (such as IT) that have a bearing on the proposed review

The choice of model also changes depending on customer interaction, risk perception, the understanding and application skill of the software engineer, cost, and the domain knowledge of the software developer. Based on these factors, each model has a different focus. The efficiency of operations through these models depends upon the capability maturity model (CMM) level of the software solution development organization. According to M. J. Bassman, F. McGarry and R. Pajerski (1994), there is a need for a capability maturity model.

CAPABILITY MATURITY MODEL (CMM)

What is it: A process improvement model that provides a set of industry-recognized practices to address productivity, performance, costs and stakeholder satisfaction in the systems engineering and software development process. It:
1. Helps your organization examine the effectiveness of your processes
2. Establishes priorities for improvement
3. Helps you implement these improvements

How it is different: The CMMI provides an integrated, consistent, enduring framework for enterprise-wide process improvement and can accommodate new initiatives as future needs are identified, unlike single-discipline or stove-pipe models that can result in confusion and higher costs when implemented together.

Who it is for: Those providing systems and software engineering products and services to organizations, who transform customer needs, expectations, and constraints into products and support these products throughout their life. If you manufacture, code, analyze, maintain or document a product, you need this.

Irrespective of which CMM level the organization has, the software engineer has choices for the selection of a software process model. The main models are:
1. Linear Sequential Model (Waterfall)
2. Prototype Model
3. Rapid Application Development Model (RAD)
4. Incremental Model
5. Boehm Spiral Model


Khurana Rohit (2007) proposed that a software process model framework should be specific to the project. Thus, it is essential to select the software process model according to the software project; selection is efficient when the model is matched to the user requirements. All models satisfy user requirements, but requirements change continually, and all other factors must also be considered when selecting a software process model. The basic characteristics required to select the process model are project type and associated risk, the requirements of the project, the users associated with the project, costing, and domain knowledge and technology.

(A) Project type and associated risk (risk level):
- Spiral: high risk
- Incremental: medium risk
- RAD: low risk
- Prototype: very low risk
- Waterfall: no risk

(B) Requirements of the project (nature of requirements):
- Spiral: not clear, need confirmation
- Incremental: clear, need confirmation
- RAD: clear, large
- Prototype: clear, complex
- Waterfall: stable, simple

(C) Users associated with the project (customer interaction):
- Spiral: continuous
- Incremental: very frequent
- RAD: user involvement in all phases
- Prototype: a couple of times, until the prototype is approved
- Waterfall: limited

(D) Technology:
- Spiral: integration of multiple technologies
- Incremental: configuration management
- RAD: team management, proven technology
- Prototype: proven, but needs testing
- Waterfall: proven and well understood

(E) Domain knowledge:
- Spiral: master knowledge with high experience
- Incremental: good knowledge with experience
- RAD: knowledge with high experience
- Prototype: common knowledge with experience
- Waterfall: common knowledge
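The factor charts can be condensed into a rule-of-thumb selector. The sketch below is our own simplification, using only the first two factors (risk level and nature of requirements); the string labels are assumptions for illustration:

```python
def select_model(risk, requirements):
    """Map the risk level and the nature of the requirements,
    as in charts (A) and (B), to a candidate process model."""
    if risk == "high" or requirements == "not clear":
        return "Spiral"
    if risk == "medium":
        return "Incremental"
    if risk == "low":
        return "RAD"
    if requirements == "clear, complex":
        return "Prototype"
    return "Waterfall"
```

A fuller selector would also weigh customer interaction, technology, and domain knowledge, as the remaining charts suggest.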

According to L. Briand, V. Basili, Y. M. Kim and D. R. Squier (1994), the key to successful project development is "effectiveness", which is reflected in the attributes of project planning, project cost estimating, project measurements, project milestone tracking, project quality control, risk management, project change management, processes and communications. It also subsumes capable project managers, capable technical personnel, significant use of specialists and substantial volumes of reusable material.

CONCLUSION

Developers cannot continue to produce software at the previous rate and change all their practices at the same time. Process models are important because industry cares about their intrinsic qualities, such as uniformity of performance across different projects and productivity, with the aim of improving time to market and reducing production costs. Apart from the CMM level of the organization, an influencing factor on the quality of software development is clarity of, and agreement on, the requirements analysis and its specification. Furthermore, the focus of each process model is different. In the Linear Sequential Model it is the solution. In the Prototype Model it is proving the solution before design and development. In RAD it is an architectural and development methodology for quick development. In the Incremental Model the focus is on the solution and its implementation strategy. In the Boehm Spiral Model the focus and emphasis are on technology and customer involvement to evolve a solution together. A software process model will provide the best software solutions if these key areas are managed well. Further, management of these areas is effective if the organization and its developers have knowledge of the domain, application, tools and technology.

References

Basili, V. and Selby, R. (1993), Experimental Software Engineering Issues: Critical Assessment and Future Directions, LNCS, Vol. 706, Springer-Verlag.

Basili, V. and Green, S. (1994), Software Process Evolution at the SEL, IEEE Software, July.

Bassman, M. J., McGarry, F. and Pajerski, R. (1994), Software Measurement Guidebook, Software Engineering Series, SEL-94-002.

Briand, L., Basili, V., Kim, Y. M. and Squier, D. R. (1994), A Change Analysis Process to Characterize Software Maintenance Projects, Proceedings of the International Conference on Software Maintenance.

Jawadekar, Waman S. (2004), Software Engineering: Principles & Practices, Tata McGraw-Hill Publication.

Khurana, Rohit (2007), Software Engineering: Principles & Practices, Vikas Publication.


49

Sensor Network Security
Rakesh Prasad Sarang

As sensor network use continues to grow, so does the need for effective security mechanisms. A sensor network is a special type of network. Because sensor networks may interact with sensitive data and operate in hostile, unattended environments, a sensor network should be a distributed network without a central management point; this will increase the vitality of the sensor network. However, if designed incorrectly, it will make the network organization difficult, inefficient and fragile. It is imperative that these security concerns be addressed from the beginning of the system design. However, due to inherent resource and computing constraints, security in sensor networks poses different challenges than traditional network design and computer security. There is currently enormous research potential in the field of wireless sensor network security, and familiarity with the current research in this field will benefit researchers greatly. With this in mind, this chapter surveys the major topics in wireless sensor network security, presents the obstacles and requirements of sensor security, and discusses recent research results on supporting services and applications in sensor networks.

INTRODUCTION

Sensor networks are quickly gaining popularity because they are potentially low-cost solutions to a variety of real-world challenges. Their low cost provides a means to deploy large sensor arrays in a variety of conditions capable of performing both military and civilian tasks. But sensor networks also introduce severe resource constraints due to their lack of data storage and power. Both of these represent major obstacles to the implementation of traditional computer security techniques in a wireless sensor network. The unreliable communication channel and unattended operation make the security defenses even harder. Indeed, wireless sensors often have the processing characteristics of machines that are decades old (or older), and the industry trend is to reduce the cost of wireless sensors while maintaining similar computing power. With that in mind, many researchers have begun to address the challenges of maximizing the processing capabilities and energy reserves of wireless sensor nodes while also securing them against attackers.


All aspects of the wireless sensor network are being examined, including secure routing. In addition to those traditional security issues, we observe that many general-purpose sensor network techniques (particularly the early research) assumed that all nodes are cooperative and trustworthy. This is not the case for most real-world wireless sensor networking applications, which require a certain amount of trust in the application in order to maintain proper network functionality. Researchers therefore began focusing on building a sensor trust model to solve the problems beyond the capability of cryptographic security (Chan et al, 2003).

Sensor networks and other large-scale networks of small, embedded devices may require novel routing techniques for scalable and robust data dissemination. Directed diffusion is an example of such a technique: it incorporates data-centric routing coupled with application-specific in-network processing. Such techniques can help establish energy-efficient data dissemination paths between sources (sensors) and sinks (data processing or human interface devices).

We classify the main aspects of wireless sensor network security into four major categories: the obstacles to sensor network security, the requirements of a secure wireless sensor network, attacks, and defensive measures. The organization of the chapter then follows this classification. For completeness, we also give a brief introduction to related security techniques, while providing appropriate citations for those interested in a more detailed discussion of a particular topic (Carman et al, 2000).

Architecture of Sensor Network

Figure 2: Representative sensor network, showing an adversary, a base station, and sensor nodes connected by low-power radio links. All nodes may use low-power radio links, but only laptop-class adversaries and base stations can use low-latency, high-bandwidth links.

SENSOR NETWORK APPLICATIONS

As stated in the introduction, sensor network applications represent a new class of applications that are:

1. Data driven, meaning that the applications collect and analyze data from the environment and, depending on redundancy, noise, and properties of the sensors themselves, can assign a quality level to the data; and

2. State based, meaning that the application's needs with respect to sensor data can change over time based on previously received data.

Typically sensors are battery-operated, meaning they have a limited lifetime during which they provide data to the application. A challenge in the design of sensor networks is how to maximize network lifetime while meeting application quality of service (QoS) requirements. For these types of applications, the needs of the application should dictate which sensors are active and the role they play in the network topology. To further illustrate this point, we discuss some specific sensor network applications and how they can benefit from this form of interaction (Braginsky and Estrin, 2002).

Environmental Surveillance
Consider an environment where multiple sensors (e.g., acoustic, seismic, video) are distributed throughout an area such as a battlefield. A surveillance application can be designed on top of this sensor network to provide information to an end-user about the environment. The application may require a minimum percentage of sensor coverage in an area where a phenomenon is expected to occur. The sensor network may consist of sensors with overlapping coverage areas, providing redundant information. If the application does not require all this redundant information, it would be desirable to conserve energy in some sensors by allowing them to sleep, thereby lengthening the lifetime of the network. For example, as sensors use up their limited energy, the application would like to use different sets of sensors to provide the required QoS (in this case, minimum sensor coverage area). This requires that the application manage the sensors over time. Such management can be as simple as turning sensors on and off, or as complex as selecting the routes for data to take from each sensor to the collection point in a multi-hop network. Furthermore, the needs of the surveillance application may change as a result of previously received data. For example, if the application determines that an intrusion has occurred, the application may assume a new state and require more sensors to send data to more accurately classify the intrusion. Implementation of these tasks can be complex, and they are difficult to incorporate into applications.
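The coverage management described above can be sketched as a greedy set-cover selection: activate the fewest sensors whose overlapping coverage still spans every point of interest, and let the rest sleep. The chapter prescribes no particular algorithm, so this is an illustrative sketch only; the function and parameter names are ours.

```python
# Illustrative sketch (not from the chapter): a greedy set-cover heuristic for
# picking a small subset of redundant sensors whose coverage areas still span
# every target point; sensors left out of the subset may sleep to save energy.

def choose_active_sensors(coverage, targets):
    """coverage: dict sensor_id -> set of target points it can observe.
    Returns a list of sensor ids whose union covers `targets` (greedy)."""
    uncovered = set(targets)
    active = []
    while uncovered:
        # Pick the sensor that covers the most still-uncovered targets.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:          # remaining targets cannot be covered at all
            break
        active.append(best)
        uncovered -= gained
    return active

# Three sensors with overlapping coverage; two of them suffice.
cov = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {2, 3, 4}}
print(choose_active_sensors(cov, {1, 2, 3, 4}))  # → ['s1', 's2']
```

Greedy set cover is not optimal in general, but it is cheap enough to re-run as sensors deplete their batteries, which matches the rotation of active sensor sets described above.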

Home/Office Security
Home/office security systems are becoming increasingly complex, monitoring not only for intrusion into the space but also for the occurrence of substances such as smoke or carbon monoxide gas. To be able to monitor the application variables, the security system must obtain data from heterogeneous sensors such as acoustic, motion, heat, and vibration sensors scattered throughout the home/office. Making these sensors wireless and battery powered allows them to be easily placed in existing homes without major household modifications. To make the sensor network last as long as possible, the application may only want a subset of the sensors activated at any time. Once a sensor's activation has been triggered through some event, the application must analyze the data and decide how to change the configuration of active sensors. This can be modeled as the application changing state based on received data. For different application states, different sets of sensors should be activated to provide the greatest benefit to the security application. Thus, the application needs to be able to control which sensors are activated over time. At the same time, to allow the application to work as long as possible, the set of sensors activated for a given application state should be chosen wisely. Performing such optimizations and controlling the sensors and network functionality from within the application would place an unreasonable burden on the application (Brutch and Ko, 2003).

Medical Monitoring
As a final example, consider a personal health monitor application running on a PDA that receives and analyzes data from a number of sensors (e.g., ECG, EMG, blood pressure, blood flow, pulse oximeter). The monitor reacts to potential health risks and records health information in a local database. Considering that most sensors used by the personal health monitor will be battery operated and use wireless communication, it is clear that this application can benefit from intelligent sensor management that provides energy efficiency as well as a way to manage requirements, which may change over time with changes in the patient's state. For example, higher quality might be required for certain health-related variables during high-stress situations such as a medical emergency, and lower quality during low-stress situations such as sleep.

Security Requirements
A sensor network is a special type of network. It shares some commonalities with a typical computer network, but also poses unique requirements of its own. Therefore, we can think of the requirements of a wireless sensor network as encompassing both the typical network requirements and the unique requirements suited solely to wireless sensor networks (Chan et al, 2005).

Data Confidentiality
Data confidentiality is the most important issue in network security. Every network with any security focus will typically address this problem first. In sensor networks, confidentiality relates to the following:

1. A sensor network should not leak sensor readings to its neighbors. Especially in a military application, the data stored in the sensor node may be highly sensitive.

2. In many applications nodes communicate highly sensitive data, e.g., key distribution; therefore it is extremely important to build a secure channel in a wireless sensor network.

3. Public sensor information, such as sensor identities and public keys, should also be encrypted to some extent to protect against traffic analysis attacks.

The standard approach for keeping sensitive data secret is to encrypt the data with a secret key that only intended receivers possess, thus achieving confidentiality.

Data Integrity
With the implementation of confidentiality, an adversary may be unable to steal information. However, this does not mean the data is safe. The adversary can change the data, so as to send the sensor network into disarray. For example, a malicious node may add some fragments or manipulate the data within a packet. This new packet can then be sent to the original receiver. Data loss or damage can even occur without the presence of a malicious node, due to the harsh communication environment. Thus, data integrity ensures that any received data has not been altered in transit.

Data Freshness
Even if confidentiality and data integrity are assured, we also need to ensure the freshness of each message. Informally, data freshness suggests that the data is recent, and it ensures that no old messages have been replayed. This requirement is especially important when shared-key strategies are employed in the design. Typically shared keys need to be changed over time. However, it takes time for new shared keys to be propagated to the entire network. In this case, it is easy for the adversary to use a replay attack. Also, it is easy to disrupt the normal work of the sensor if the sensor is unaware of the new key change time. To solve this problem a nonce, or another time-related counter, can be added into the packet to ensure data freshness (Aura et al, 2001).
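The counter-based freshness check described above can be sketched as follows. This is a toy illustration, not a protocol from the chapter, and the class and field names are ours:

```python
# Toy illustration of data freshness (assumption, not from the chapter):
# each packet carries a monotonically increasing counter, and a receiver
# rejects any packet whose counter is not strictly newer than the last one
# accepted from that sender, defeating simple replay.

class FreshnessChecker:
    def __init__(self):
        self.last_seen = {}            # sender id -> highest counter accepted

    def accept(self, sender, counter):
        """Accept a packet only if its counter is newer than any seen before."""
        if counter <= self.last_seen.get(sender, -1):
            return False               # stale or replayed packet
        self.last_seen[sender] = counter
        return True

fc = FreshnessChecker()
print(fc.accept("node7", 1))   # True  (first packet)
print(fc.accept("node7", 2))   # True  (newer counter)
print(fc.accept("node7", 2))   # False (replay of counter 2)
```

In a real deployment the counter would be authenticated together with the payload (e.g., covered by the packet's MAC), since an unauthenticated counter can simply be rewritten by the adversary.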

Availability
Adjusting the traditional encryption algorithms to fit within the wireless sensor network is not free, and will introduce some extra costs. Some approaches choose to modify the code to reuse as much code as possible. Some approaches try to make use of additional communication to achieve the same goal. What's more, some approaches force strict limitations on data access, or propose an unsuitable scheme (such as a central point scheme) in order to simplify the algorithm. But all these approaches weaken the availability of a sensor and sensor network for the following reasons:

1. Additional computation consumes additional energy. If no more energy exists, the data will no longer be available.

2. Additional communication also consumes more energy. What's more, as communication increases, so too does the chance of incurring a communication conflict.

3. A single point of failure will be introduced if using the central point scheme. This greatly threatens the availability of the network.

The requirement of security not only affects the operation of the network, but also is highly important in maintaining the availability of the whole network.

Sensor Network Security

Self-Organization
A wireless sensor network is typically an ad hoc network, which requires every sensor node to be independent and flexible enough to be self-organizing and self-healing according to different situations. There is no fixed infrastructure available for the purpose of network management in a sensor network. This inherent feature brings a great challenge to wireless sensor network security as well. For example, the dynamics of the whole network inhibits the idea of pre-installation of a shared key between the base station and all sensors. Several random key pre-distribution schemes have been proposed in the context of symmetric encryption techniques. In the context of applying public-key cryptography techniques in sensor networks, an efficient mechanism for public-key distribution is necessary as well. In the same way that distributed sensor networks must self-organize to support multi-hop routing, they must also self-organize to conduct key management (Deng et al, 2002).

Figure 3: Establishing keys between small groups vs. the entire sensor network (building trust relations among sensors).

If self-organization is lacking in a sensor network, the damage resulting from an attack or even the hazardous environment may be devastating.

Time Synchronization
Most sensor network applications rely on some form of time synchronization. In order to conserve power, an individual sensor's radio may be turned off for periods of time. Furthermore, sensors may wish to compute the end-to-end delay of a packet as it travels between two pairwise sensors. A more collaborative sensor network may require group synchronization for tracking applications. Researchers have proposed sets of secure synchronization protocols for sender-receiver (pairwise) synchronization, multihop sender-receiver synchronization (for use when the pair of nodes are not within single-hop range), and group synchronization.
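A common building block for the sender-receiver synchronization mentioned above is a two-way timestamp exchange. Assuming a symmetric link delay (an assumption of this sketch, not a claim from the chapter), the receiver's clock offset can be estimated from four timestamps:

```python
# Illustrative sketch of pairwise clock-offset estimation via a two-way
# message exchange, assuming the request and reply experience the same delay.

def estimate_offset(t1, t2, t3, t4):
    """t1: request sent (A's clock), t2: request received (B's clock),
    t3: reply sent (B's clock),    t4: reply received (A's clock).
    Returns B's clock offset relative to A, assuming symmetric delay."""
    return ((t2 - t1) + (t3 - t4)) / 2

# B's clock runs 5 time units ahead of A's; one-way delay is 2 units.
print(estimate_offset(t1=100, t2=107, t3=108, t4=105))  # → 5.0
```

Secure versions of such protocols additionally authenticate the timestamps (e.g., with a MAC) so that an adversary cannot skew the exchange by forging or delaying messages selectively.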

Secure Localization
Often, the utility of a sensor network will rely on its ability to accurately and automatically locate each sensor in the network. A sensor network designed to locate faults will need accurate location information in order to pinpoint the location of a fault. Unfortunately, an attacker can easily manipulate non-secured location information by reporting false signal strengths, replaying signals, etc. A technique called Verifiable Multilateration (VM) addresses this. In multilateration, a device's position is accurately computed from a series of known reference points. In this technique, authenticated ranging and distance bounding are used to ensure accurate location of a node. Because of distance bounding, an attacking node can only increase its claimed distance from a reference point. However, to ensure location consistency, an attacking node would also have to prove that its distance from another reference point is shorter; since it cannot do this, a node manipulating the localization protocol can be found. For large sensor networks, the SPINE (Secure Positioning In sensor NEtworks) scheme extends this approach (Anderson et al, 1996).
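The consistency argument behind verifiable multilateration can be sketched as a simple check: distance bounding only lets an attacker enlarge a measured distance, so a claimed position that implies a shorter distance to any verifier than its measured bound must be rejected. This is a simplified sketch; the coordinates, tolerance, and function names are our assumptions:

```python
# Simplified sketch of the consistency check behind verifiable multilateration:
# reject a claimed position if it would require being *closer* to some verifier
# than the distance bound that verifier measured (bounds can only be enlarged).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def position_consistent(claimed, verifiers, bounds, tol=1e-6):
    """claimed: (x, y) the node reports; verifiers: verifier coordinates;
    bounds: distance bounds measured by distance bounding (lower bounds)."""
    return all(dist(claimed, v) + tol >= b for v, b in zip(verifiers, bounds))

verifiers = [(0, 0), (10, 0), (5, 10)]
true_pos = (5, 4)
bounds = [dist(true_pos, v) for v in verifiers]   # honest measurements

print(position_consistent(true_pos, verifiers, bounds))  # True: honest claim
print(position_consistent((2, 2), verifiers, bounds))    # False: claim implies
                                                         # shrinking a bound
```

The full protocols (VM, SPINE) additionally require the claimed position to lie inside the triangle of verifiers and authenticate the ranging messages; this sketch shows only the core distance-consistency test.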

Flexibility
Sensor networks will be used in dynamic battlefield scenarios where environmental conditions, threat, and mission may change rapidly. Changing mission goals may require sensors to be removed from or added to an established sensor network. Furthermore, two or more sensor networks may be fused into one, or a single network may be split in two. Key establishment protocols must be flexible enough to provide keying for all potential scenarios a sensor network may encounter. Protocols that require knowledge of what other nodes will be co-deployed are discouraged, whereas protocols with minimal preconceptions are encouraged.

Authentication
An adversary is not just limited to modifying the data packet. It can change the whole packet stream by injecting additional packets. So the receiver needs to ensure that the data used in any decision-making process originates from the correct source. On the other hand, when constructing the sensor network, authentication is necessary for many administrative tasks (e.g., network reprogramming or controlling sensor node duty cycle). From the above, we can see that message authentication is important for many applications in sensor networks. Informally, data authentication allows a receiver to verify that the data really is sent by the claimed sender. In the case of two-party communication, data authentication can be achieved through a purely symmetric mechanism: the sender and the receiver share a secret key to compute the message authentication code (MAC) of all communicated data (Albers and Camp, 2002). Each sensor may also share a unique symmetric key with each locator; this key is preloaded on each sensor.
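The symmetric MAC mechanism described above can be sketched with the Python standard library's HMAC implementation. The chapter does not name a concrete MAC algorithm, so HMAC-SHA256 and the key value here are our assumptions:

```python
# Minimal sketch of symmetric message authentication with a shared key.
# HMAC-SHA256 is our illustrative choice; the chapter's MAC is abstract.
import hmac
import hashlib

SHARED_KEY = b"preloaded-pairwise-key"     # hypothetical pre-loaded key

def send(payload: bytes):
    """Sender computes a MAC tag over the payload with the shared key."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload, tag

def receive(payload: bytes, tag: bytes) -> bool:
    """Receiver recomputes the MAC and compares in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = send(b"temp=21.5")
print(receive(msg, tag))             # True:  tag verifies, data accepted
print(receive(b"temp=99.9", tag))    # False: modified data is rejected
```

Note that a MAC alone authenticates the data but not its freshness; combining the MAC with the counter or nonce discussed under Data Freshness covers both.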

Features
1. Sensor networks are relevant to practitioners, researchers, and students in all application areas, including environmental monitoring, industrial sensing and diagnostics, automotive and transportation, security and surveillance, military and battlefield uses, and large-scale infrastructural maintenance.

2. They skillfully integrate the many disciplines at work in wireless sensor network design: signal processing and estimation, communication theory and protocols, distributed algorithms and databases, probabilistic reasoning, energy-aware computing, design methodologies, evaluation metrics, and more.

3. They demonstrate how querying, data routing, and network self-organization can support high-level information-processing tasks.

Obstacles of Sensor Security
A wireless sensor network is a special network which has many constraints compared to a traditional computer network. Due to these constraints it is difficult to directly employ existing security approaches in the area of wireless sensor networks. Therefore, to develop useful security mechanisms while borrowing ideas from current security techniques, it is necessary to know and understand these constraints.

CONCLUSIONS
The chapter discussed the four main aspects of wireless sensor network security: obstacles, requirements, applications, and features. Within each of those categories we have also subcategorized the major topics, including routing, trust, denial of service, and so on. Our aim is to provide both a general overview of the rather broad area of wireless sensor network security and the main citations, such that further review of the relevant literature can be completed by the interested researcher. As sensor networks continue to grow and become more common, we expect that further expectations of security will be required of these wireless sensor network applications. In particular, the addition of public-key cryptography and of public-key based key management will likely make strong security a more realistic expectation in the future. We also expect that the current and future work in privacy and trust will make wireless sensor networks a more attractive option in a variety of new arenas.

References
Albers, P. and O. Camp (2002), Security in Ad Hoc Networks: A General Intrusion Detection Architecture Enhancing Trust Based Approaches, in First International Workshop on Wireless Information Systems, 4th International Conference on Enterprise Information Systems.
Anderson, R. and M. Kuhn (1996), Tamper Resistance: A Cautionary Note, in The Second USENIX Workshop on Electronic Commerce Proceedings, Oakland, California.
Anderson, R. and M. Kuhn (1997), Low Cost Attacks on Tamper Resistant Devices, in IWSP: International Workshop on Security Protocols, LNCS.
Aura, T., P. Nikander, and J. Leiwo (2001), DoS-Resistant Authentication with Client Puzzles, in Revised Papers from the 8th International Workshop on Security Protocols, pages 170-177, Springer-Verlag.
Bose, P., P. Morin, I. Stojmenovic, and J. Urrutia (2001), Routing with Guaranteed Delivery in Ad Hoc Wireless Networks, Wireless Networks, 7(6), 609-616.
Braginsky, D. and D. Estrin (2002), Rumor Routing Algorithm for Sensor Networks, in WSNA'02: Proceedings of the 1st ACM International Workshop on Wireless Sensor Networks and Applications, pages 22-31, New York, NY, USA, ACM Press.
Brutch, P. and C. Ko (2003), Challenges in Intrusion Detection for Wireless Ad-Hoc Networks, in 2003 Symposium on Applications and the Internet Workshops (SAINT'03 Workshops).


Carman, D. W., P. S. Kruus, and B. J. Matt (2000), Constraints and Approaches for Distributed Sensor Network Security, Technical Report 00-010, NAI Labs, Network Associates, Inc., Glenwood, MD.
Chan, H. and A. Perrig (2003), Security and Privacy in Sensor Networks, IEEE Computer Magazine, pages 103-105.
Chan, H. and A. Perrig (2005), PIKE: Peer Intermediaries for Key Establishment in Sensor Networks, in IEEE Infocom 2005.
Chan, H., A. Perrig, and D. Song (2003), Random Key Pre-Distribution Schemes for Sensor Networks, in Proceedings of the 2003 IEEE Symposium on Security and Privacy, page 197, IEEE Computer Society.
Deng, J., R. Han, and S. Mishra (2002), INSENS: Intrusion-Tolerant Routing in Wireless Sensor Networks, Technical Report CU-CS-939-02, Department of Computer Science, University of Colorado.
Deng, J., R. Han, and S. Mishra (2005), Security, Privacy, and Fault Tolerance in Wireless Sensor Networks, Artech House, August 2005.
Wicker, S. (2002), An Empirical Study of Epidemic Algorithms in Large Scale Multi-Hop Wireless Networks, Intel Research, Tech. Rep. IRB-TR-02-003, March.

50

Overview of the Embedded Operating System

Krishan Kant Yadav

Today embedded operating systems are used in everything from set-top boxes, hand-held PDAs, and industrial controllers to cell phones and the Hubble Space Telescope. This chapter provides an overview of embedded operating systems (OS) by first defining the terms "embedded system" and "embedded operating system" and then addressing the various features and applications of embedded operating systems. The chapter also mentions the widely used embedded OSs in the market and briefly describes the steps necessary to develop these systems. Finally, it presents the widely used tools for developing these systems.

INTRODUCTION
An embedded system is a special-purpose computer that is used inside a device. For example, a microwave contains an embedded system that accepts input from the panel, controls the LCD display, and turns on and off the heating elements that cook the food. Embedded systems generally use microcontrollers that combine many functions of a computer on a single device. Motorola and Intel make some of the popular microcontrollers. An embedded operating system is an operating system for embedded computer systems. These operating systems are designed to be very compact and efficient, forsaking much functionality that non-embedded operating systems provide and which may not be used by the specialized applications they run (Bershad et al., 1995; Savage et al., 1995). They are often also real-time operating systems. Commonly used embedded operating systems include Microsoft Windows CE.


FEATURES OF AN EMBEDDED OPERATING SYSTEM
For an embedded OS to be regarded as good, it should:

- Be modular: Modularity is a concept that has applications in the contexts of computer science, particularly programming, as well as cognitive science in investigating the structure of mind. A module can be defined variously, but generally must be a component of a larger system, and operate within the system independently of the operations of the other components.

- Be scalable: Scalability is the property of a multiprocessing computer that defines the extent to which the addition of more processors increases aggregate computing capability. Windows NT Server 4.0, for example, is generally considered to be scalable to eight Intel processors.

- Have CPU support: An OS is meaningless without a compatible CPU.

- Have a small footprint: In computer science, the footprint of a piece of software is the portion of computing resources, typically RAM, CPU time, and disk space, that it requires in order to operate.

- Have a large device driver database: The greater the device driver database of an embedded operating system, the greater the number of devices that can be controlled through that particular OS, which is desirable.

- Be flexible: By flexibility we mean that the embedded OS must be adjustable to change or modification (Mullender et al., 1996).

- Support many processor architectures with a single OS: The embedded OS should ideally be hardware independent. Due to programming constraints, hardware independence can be incorporated to a certain limit by providing support for many processor architectures.

APPLICATIONS OF EMBEDDED OPERATING SYSTEMS
- Control and monitoring applications
- Industrial controllers
- TV set-top boxes (TiVo)
- Handheld PDAs
- Automobile computers
- Telecom and networking hardware
- Robotics
- Airways
- Security systems
- Recent embedded projects include a home cordless telephone


WIDELY USED EMBEDDED OPERATING SYSTEMS
- Wind River Systems: VxWorks, pSOS
- QNX Software Systems: QNX
- Green Hills Software: Integrity
- Mentor Graphics: VRTX
- Palm Computing: Palm OS
- Microsoft: Embedded NT/XP ("real-time" control), Windows CE (CE.NET) for Internet devices, and Pocket PC for handheld PCs and PDAs

STEPS IN DEVELOPMENT OF EOS
Real-time systems usually have varied requirements:

- Functional/business requirements of the system
- Requirements to support and manage the special hardware of the system
- Requirements to monitor the system so as to have minimum downtime

Such varied requirements generally necessitate the use of multiple development tools (Endo et al, 1996). The following should be verified before the tools are finalized:

- Integration of the various tools with the RTOS, and also integration of the development tools amongst themselves (compatibility with each other).
- Some of these tools will run on the host machine; in such cases, their integration with the host operating system must be kept in mind (Lucas, 1971).
- Run-time requirements of the executables generated by these tools.
- The memory requirements of the COTS components should be well within the overall resources available to the system.

The list of development tools includes modeling tools, cross-platform development tools, the programming language, the IDE, configuration management tools, and test coverage tools.

CROSS-PLATFORM DEVELOPMENT
Cross-platform development means that development is done on a different platform (called the source or host platform) than the one on which the system will actually run (called the target platform). For example, a system is developed on Windows NT and is then downloaded onto custom hardware running a separate RTOS for the purpose of testing. Cross-platform development brings issues related to differences in the source and target environments. The developers should develop the system as per the facilities available in the target environment and not what is available on the host. For example, the target RTOS may not provide all standard C/C++ libraries, which are otherwise available on a typical Windows/Unix setup, so such libraries cannot be used. Further, the code has to be built (compiled and linked) for the target environment.

INTEGRATED DEVELOPMENT ENVIRONMENT (IDE)
An IDE includes language compilers, debuggers, editors, and a source code control system. Green Hills, for example, offers its MULTI environment for a number of RTOS platforms, including pSOS and VxWorks. MULTI is a multi-language embedded development environment featuring source-level debugging, execution profiling, memory leak detection, a graphical class browser, a program builder, an editor, and source code control. MULTI can be hosted on PCs, UNIX workstations, and VAX/VMS systems, and supports a number of target processors (Saltzer et al, 1970). Despite the advantages of using a development environment from a third party, some designers may find it more advantageous to use a totally integrated environment from a single vendor. When a single vendor develops both pieces of code, the development environment can include features that are RTOS-aware. Microware, for example, offers a comprehensive development environment called FasTrak for OS-9 that includes the C compiler, source control, debugger, and other features (Park, 1997). The FasTrak debugger is one module that provides designers with substantial advantages based on its close ties with the RTOS. It can display a broad range of CPU and OS resources, including CPU and FPU registers, stack frames, local variables, and target system memory. Moreover, it can display and control RTOS task-level resources.

PROGRAMMING LANGUAGE
The choice of programming language is very important for real-time embedded software. The following factors influence the choice of language:

- A language compiler should be available for the chosen RTOS and hardware architecture of the embedded system.
- Compilers should be available on multiple operating systems and microprocessors. This is particularly important if the processor or the RTOS needs to be changed in the future (Lucas, 1971).
- The language should allow direct hardware control without sacrificing the advantages of a high-level language.
- The language should provide memory management control, such as dynamic and static memory allocation.
- Real-time systems are increasingly being designed using object-oriented methodology, using a language that supports object-oriented concepts.

The languages typically used for embedded systems are assembly language, C, C++, Ada, and Java (Seltzer et al, 1996). Choosing to write code in assembler should be done on a case-by-case basis. While code written in assembler can be much faster, it is usually very processor-specific and less portable than a high-level language. C is by far the most popular language and the language that maximizes application portability. C++ is used when real-time applications are developed using object-oriented methodology. Members of the scientific community, for example, have millions of lines of existing FORTRAN code that implement proprietary numerical algorithms. Likewise, developers targeting military applications (for the USA) may require Ada. The availability of languages for a development project is limited to the languages that have been ported to a specific RTOS environment and, in some cases, to languages that have been ported to a specific target single-board computer. Some RTOS vendors offer their own versions of popular languages, as they optimize the compiler for the RTOS architecture. Microware, for example, has its own C compiler called Ultra C (Ferrari, 1978).

DEBUGGING
Once the executable binary image is downloaded onto the target machine for testing purposes, the need arises to set breakpoints in the program and observe its execution.

Memory Management: Real-time systems allocate memory dynamically during their execution. The following points need to be catered to when designing the embedded software:

- Set up the application partition at launch time, create all tasks initially (at start-up), and create stack space for all tasks.
- Determine the amount of free memory in the application heap.
- Allocate and dispose of blocks of memory (in the application heap) using partitioned memory, which has fixed-size buffers.
- Minimize fragmentation in the application heap caused by blocks of memory that cannot move.
- Implement a scheme to avoid low-memory conditions.

Memory fragmentation is a common problem when dynamic memory allocation (use of malloc()/new in C/C++) is used. In order to prevent this from happening, real-time systems use partitioned memory, which has fixed-size buffers. A good approach is not to use dynamic object creation, and instead create objects at startup itself (Ferrari, 1978). For hard real-time systems, there is almost no opportunity to do any dynamic memory allocation/retrieval, and dynamic memory allocation should be minimized for systems having soft/firm timing requirements. RTOSs do not provide efficient protection against an overflow of memory stacks, as provided by traditional operating systems. So, one needs to keep in mind the maximum stack size allowed for a function while declaring its local variables. If the total size of local variables exceeds the maximum the stack allows, then the system can overwrite data in another task's memory area. This does not apply when, in C++, "new" is used, because "new" allocates memory from the heap, not from the stack. Some RTOSs do not have task-level memory protection implemented, so memory diagnostics are required to check for memory corruption.
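The partitioned, fixed-size-buffer scheme described above can be sketched as follows (Python is used for illustration only; the class, names, and sizes are ours, not an RTOS API). Because every block has the same size, allocation and release are O(1) free-list operations and the pool can never fragment:

```python
# Hedged sketch of a fixed-size partition allocator: all blocks in the pool
# have one size, so freeing and reallocating never fragments the pool, at the
# cost of some internal waste inside each block.

class PartitionPool:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.free = list(range(num_blocks))     # free-list of block indices

    def alloc(self, nbytes):
        """Return a block index, or None if the request cannot be served."""
        if nbytes > self.block_size or not self.free:
            return None                         # too big, or pool exhausted
        return self.free.pop()                  # O(1): no searching, no splitting

    def release(self, block):
        self.free.append(block)                 # O(1): no coalescing needed

pool = PartitionPool(block_size=128, num_blocks=4)
blocks = [pool.alloc(100) for _ in range(4)]
print(pool.alloc(100))                 # None: pool exhausted
pool.release(blocks[0])
print(pool.alloc(100) is not None)     # True: the freed block is reused
```

A real RTOS typically offers several pools with different block sizes, and the application picks the smallest pool whose block size fits the request, bounding internal waste while keeping allocation deterministic.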

DEVELOPMENT AND TESTING TOOLS
These are the tools that Sarawak (2003) has listed in his seminal chapter:

Qt/Embedded
Qt/Embedded, the embedded Linux port of Qt, is a complete and self-contained C++ GUI and platform development tool for Linux-based embedded development. Qt/Embedded includes a complete set of classes, operating system encapsulation, data-structure classes, and utility and integration classes. Additionally, Qt/Embedded includes a variety of tools to assist in the development, testing, and debugging of applications. The broad scope of the Qt/Embedded API enables it to be used across a wide variety of development projects. Qt/Embedded is used to develop numerous types of products and devices on the market, ranging from consumer electronics (mobile phones, web pads, and set-top boxes) to industrial controls (medical imaging devices, kiosks, mobile information systems, and others).

GNUPro
GNUPro, recognized as the world's most popular embedded development tool suite, is a collection of tools and runtime technologies that enables the creation, deployment, and testing of target software components for devices. The core of the GNUPro tool suite includes specially configured, built, tested, and supported versions of the GNU development tools:

i. GCC version 3.4-mep-031219
ii. GDB version 6-mep-031219
iii. Insight, the graphical debugger interface, version 6-mep-031219
iv. GNU binutils version 2.14-mep-031219
v. The newlib ISO C runtime library, version 1.10.0-mep-031219 (Park, 1997)

GNUPro for MeP also includes a complete, customizable software simulator for the MeP core, and the MeP-Integrator tool is used to customize the GNUPro tool suite, enabling or disabling optional instructions, memory maps, UCI/DSP units, hardware engines, and more. This enables full hardware/software co-design of MeP-based solutions. The new version also has the ability to customize the GNUPro binaries without rebuilding them from source. With this functionality, users can generate custom MeP tools very quickly. This latest release of Red Hat's GNUPro for the media embedded processor runs on:

a. Red Hat Enterprise Linux v.3 for x86
b. Sun Solaris 7, 8, and 9 for SPARC
c. Microsoft Windows 2000 and XP

SBC-GX1 Development Kit for Embedded Linux
A complete, ready-to-run embedded PC platform for application development: the SBC-GX1 development kit for embedded Linux, with optional Java technology, features a compact implementation of the GNU/Linux operating system based on the proven v2.6 Linux kernel release. This has been optimized for use on Arcom hardware platforms and is pre-loaded and configured into the embedded flash (Endo et al., 1992). This offers an embedded version of Linux with a proven background, combined with a highly reliable journaling flash file system. The development kit can also be used with Java technology (J2ME) implemented with the IBM J9 JVM. Using the WebSphere Studio Device Developer (WSDD) IDE tools for embedded Java, we can create powerful Java applications with rich communication and device management features. Arcom can also provide drivers to communicate with a WebSphere MQ Integrator information broker using MQTT (telemetry transport). The development kit includes the SBC-GX1 embedded PC single-board computer fitted with 64 Mbytes of DRAM and 16 Mbytes of Flash, along with cables, connectors, power supply, software, and an information CD (Hall, 1996).

Other GUI/Windowing Toolkits

Developers writing applications targeted at a device running one of the Windows Embedded operating systems have a choice of tools. Applications for Windows CE .NET may be developed using Platform Builder 4.0 or eMbedded Visual C++ 4.0.

i. Platform Builder 4.0 can be used to develop Win32 applications and DLLs (which can expose functions or resources, or be a device driver) or to incorporate applications developed using eMbedded Visual C++ 4.0 into a device image. Platform Builder 4.0 also provides the ability to generate custom Software Development Kits (SDKs), which install into eMbedded Visual C++ and give application developers the ability to target a custom device.

ii. eMbedded Visual C++ 4.0 can be used to develop Win32 applications and DLLs, and can also be used to develop applications based on the Microsoft Foundation Classes (MFC) or the Active Template Library (ATL). Smart Device Extensions for Visual Studio .NET is a set of enhancements that extend Visual Studio .NET, enabling developers to develop, debug and deploy applications for devices running the .NET Compact Framework and the Smart Device Extensions (Endo, Wang and Chen, 2002). Applications for Windows XP Embedded may be developed using any tool set that allows targeting a Windows XP Professional system, including tools such as Visual Studio .NET, Visual C++ and Visual Basic. Applications may use the full Win32 Application Programming Interface (API) and additional services such as the Component Object Model (COM), assuming these capabilities are included in the specific Windows XP Embedded run-time image. An application may also use all of the additional technologies supported by Windows XP Professional, including COM+, DirectX and .NET technologies (Mullender, 1996).

CONCLUSION

As embedded systems (PDAs, cell phones, point-of-sale devices, VCRs, industrial robot controls, or even toasters) become more complex in hardware with every generation, and more features are added to them by the day, the applications they run increasingly need to be built on actual operating system code in order to keep development time reasonable.

References

Bershad, B., Savage, S., Pardyak, P., Sirer, E. G., Fiuczynski, M., Becker, D., Eggers, S., Chambers, C. (1995), Extensibility, Safety, and Performance in the SPIN Operating System, Proceedings of the 15th Symposium on Operating System Principles, Copper Mountain, CO, Dec., 267-284.
Bosch, P., Mullender, S. (1996), Cut-and-Paste File-Systems: Integrating Simulators and File Systems, Proceedings of the 1996 USENIX Technical Conference, Jan 1996, pp. 307-318.
Computer Systems Research Group, University of California, Berkeley (1994), 4.4BSD System Manager's Manual, O'Reilly and Associates, 1-56592-080-5.


Computer Systems Research Group, University of California, Berkeley (1994), 4.4BSD User's Reference Manual, O'Reilly and Associates, 1-56592-075-9.
Digital Semiconductor (1995), Alpha 21164 Microprocessor Hardware Reference Manual, Order Number: ECQAEQB-TE, Digital Equipment Corporation, Maynard, MA.
Endo, Y., Wang, Z., Chen, J., Seltzer, M. (1996), Using Latency to Evaluate Interactive System Performance, Proceedings of the 2nd OSDI, Seattle, WA, October.
Ferrari, D. (1978), Computer Systems Performance Evaluation, Prentice Hall, Englewood Cliffs, NJ, 44-64.
Hall, M. W., Anderson, J. M., Amarasinghe, S. P., Murphy, B. R., Liao, S. W., Bugnion, E., Lam, M. S. (1996), Getting Performance out of Multiprocessors with the SUIF Compiler, IEEE Computer, December 1996.
Intel Corporation (1995), Pentium Processor Family Developer's Manual, Volume 3: Architecture and Programming Manual, Intel Corporation.
Lucas, Henry C. (1971), Performance Evaluation and Monitoring, ACM Computing Surveys, 3(3), September, pp. 79-91.
Park, L. (1997), Development of a Systematic Approach to Bottleneck Identification in UNIX Systems, Harvard University Computer Systems Technical Report, TR-05-97, April.
Saltzer, J. H. and Gintell, J. W. (1970), The Instrumentation of Multics, Communications of the ACM, 13(8), August, pp. 495-500.
Seltzer, M., Small, C., Smith, M. (1996), Symbiotic Systems Software, Proceedings of the Workshop on Compiler Support for Systems Software (WCSSS '96).
Seltzer, M., Endo, Y., Small, C., Smith, K. (1996), Dealing with Disaster: Surviving Misbehaving Kernel Extensions, Proceedings of the Second Symposium on Operating System Design and Implementation, Seattle, WA, October.


51

Some Futuristic Trends in Data Mining Virendra Singh Kushwah Nitin Paharia

Trends in data mining include further efforts towards the exploration of new application areas and new methods for handling complex data types, algorithm scalability, constraint-based mining and visualization methods, the integration of data mining with data warehousing and database systems, the standardization of data mining languages, and data privacy protection and security. This paper focuses on three important data mining trends and compares them with classical data mining.

Keywords: trends, web data mining, spatial data mining, temporal data mining

INTRODUCTION

The diversity of data, data mining tasks, and data mining approaches poses many challenging research issues in data mining. The design of data mining languages, the development of efficient and effective data mining methods and systems, the construction of interactive and integrated data mining environments, and the application of data mining techniques to solving large application problems are important tasks for data mining researchers and for data mining system and application developers (Han, 2004). Data mining methodologies find hidden patterns in large sets of data to help explain the behavior of one or more response variables. Unlike other methods, such as traditional statistics, there is no preconceived model to test; a model is sought using a pre-set range of explanatory variables that may have a variety of different data types and include "outliers" and missing data. Some variables may be highly correlated, and the underlying relationships may be nonlinear and include interactions. Some data mining techniques involve the use of traditional statistics, others more exotic techniques such as neural nets, association rules, Bayesian networks, and decision trees (Dunham, 2003).

PRIMARY AIMS FOR DATA MINING

The primary aims of data mining are as follows:

1. The primary aim of many data mining applications is to better understand the customer and improve customer services.
2. Some applications aim to discover anomalous patterns in order to help identify, for example, fraud, abuse, waste, or terrorist suspects.
3. In many applications in private enterprises, the primary aim is to improve the profitability of the enterprise.
4. In some government applications, one of the aims of data mining is to identify criminal and fraudulent activities.
5. In some situations, the aim of data mining is to find patterns that simply could not be found without its help.

CURRENT USAGE OF DATA MINING

Data mining is being used extensively in the retail, finance, security, medicine, and insurance industries. Many of these applications were built originally as expert systems. Today, a subset of such systems can be built automatically using data mining technology to derive rules from contextual historical data (Kittler, 1999). This has opened up new vistas of modeling heretofore deemed impractical because of the volumes of data involved and the transient nature of the models, especially in retail. Some applications in these industries have analogies in manufacturing. As in the medical industry, for example, the use of data mining in manufacturing requires that the derived models have a physical understanding associated with their components. It is not enough to predict system behavior, since the underlying root cause of the phenomena usually needs to be identified to drive corrective actions. Also, as with medicine, most applications in manufacturing are diagnostic systems. Manufacturing groups would like to know the circumstances under which they will encounter certain types of process control excursions, equipment events, and final quality levels. Having a system that can crunch historical data and establish related models improves response time when such events occur and follow-up is needed (Prabhu, 2007). There are three common research issues posing challenging activities in data mining:

1. Web data mining
2. Spatial data mining
3. Temporal data mining

Web Data Mining (WDM)

Web mining is the mining of data related to the World Wide Web (Fürnkranz, 2004). This may be data actually present in Web pages or data related to Web activity. Many of these systems are based on machine learning and data mining techniques. Just as data mining aims at discovering valuable information that is hidden in conventional databases, the emerging field of Web mining aims at finding and extracting relevant information that is hidden in Web-related data, in particular in (hyper-)text documents published on the Web. Like data mining, Web mining is a multi-disciplinary effort that draws techniques from fields like information retrieval, statistics, machine learning, natural language processing, and others. Web mining is commonly divided into the following three sub-areas:

Figure 1: Web data mining and its sub-areas (Web content mining, Web structure mining, and Web usage mining)

1. Web content mining (WCM): The application of data mining techniques to unstructured or semi-structured text, typically HTML documents. It deals with discovering useful information or knowledge from Web page contents and goes well beyond using keywords in a search engine. In contrast to Web structure mining and Web usage mining, Web content mining focuses on the Web page content rather than the links. Web content is a very rich information resource consisting of many types of information: unstructured free text, images, audio, video and metadata, as well as hyperlinks. The content of Web pages includes no machine-readable semantic information. Search engines, subject directories, intelligent agents, cluster analysis, and portals are employed to find what a user might be looking for. It has been suggested that users should be able to pose more sophisticated queries than just specifying keywords.

2. Web structure mining (WSM): The use of the hyperlink structure of the Web as an (additional) information source. It deals with discovering and modeling the link structure of the Web. Work has been carried out to model the Web based on the topology of the hyperlinks. This can help in discovering similarity between sites, in discovering important sites for a particular topic or discipline, or in discovering Web communities.

3. Web usage mining (WUM): The analysis of user interactions with a Web server. It deals with understanding user behavior in interacting with the Web or with a Web site. One of the aims is to obtain information that may assist Web site reorganization, or assist site adaptation to better suit the user. The mined data often consists of logs of users' interactions with the Web: Web server logs, proxy server logs, and browser logs. The logs include information about the referring pages, user identification, the time a user spends at a site, and the sequence of pages visited. Information is also collected via cookie files. While Web structure mining shows that page A has a link to page B, Web usage mining shows who or how many people took that link, which site they came from, and where they went when they left page B.

The three categories above are not independent, since Web structure mining is closely related to Web content mining and both are related to Web usage mining.
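As a concrete illustration of Web usage mining, the sketch below parses Common Log Format lines from a Web server log and derives two of the quantities discussed above: page popularity and per-visitor click paths. This is a minimal, hypothetical sketch: the log format handling, the status-code filter and the function name are illustrative assumptions, not a production log analyzer.

```python
import re
from collections import Counter

# Match the Common Log Format fields we need: host, timestamp, request, status.
LOG_RE = re.compile(r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
                    r'"(?P<method>\S+) (?P<page>\S+) [^"]*" (?P<status>\d+)')

def mine_usage(log_lines):
    """Return (page hit counts, ordered click path per visiting host)."""
    page_hits = Counter()
    paths = {}                      # host -> ordered list of pages visited
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m or m.group('status') != '200':
            continue                # skip malformed lines and error responses
        page = m.group('page')
        page_hits[page] += 1
        paths.setdefault(m.group('host'), []).append(page)
    return page_hits, paths
```

From the returned structures one can read off exactly the facts the text mentions: how many visitors took a given link, and the sequence of pages each visitor followed.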


Spatial Data Mining

The recent widespread use of spatial databases has led to studies of spatial data mining (SDM) and spatial knowledge discovery (SKD), and to the development of spatial data mining techniques (Shekhar et al., 2003). Traditional data mining methods assume independence among the studied objects and lack the ability to handle the inter-related nature of spatial data. Spatial data mining methods can be used to understand spatial data, discover relationships between spatial and non-spatial variables, detect the spatial distribution patterns of certain phenomena, and predict the trend of such patterns. The foundations of spatial data mining lie in spatial statistics and data mining. Spatial data mining tasks can be grouped into description, exploration, and prediction. To understand the data, spatial data and spatial phenomena first have to be described and analyzed, and hidden patterns and relationships among spatial or non-spatial variables have to be explored. Based on the current pattern of spatial distribution and the understanding of spatial relationships, the future state and trend of the spatial pattern and spatial distribution can be predicted (Tang, 2002). Spatial data mining techniques include, but are not limited to, visual interpretation and analysis, spatial and attribute query and selection, characterization, detection of spatial and non-spatial association rules, clustering analysis, and spatial regression.

Figure 2: Spatial data mining as the bridge between spatial data and knowledge

Spatial data mining differs from classical data mining in four categories, input, statistical foundation, output, and computational process, as shown in the table below. Spatial data mining is especially concerned with four important output patterns: predictive models, spatial outliers, spatial co-location rules, and spatial clusters.

Table: Difference between Classical Data Mining and Spatial Data Mining

                          Classical Data Mining               Spatial Data Mining
Input                     Simple types;                       Complex types;
                          explicit relationships              implicit relationships
Statistical Foundation    Independence of samples             Spatial autocorrelation
Output                    Set-based interest measures         Spatial interest measures
                          (e.g., classification accuracy)
Computational Process     Combinatorial optimization;         Computational efficiency opportunities
                          numerical algorithms                (spatial autocorrelation, plane sweeping);
                                                              new complexity: SAR, co-location
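The statistical foundation that separates spatial from classical data mining is spatial autocorrelation. As a rough, self-contained illustration, the sketch below computes Moran's I, a standard global measure of spatial autocorrelation, for attribute values laid out on a grid; the use of rook (edge) adjacency with binary weights is a simplifying assumption, not a prescription from any particular SDM system.

```python
from itertools import product

def morans_i(grid):
    """Global Moran's I for a 2-D grid of values, rook adjacency, binary weights."""
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    num = wsum = 0.0
    for r, c in product(range(rows), range(cols)):
        # accumulate cross-products over the four edge neighbours of each cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                num += (grid[r][c] - mean) * (grid[rr][cc] - mean)
                wsum += 1.0
    den = sum((v - mean) ** 2 for v in vals)
    return (n / wsum) * (num / den)
```

A spatially clustered pattern (similar values adjacent) gives a positive index, while a checkerboard (dissimilar values adjacent) gives a strongly negative one, which is exactly the dependence-between-neighbours that classical data mining's independence assumption fails to capture.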

For example, Oracle Spatial allows users and application developers to seamlessly integrate their spatial data into enterprise applications and fully leverage the scalability, reliability, and performance of Oracle8i. This means that spatial and attribute data can now be managed in one physical database, thereby reducing processing overhead and eliminating the complexity of coordinating and synchronizing disparate sets of data. Oracle Spatial enables traditional database customers to add useful spatial queries to their applications. It supports geomatics vendors who need to store, retrieve, and manage very large spatial databases containing hundreds of gigabytes of geodata. Spatial data, in turn, continue to increase in availability and volume with the widespread use of satellite data, aerial photography, GPS technology, digital cameras, image scanners and map digitizers. One-meter-resolution satellite data is more cost-effective than aerial photography for several applications. High-resolution satellite data has started playing a key role in agribusiness and the utility sector, for example enabling users to see everything from farm fields to urban manhole covers. Hence, the current trend is to organize and use digital databases consisting of satellite data, geocoded maps and census information in an integrated manner. These three databases, when modeled and integrated, provide tremendous impetus to the decision-making process for development activities.

TEMPORAL MINING

A database whose stored data reflect a single point in time is called a snapshot database; one in which data are maintained for multiple time points, not just one, is called a temporal database. Each tuple contains the information that is current from the date stored with that tuple to the date stored with the next tuple in temporal order. Temporal data mining is concerned with data mining of large sequential data sets (Laxman, 2006). By sequential data we mean data that is ordered with respect to some index. For example, time series constitute a popular class of sequential data, where records are indexed by time. Other examples of sequential data are text, gene sequences, protein sequences, lists of moves in a chess game, etc. (Aldana, 2000). Here, although there is no notion of time as such, the ordering among the records is very important and central to the data description/modeling. Time series analysis has quite a long history: techniques for statistical modeling and spectral analysis of real- or complex-valued time series have been in use for more than fifty years. Weather forecasting, financial or stock market prediction, and automatic process control have been some of the oldest and most studied applications of such time series analysis. Time series matching and classification have received much attention since the days speech recognition research saw heightened activity (Abbott, 2007). These applications saw the advent of an increased role for machine learning techniques like Hidden Markov Models and time-delay neural networks in time series analysis. Temporal data mining, however, is of more recent origin, with somewhat different constraints and objectives. One main difference lies in the size and nature of the data sets and in the manner in which the data is collected. Often temporal data mining methods must be capable of analyzing data sets that are prohibitively large for conventional time series modeling techniques to handle efficiently.
Moreover, the sequences may be nominal-valued or symbolic (rather than real- or complex-valued), rendering techniques such as autoregressive moving average (ARMA) or autoregressive integrated moving average (ARIMA) modeling inapplicable (Bertone, 2007). Also, unlike in most applications of statistical methods, in data mining there is little or no control over the data gathering process, with data often being collected for some entirely different purpose. For example, customer transaction logs may be maintained from an auditing perspective, and data mining would then be called upon to analyze the logs for estimating customer buying patterns. The second major difference between temporal data mining and classical time series analysis lies in the kind of information that we want to estimate or unearth from the data (Laxman, 2006). The scope of temporal data mining extends beyond the standard forecast or control applications of time series analysis. Very often, in data mining applications, one does not even know which variables in the data are expected to exhibit any correlations or causal relationships. Furthermore, the exact model parameters (e.g. the coefficients of an ARMA model or the weights of a neural network) may be of little interest in the data mining context. Of greater relevance may be the unearthing of useful (and often unexpected) trends or patterns in the data which are much more readily interpretable by, and useful to, the data owner. For example, a time-stamped list of items bought by customers lends itself to data mining analysis that could reveal which combinations of items tend to be frequently consumed together, or whether there has been some particularly skewed or abnormal consumption pattern this year (as compared to previous years), etc. (Laxman, 2006).
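The time-stamped market-basket example can be made concrete in a few lines. The sketch below groups time-stamped transactions by period and reports the item pairs frequently bought together in each period, so that one period's pattern can be compared against another's; the transaction format, the period granularity and the support threshold are illustrative assumptions, not a full episode-mining algorithm.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support=2):
    """transactions: iterable of (period, set_of_items).
    Returns {period: set of item pairs bought together >= min_support times}."""
    by_period = {}
    for period, items in transactions:
        counts = by_period.setdefault(period, Counter())
        for pair in combinations(sorted(items), 2):   # canonical pair order
            counts[pair] += 1
    return {p: {pair for pair, c in cnt.items() if c >= min_support}
            for p, cnt in by_period.items()}
```

Comparing the returned sets across two periods surfaces exactly the kind of shift described above: a pair frequent last year but absent this year indicates a skewed or abnormal consumption pattern worth investigating.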

CONCLUSION AND FUTURE WORK

This paper has focused on present trends in data mining: web mining, spatial mining and temporal mining. It has outlined the primary aims and current usage of data mining, and made a brief comparison of these trends with classical data mining. One hot issue not covered here is information security in data mining.

References

Jiawei Han and Micheline Kamber (2004), Data Mining: Concepts and Techniques, pp. 478-481.
S. Prabhu and N. Venkatesan (2007), Data Mining and Warehousing, pp. 52-53.
Richard Kittler and Weidong Wang (1999), The Emerging Role for Data Mining.
Johannes Fürnkranz (2004), Web Mining.
Margaret H. Dunham (2003), Data Mining: Introductory and Advanced Topics, pp. 195-220, 221-244, 245-273.
Hong Tang and Simon McDonald (2002), Integrated GIS and Spatial Data Mining Technique for Target Marketing of University Courses.
Shashi Shekhar, Pusheng Zhang, Yan Huang and Ranga Raju Vatsavai (2003), Trends in Spatial Data Mining.
Srivatsan Laxman and P. S. Sastry (2006), A Survey of Temporal Data Mining.
Dean Abbott (2007), Data Mining and Predictive Analytics.
Walter Alberto Aldana (2000), Data Mining Industry: Emerging Trends and New Opportunities.
Alessio Bertone (2007), A Matter of Time: Machine Learning and Temporal Data Mining.


52

A Mathematical Study of Effect of Second Degree Burn in Dermal Parts of Human Body Surabhi Sengar

The thermoregulatory mechanism in a human body is devised in such a way that the impact of environmental changes within certain limits is minimized. Thermal energy is exchanged within the human body in several ways. This problem is related to second degree burns resulting in a major burn injury. Second-degree burns involve the superficial (papillary) dermis and may also involve the deep (reticular) dermis layer. In such burns, the superficial epidermis layer and the deeper dermis are affected largely. Due to such burns, physiological parameters such as the metabolic heat generation rate and the blood mass flow rate increase because of increased capillary permeability. In view of the above, the present chapter makes an attempt to evaluate the effect of second degree burn injury on the temperature distribution in the dermal layers of a human body using a transient finite element model. The region under consideration is divided into three layers, and appropriate assumptions regarding the variation of parameters and boundary conditions have been made.

INTRODUCTION

A burn is an injury caused by heat, cold, electricity, chemicals, or radiation. First-degree burns are usually limited to redness (erythema), a white plaque, and minor pain at the site of injury. These burns usually extend only into the epidermis. Second-degree burns additionally fill with clear fluid, have superficial blistering of the skin, and can involve more or less pain depending on the level of nerve involvement (Arya and Saxena, 1986). Second-degree burns involve the superficial (papillary) dermis and may also involve the deep (reticular) dermis layer. Third-degree burns are those in which most of the epidermis is lost. They additionally show charring of the skin, and sometimes produce hard eschars. An eschar is a scab that has separated from the unaffected part of the body. These types of burns are often considered painless, because nerve endings have been destroyed in the burned areas; however, in reality a significant amount of pain is involved in a third-degree burn. Hair follicles and sweat glands may also be lost. Third-degree burns result in scarring; elastic banding of the skin can smooth the scarred skin. Third-degree burns over large surface areas are often fatal. Fourth-degree burns are burns in which most of the dermis is lost, often burning the muscle underneath. These burns usually present hard-to-reverse damage to the skin, and as a result there is little sensation in the burn area. These types of burns require hospitalization, and grafting is needed to close up the areas. Fifth-degree burns are burns in which most of the hypodermis is lost, charring and exposing the muscle underneath. Sometimes, fifth-degree burns can be fatal. Sixth-degree burns are burns in which almost all the muscle tissue in the area is burned away, leaving almost nothing but charred bone; often deadly, they are the highest in the burn category. A newer classification of "superficial thickness", "partial thickness" (which is divided into superficial and deep categories) and "full thickness" relates more precisely to the epidermis, dermis and subcutaneous layers of skin, and is used to guide treatment and predict outcome.

TRADITIONAL NOMENCLATURE

1. Superficial thickness (first-degree): epidermis involvement; erythema, minor pain, lack of blisters.
2. Partial thickness, superficial (second-degree): superficial (papillary) dermis; blisters, clear fluid, and pain. Partial thickness, deep (second-degree): deep (reticular) dermis; whiter appearance, with decreased pain; difficult to distinguish from full thickness.
3. Full thickness (third- or fourth-degree): dermis and underlying tissue and possibly fascia, bone, or muscle; hard, leather-like eschar, purple fluid, no sensation (insensate).
4. Serious burns, especially if they cover large areas of the body, can cause death; any hint of burn injury to the lungs (e.g. through smoke inhalation) is a medical emergency (Diller and Hayes, 1983).
5. Chemical burns are usually caused by chemical compounds, such as sodium hydroxide (lye), silver nitrate, and more serious compounds (such as sulfuric acid). Most chemicals (but not all) that can cause moderate to severe chemical burns are strong acids or bases. Nitric acid, as an oxidizer, is possibly one of the worst burn-causing chemicals. Hydrofluoric acid can eat down to the bone, and its burns are often not immediately evident. Most chemicals that can cause moderate to severe chemical burns are called caustic.
6. Electrical burns are generally symptoms of electric shock, such as being struck by lightning, or being defibrillated or cardioverted without conductive gel. The internal injuries sustained may be disproportionate to the size of the "burns" seen, as these are only the entry and exit wounds of the electrical current.

Scalding is a specific type of burning caused by hot fluids (i.e. liquids or gases). Examples of common liquids that cause scalds are water and cooking oil; steam is a common gas that causes scalds. The injury is usually regional and usually does not cause death. More damage can be caused if hot liquids enter an orifice. However, deaths have occurred in more unusual circumstances, such as when people have accidentally broken a steam pipe. Young children, with their delicate skin, can suffer a serious burn in a much shorter time of exposure than the average adult. Also, their small body surface area means even a small amount of hot or burning liquid can cause severe burns over a large area of the body (Pardasani and Saxena, 1988).

FORMULATION OF THE PROBLEM

If θ denotes the temperature at time t and position x, measured perpendicularly into the tissue from the skin surface, then the one-dimensional form of the bio-heat transfer equation is (Reddy, 1985):

    ρc ∂θ/∂t = ∂/∂x (K ∂θ/∂x) + m_b c_b (θ_b − θ) + S        (1)

where
    ρ   = density of the tissue,
    c   = specific heat of the tissue,
    K   = thermal conductivity of the tissue,
    m_b = blood mass flow rate,
    θ_b = temperature of the blood,
    S   = metabolic heat generation rate.

The boundary conditions are given by

    −K (∂θ/∂x) = h (θ − θ_a) + LE        at x = 0,

where θ_a is the ambient temperature, and θ(c, t) = θ_b, the body core temperature. The initial condition is taken as θ(x, 0) = 22.84 + λx, where λ is an unknown constant. Comparing equation (1) with the Euler-Lagrange equation, we get the following variational form:

    I^(i) = (1/2) ∫_0^c [ K (∂θ^(i)/∂x)^2 + m_b c_b (θ_b − θ^(i))^2 − 2Sθ^(i) + ρc ∂(θ^(i))^2/∂t ] dx
            + (h/2)(θ_0 − θ_a)^2 + LE θ_0        (2)

where θ_0 = θ(0, t) is the skin surface temperature, c is the total thickness of the skin and subcutaneous region, and x is measured along the direction perpendicular to the skin surface.


We assign the values θ_0, θ_1 and θ_2 to the temperature θ at the nodal points x = 0, x = a and x = b, where a is the thickness of the epidermis and b is the combined thickness of the epidermis and dermis. The value of θ at x = c is θ_3, which is assumed to be at body core temperature; hence θ_3 = θ_b is constant. Let θ^(r) (r = 1, 2, 3) denote the value of the temperature θ(x, t) in regions I, II and III, corresponding to the epidermis (I), dermis (II) and subcutaneous tissue region (III) respectively:

Epidermis (0 ≤ x ≤ a):

    θ^(1) = θ_0 + (θ_1 − θ_0) x/a,
    K = K_1,  m_1 = 0,  S_1 = 0.

Dermis (a ≤ x ≤ b):

    θ^(2) = (bθ_1 − aθ_2)/(b − a) + [(θ_2 − θ_1)/(b − a)] x,
    K_2 = (bK_1 − aK_3)/(b − a) + [(K_3 − K_1)/(b − a)] x,
    m_2 = [(x − a)/(b − a)] m_3 (1 + λ_1 e^(−α_1 t)),            0 < λ_1, α_1 ≤ 1,
    S_2 = [(x − a)/(b − a)] S_0 (1 + µ_1 e^(−β_1 t)) (1 − A),    0 < µ_1, β_1.

Subcutaneous tissue region (b ≤ x ≤ c):

    θ^(3) = (cθ_2 − bθ_3)/(c − b) + [(θ_3 − θ_2)/(c − b)] x,
    K = K_3,  m_3 = m (constant),  S_3 = S (constant).

Let I_1, I_2 and I_3 be the values of I in the three sub-regions, so that I = Σ I_i. Hence, using equation (2), we get for i = 1:

    I_1 = K_1 (θ_1 − θ_0)^2 / (2a)
          + ρc ∂/∂t [ a(θ_1 − θ_0)^2/6 + a(θ_1 − θ_0)θ_0/2 + aθ_0^2/2 ]
          + (h/2)(θ_0 − θ_a)^2 + LE θ_0.

Substituting θ^(2) and θ^(3) into (2) similarly yields I_2 and I_3. These expressions are lengthy polynomials in the nodal temperatures θ_1, θ_2 and θ_3, with time-dependent coefficients arising through the factors (1 + λ_1 e^(−α_1 t)) and (1 + µ_1 e^(−β_1 t)), and are not reproduced here. The integrals I_1, I_2 and I_3 are assembled to obtain

    I = Σ_{i=1}^{3} I_i.

Now I is extremized with respect to each nodal temperature, setting ∂I/∂θ_i = 0. Differentiating I with respect to each nodal temperature gives

    ∂I/∂θ_0 = γ_0 θ_0 + γ_1 θ_1 + γ_2 θ_2 + ∂/∂t (C_0 θ_0 + C_1 θ_1 + C_2 θ_2) = W_0,
    ∂I/∂θ_1 = φ_0 θ_0 + φ_1 θ_1 + φ_2 θ_2 + ∂/∂t (A_0 θ_0 + A_1 θ_1 + A_2 θ_2) = W_1,
    ∂I/∂θ_2 = η_0 θ_0 + η_1 θ_1 + η_2 θ_2 + ∂/∂t (B_0 θ_0 + B_1 θ_1 + B_2 θ_2) = W_2.

Taking the Laplace transform of these equations, we get

    (γ_0 + pC_0) θ̄_0 + (γ_1 + pC_1) θ̄_1 + (γ_2 + pC_2) θ̄_2 = D_1,
    (φ_0 + pA_0) θ̄_0 + (φ_1 + pA_1) θ̄_1 + (φ_2 + pA_2) θ̄_2 = D_2,
    (η_0 + pB_0) θ̄_0 + (η_1 + pB_1) θ̄_1 + (η_2 + pB_2) θ̄_2 = D_3.

p5 X0 + p4 X1 + p3 X2 + p2 X3 + pX4 + X5 θ0 = 6 p Y0 + p5Y1 + p4Y2 + p3Y3 + Y4 p2 + Y5 p p5O0 + p4O1 + p3O2 + p2O3 + pO4 + O5 θ1 = 6 p Y0 + p5Y1 + p4Y2 + p3Y3 + Y4 p2 + Y5 p p5V0 + p 4V1 + p 3V2 + p 2V3 + pV4 + V5 θ2 = 6 p Y0 + p5Y1 + p 4Y2 + p 3Y3 + Y4 p 2 + Y5 p Now taking inverse Laplace transform of the following equations we get the required temperature −



θ0 = δ1 + δ2e^(−t) + e^(t/2) [δ3 cos(√3t/2) + ((2δ4 − δ3)/√3) sin(√3t/2)] + e^(−t/2) [δ5 cos(√3t/2) + ((2δ6 + δ5)/√3) sin(√3t/2)]

θ1 = τ1 + τ2e^(−t) + e^(t/2) [τ3 cos(√3t/2) + ((2τ4 − τ3)/√3) sin(√3t/2)] + e^(−t/2) [τ5 cos(√3t/2) + ((2τ6 + τ5)/√3) sin(√3t/2)]

θ2 = σ1 + σ2e^(−t) + e^(t/2) [σ3 cos(√3t/2) + ((2σ4 − σ3)/√3) sin(√3t/2)] + e^(−t/2) [σ5 cos(√3t/2) + ((2σ6 + σ5)/√3) sin(√3t/2)]
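The system solved above has the generic form C dθ/dt + K θ = W. As an illustrative cross-check only (the 3×3 matrices below are invented stand-ins, not the chapter's actual γ/C/A/B coefficients), such a system can also be integrated numerically; the nodal temperatures settle to the steady state K⁻¹W, as the Laplace-domain pole at p = 0 implies:

```python
import numpy as np

def solve_nodal_temperatures(C, K, W, theta_init, dt=0.01, steps=2000):
    """March C dθ/dt + K θ = W forward in time from θ(0) = theta_init."""
    theta = np.asarray(theta_init, dtype=float).copy()
    # Backward Euler: (C + dt·K) θ_{n+1} = C θ_n + dt·W
    lhs = C + dt * K
    for _ in range(steps):
        theta = np.linalg.solve(lhs, C @ theta + dt * W)
    return theta

C = np.eye(3)                        # hypothetical "capacitance" matrix
K = np.array([[ 2.0, -1.0,  0.0],    # hypothetical "conductance" matrix
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
W = np.array([1.0, 0.0, 0.5])        # hypothetical source vector

theta = solve_nodal_temperatures(C, K, W, np.zeros(3))
# For large t the transient dies out and θ approaches the steady state K⁻¹W.
print(theta)
print(np.linalg.solve(K, W))
```

The time-stepping route trades the closed-form pole analysis of the text for a simple iteration, which is often how such FEM systems are solved when the coefficient matrices are large.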

Numerical Values and Discussion: The following values of the physiological parameters have been taken:

K1 = 0.5×10⁻³ cal/cm·s·°C
K2 = 1.0×10⁻³ cal/cm·s·°C
K3 = 1.5×10⁻³ cal/cm·s·°C
mb cb = 0.525×10⁻³ cal/cm³·s·°C
S = 0.3×10⁻³ cal/cm³·s·°C
H = 3×10⁻³ cal/cm²·s·°C
L = 579 cal/gm
E = 0.16×10⁻³ gm/cm²·s
Tb = 37°C
a = 0.10, b = 0.35, c = 0.50
β1 = 12, µ1 = 15


DISCUSSION
The graphs and the calculations show that the tissue temperature increases with the source temperature and with time. The skin surface temperature increases more in the thicker SST region at higher source temperatures. Different numerical values of the various parameters have been used. Fig. 2 presents nodal values of the tissue temperature at different times. Tissue temperature increases with time; however, there is an initial decrease, which is an artifact of the finite element technique.


References
Arya, D. and Saxena, V.P. (1986), Temperature Variation in Skin and Subcutaneous Layers under Different Environmental Conditions – A Two Dimensional Study, Ind. J. Pure Appl. Math., 17(1), 84-99.
Bindra, J.S., Saxena, V.P. and Arya, D. (1986), Transient Heat Flow Problems in Dermal and Sub-dermal Tissues, Ind. J. Tech., 24, 71-75.
Chatterjee, C.C. (1984), A Text Book of Human Physiology, Medical Allied Agency, Tenth Edition, 519-521.
Diller, K.R. and Hayes, L.J. (1983), A Finite Element Model of Burn Injury in Blood Perfused Skin, ASME J. Biomech. Engg., 105, 300-307.
Knudsen, M. and Overgaard, J. (1986), Identification of Thermal Model for Human Tissue, IEEE Trans. Biomed. Eng., Vol. BME-33, 477-485.
Patterson, A.M. (1976), Measurement of Temperature Profiles in Human Skin, S. Afr. J. Sc., 72.
Pardasani, K.R. and Saxena, V.P. (1988), Temperature Distribution in Skin with Uniformly Perfused Tumor in Sub-dermal Tissues, Proc. Nat. Conf. Bio-Mechanics, IIT Delhi, Dec. 17-19, 163-172.
Ponder, E. (1962), The Coefficient of Thermal Conductivity of Blood and Various Tissues, J. Gen. Physiol., 45, 545-567.
Reddy, J.N. (1985), An Introduction to the Finite Element Method, McGraw Hill, 13-124.


53

Data Quality For Business Intelligence
Virendra Singh Kushwah, A. K. Solanki

Does your data have the quality required for Business Intelligence? Business intelligence cannot function without data, but the pressing question is whether that data is of adequate quality. Currently, most data quality measures are developed on an ad hoc basis to solve specific problems, and the fundamental principles necessary for developing usable metrics in practice are lacking. In this chapter, some principles that can help organizations develop usable data quality metrics are described. Data problems in data definition, data content, data preparation and data presentation can cause business intelligence processes to fail. The chapter focuses on data quality and its measurement principles, because the modern BI environment must address an array of challenges.
Keywords: Business Intelligence, Data Measurement, DQ principles

INTRODUCTION
Business intelligence (BI) is a business management term that refers to the applications and technologies used to gather, provide access to, and analyze data and information about company operations. Business intelligence systems can give companies a more comprehensive knowledge of the factors affecting their business, such as metrics on sales, production and internal operations, and can help them make better business decisions. Business intelligence should not be confused with competitive intelligence, which is a separate management concept. For business intelligence, data is a must, but the quality of that data is a major concern. Problems in data definition, data content, data preparation and data presentation can cause business intelligence processes to fail. Here, some of the critical Data Quality (DQ) problems in collecting, preparing and presenting data for business intelligence have been identified, along with DQ principles for their mitigation or prevention. Data quality is the degree of excellence in a database. Quality is assessed relative to the database specification, which defines the desired level of generalization and abstraction. The quality of this specification, and its appropriateness for particular applications, can also be assessed.


RATIONALE FOR USING BI
Business intelligence applications and technologies can enable organizations to make more informed business decisions, and they may give a company a competitive advantage. For example, a company could use business intelligence applications or technologies to extrapolate information from indicators in the external environment and forecast future trends in its sector. Business intelligence is used to improve the timeliness and quality of information and to enable managers to better understand the position of their firm in comparison to its competitors. Business intelligence applications and technologies can help companies analyze the following: changing trends in market share, changes in customer behavior and spending patterns, customers' preferences, company capabilities and market conditions. Business intelligence can be used to help analysts and managers determine which adjustments are most likely to affect trends. BI systems can help companies develop consistent, "data-based" business decisions, producing better results than basing decisions on guesswork. In addition, business intelligence applications can enhance communication among departments, coordinate activities, and enable companies to respond more quickly to changes (e.g., in financial conditions, customer preferences, supply chain operations, etc.). When a BI system is well-designed and properly integrated into a company's processes and decision making, it may be able to improve a company's performance. Having access to timely and accurate information is an important resource for a company, as it can expedite decision-making and improve the customers' experience. In the competitive customer-service sector, companies need to have accurate, up-to-date information on customer preferences, so that the company can quickly adapt to changing demands (McKnight, 2001).
Business intelligence enables companies to gather information on trends in the marketplace and to come up with innovative products or services in anticipation of customers' changing demands. Business intelligence applications can also help managers to be better informed about actions that a company's competitors are taking. BI can likewise help companies share selected strategic information with business partners; for example, some businesses use BI systems to share information with their suppliers (e.g., inventory levels, performance metrics, and other supply chain data). BI systems can also be designed to provide managers with information on the state of economic trends or marketplace factors, or to provide managers with in-depth knowledge about the internal operations of a business.

DATA QUALITY ISSUES FOR BUSINESS INTELLIGENCE
Problems that hamper effective statistical data analysis stem from many sources of error introduction. First, data may not be clearly or accurately defined, causing a mismatch between the definition and the actual facts collected. Data can be captured inaccurately, or samples can be biased in record selection. Data quality decay causes data to become inaccurate when the characteristic of a real-world object changes. For example, if the price of an item changes, updated price values must be captured to assure the integrity of the analysis. It is vital for the analyst to have or to conduct a data quality assessment to assure accuracy – not just validity – and completeness of data early in data preparation, to allow time for any correction initiatives and preparation for mining. Data preparation failure occurs when data is transformed in a way that cannot be analyzed correctly by the data mining tools (English, 2005). Finally, presentation graphics or displays may not clearly convey the significance of the discovered patterns. Some examples:


Clear, Correct, Complete Data Definition
An example of poor data definition comes from a university survey about student enrolment numbers. One attribute, with sample value "2423656," was defined as "Phone Number," but it actually held an enrolment number; other collected values, such as "MCA400," "B5000," and "C9999," were likewise enrolment numbers. Due to the lack of clarity in the definition and the absence of any data formatting, this data had to be manipulated and transformed before it could be analyzed properly.

DQ principle
Define data with business subject-matter experts. Develop a consensus standard for values or data formats. Provide training to information producers. Assess DQ for conformance to standards.
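As a sketch of the last step, assessing conformance to an agreed format, a simple rule-based check can flag non-conforming values. The pattern below is a hypothetical consensus standard for enrolment numbers (an optional alphabetic prefix followed by digits), invented purely for illustration:

```python
import re

# Hypothetical consensus format for enrolment numbers: up to three uppercase
# letters followed by 3-7 digits. Illustrative, not a real institutional rule.
ENROLL_FORMAT = re.compile(r"^[A-Z]{0,3}\d{3,7}$")

def assess_conformance(values):
    """Return the share of values conforming to the standard, plus offenders."""
    offenders = [v for v in values if not ENROLL_FORMAT.match(v)]
    rate = 1 - len(offenders) / len(values)
    return rate, offenders

# The sample values from the survey example, plus one stray phone entry.
values = ["MCA400", "B5000", "C9999", "2423656", "phone: 555-0101"]
rate, offenders = assess_conformance(values)
print(rate, offenders)  # 0.8 ['phone: 555-0101']
```

Reporting the conformance rate per data element over time is one concrete way to "assess DQ for conformance to standards."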

Missing values
Data sources often contain observations that have missing values for one or more variables. Missing values can result from data collection errors, incomplete customer responses, actual system and measurement failures, or from a revision of the data collection scope over time, such as tracking new variables that were not included in the previous data collection schema. If an observation contains a missing value, then by default that observation is not used by modeling methods such as neural networks or linear regression. However, rejecting all incomplete observations may ignore useful or important information still contained in the non-missing variables. Rejecting all incomplete observations may also bias the sample, since observations that have missing values may have other things in common as well.

DQ principle
How should we treat missing data values? While there is no single correct answer, there are guidelines. The first and best choice is to go back to the original real-world object and collect the data if it is knowable, such as the birth date of a person, and if the time of collection does not conflict with the time of collection of the other data, such as temperature on a different day from the other data points. For events, such as measurements at a point in time, there must be a reliable recording of the event data to capture it with accuracy. Estimating the "best" missing value replacement technique requires assumptions about the true (missing) data. For example, if a variable's data distribution follows a normal population response, you may replace a missing value with the mean of the variable. Be aware that replacing missing values with the mean, median or another measure of central tendency is simple, but it can greatly affect a variable's sample distribution. Use these replacement statistics carefully and only when the effect is minimal. Another imputation technique replaces missing values with the mean of all other responses given by that data source, such as the exit poll responses at a specific precinct. This assumes that the input from that specific data source conforms to a normal distribution. Another technique studies the data to see if the missing values occur in only a few variables. If those variables are determined to be insignificant, the variables can be rejected from the analysis. However, the observations can still be used by the modeling nodes.


At some point there may be too much missing data for acceptable statistical analysis, and you may have to discard such attributes from the data. Another strategy is to use a modeling technique like decision trees, which handles missing values automatically. Finally, you may want to create a missing-value indicator attribute and use it as a candidate predictor in the model. The presence or absence of a value can itself be predictive.
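The mean-imputation and indicator-variable strategies discussed above can be sketched on toy records (the field names and values below are invented for illustration):

```python
import statistics

# Toy records with a missing 'income' value, represented as None.
rows = [
    {"id": 1, "income": 42000.0},
    {"id": 2, "income": None},
    {"id": 3, "income": 58000.0},
]

# Mean imputation: replace missing values with the mean of observed ones.
observed = [r["income"] for r in rows if r["income"] is not None]
mean_income = statistics.mean(observed)

for r in rows:
    # Indicator attribute: presence/absence of a value can itself be predictive.
    r["income_missing"] = r["income"] is None
    if r["income"] is None:
        r["income"] = mean_income

print(rows[1])  # income imputed to 50000.0, income_missing set to True
```

Note that the indicator column is created before imputation, so the information that a value was originally absent is preserved for the model.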

Inaccurate values
Inaccurate data can cause processes to fail; the higher the frequency of inaccuracy, the more severe the failure. Some variables, such as salary amount or age, can tolerate a degree of precision error without significant damage to trend discovery.

DQ principle
As with missing data, the best form of correction is to return to the real-world object to re-measure or discover the correct value.

Value synonyms
Where data does not have a standardized value set, there may be different data values that represent the same characteristic, such as the unit-of-measure synonyms "12," "Doz," and "Dz." This dilutes patterns involving the one-dozen unit of measure in an order unit: if the values were relatively evenly distributed among the three synonyms, the recorded frequency of each would represent only about one-third of all items whose real unit of measure is one dozen.

DQ principle
Identify and standardize the synonyms to a single value. This cannot be done arbitrarily; you must involve the business subject-matter experts. The real solution requires this to be standardized in the source processes and databases.
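The synonym-standardization principle can be sketched as a simple lookup agreed with the business experts. The mapping below uses the "12"/"Doz"/"Dz" example from the text; the standard values themselves are illustrative:

```python
# Synonym table built with business subject-matter experts (illustrative).
UOM_SYNONYMS = {"12": "DOZEN", "DOZ": "DOZEN", "DZ": "DOZEN", "EA": "EACH"}

def standardize_uom(value):
    """Map a raw unit-of-measure string to its agreed standard value."""
    key = value.strip().upper()
    # Unknown values pass through unchanged so they can be reviewed later.
    return UOM_SYNONYMS.get(key, key)

units = ["12", "Doz", "dz", "EA"]
print([standardize_uom(u) for u in units])  # all three dozen synonyms collapse
```

Running this in the source processes, rather than only at analysis time, is what the principle above ultimately calls for.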

Overloaded variable values
Often, data that is not controlled contains values that do not represent the characteristic the variable was designed for. Knowledge workers, in the absence of a well-designed database, may have to "force" new facts into existing data elements. These overloaded fields create problems in trend correlation because they represent a different characteristic of the object or event, which may bias the correlation of the original characteristic. For example, a Gender Code data element contained supposed "valid values" of "male," "female," "initials," "ambiguous," and "unknown." The last three values did not represent gender; they represented why a gender-assignment routine was not able to determine gender by looking at the first name of the person.

DQ principle
For overloaded variables that represent multiple characteristics important to trend identification, break them out into separate variables and assure you have the correct definitions. However, if the overloaded values are mutually exclusive, this will introduce missing data in both variables. In the Gender Code example above, if you were not able to contact the persons, you would have to provide a value of "unknown" and determine the impact of the missing data on your trend analysis.
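The splitting rule above can be sketched as follows; the function name and value sets mirror the Gender Code example but are otherwise illustrative:

```python
# The last three "valid values" of the overloaded field describe why gender
# could not be assigned, not gender itself (per the example in the text).
REASON_VALUES = {"initials", "ambiguous", "unknown"}

def split_gender_code(code):
    """Break the overloaded field into (gender, assignment_failure_reason)."""
    if code in REASON_VALUES:
        # Gender is genuinely missing here; the reason moves to its own field.
        return "unknown", code
    return code, None

print(split_gender_code("female"))    # ('female', None)
print(split_gender_code("initials"))  # ('unknown', 'initials')
```

After the split, each variable represents one characteristic, so correlations against gender are no longer biased by the failure-reason values.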

Currency
Currency represents the age of the data. Different trend analyses may require different ages of information. Identifying meaningful patterns requires having data of a common time period. For example, insurance policies have changes in business rules over time. You would not take policies in force 10 years ago and analyze them against the features of the comparable policies being sold today.

DQ principle
Understand the currency of all data required for a given model and assure that data selection fits the age requirements.

Concurrency
Concurrency is the timing difference in the equivalence of data between one data store and another, based on the movement of data between stores. Records should be equivalent in content once they reach a downstream data store. Data that is extracted from different databases may reach a given data set at different times. For example, orders reported today in the order fulfillment database will not be found in the historical order ODS (Operational Data Store) until tomorrow, because they are extracted and loaded nightly. Shipments are not loaded until the end of the week. Returns are processed and loaded only after the end of the month. This causes problems in bringing data together to study patterns when the time periods of the transactions differ. To handle concurrency issues you must assure that the data extracted from multiple data sets represents a single time period.

DQ principle
Establish extract schedules (or extract transactions based on dates) from the various databases that will assure that transactions represent events or objects at a single point in time or time period. Solve the root causes by minimizing unnecessary redundant databases and information float. Eliminate the need for moving data to another database if that data can support all processes across the life cycle, such as persons or organizations that may be in a state of "prospect," "active customer," "preferred customer" or "inactive customer." Maintain appropriate date and time stamps and relationships of events to assure correlation of returns to the orders for which they are returned.
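The single-time-period rule can be sketched as a filter applied to each extract before the data sets are brought together. The record layouts and dates below are invented, following the orders/shipments/returns example in the text:

```python
from datetime import date

# Toy extracts from three stores loaded on different schedules.
orders    = [{"id": "O1", "event_date": date(2009, 3, 2)},
             {"id": "O2", "event_date": date(2009, 3, 9)}]
shipments = [{"id": "S1", "event_date": date(2009, 3, 3)}]
returns   = [{"id": "R1", "event_date": date(2009, 2, 25)}]

def common_period(datasets, start, end):
    """Keep only records whose events fall inside one agreed time period."""
    return [[r for r in ds if start <= r["event_date"] <= end]
            for ds in datasets]

aligned = common_period([orders, shipments, returns],
                        date(2009, 3, 1), date(2009, 3, 7))
print([len(ds) for ds in aligned])  # [1, 1, 0]: only first-week events remain
```

Filtering on a maintained event-date stamp, rather than on load date, is what makes the cross-store correlation of orders to their returns reliable.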

DATA QUALITY MEASUREMENT AND ASSESSMENT
So far we have seen different data quality issues, but one question remains: how do you measure and assess your data? Measurement and assessment are, in some contexts, dependent on each other. The lack of data quality must be identified as a risk when deploying any insurance industry business intelligence environment. Data quality is the state of completeness, validity, consistency and timeliness that makes data appropriate for specific business decisions, such as insurance product premium determination and rating (Howard, 1998). There are two imperative steps to understanding data quality in the insurance industry:


Baseline Assessment
This assessment's first purpose is to quantify the quality level of the data fields destined for the business intelligence environment. The data quality assessment sample size should be 0.02% of the total policy population (checking for erroneous data and missing values). The best practice is to calculate data quality on each data element and report the findings in graphical form.
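Taking the stated 0.02% sampling rate at face value, the baseline sample size can be computed as below; the function name and the floor-of-one rule are illustrative assumptions, not part of the text:

```python
# Baseline-assessment sample size at the 0.02% rate suggested in the text.
def assessment_sample_size(total_policies, rate_percent=0.02):
    """Number of policies to sample, with a floor of one policy."""
    return max(1, round(total_policies * rate_percent / 100))

print(assessment_sample_size(5_000_000))  # 1000 policies
print(assessment_sample_size(100))        # floor rule kicks in: 1 policy
```

For small books of business the percentage rule alone yields a uselessly small sample, which is why some minimum sample size is worth enforcing in practice.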

Continued Monitoring
Data quality should be monitored on a scheduled basis. Monitoring should have two samples: new data and a repeated quality measurement of historical data. The same top 20 data elements included in the baseline assessment should be included in the continued monitoring. The baseline assessment and continued monitoring should include the following measurements for each data element:

Accuracy
(a) Accuracy is the inverse of error. Many people equate accuracy with quality, but in fact accuracy is just one component of quality (Rouse, 2007).
(b) The definition of accuracy is based on the entity-attribute-value model:
    - Entities = real-world phenomena
    - Attributes = relevant properties
    - Values = quantitative/qualitative measurements
(c) An error is a discrepancy between the encoded and actual value of a particular attribute for a given entity. "Actual value" implies the existence of an objective, observable reality. However, reality may be:
    - Unobservable (e.g., historical data)
    - Impractical to observe (e.g., too costly)
    - Perceived rather than real (e.g., subjective entities such as "neighborhoods")

Consistency
(a) Consistency refers to the absence of apparent contradictions in a database. Consistency is a measure of the internal validity of a database, and is assessed using information that is contained within the database.
(b) Attribute redundancy is one way in which consistency can be assessed:
    - The identification of an inconsistency does not necessarily imply that it can be corrected.
    - The absence of inconsistencies does not necessarily imply that the data are accurate.


Completeness
(a) Completeness refers to a lack of errors of omission in a database. It is assessed relative to the database specification, which defines the desired degree of generalization and abstraction (selective omission).
(b) There are two kinds of completeness:
    - Data completeness is a measurable error of omission observed between the database and the specification. Even highly generalized databases can be data complete if they contain all of the objects described in the specification.
    - Model completeness refers to the agreement between the database specification and the abstract universe that is required for a particular database application. A database is model complete if its specification is appropriate for a given application.
(c) Incompleteness can be measured in space, time or theme.
(d) Errors of commission can also be assessed. These errors can lead to over-completeness.

BUILDING BLOCKS FOR DATA QUALITY MANAGEMENT
This section defines how to manage your data to achieve quality for business intelligence. First, data management itself is defined:

Data Management
Data management ensures data integrity and availability through methodologies such as data warehousing, cleansing, profiling, stewardship, modeling and definition. Effective business decisions rely on data accuracy and reliability.
1. Data Profiling – Gaining an understanding of the existing data relative to quality specifications. This is your starting point from which improvement (and ROI) is measured. From this block you should be able to answer these questions: how complete is the data, and how accurate is it? Consider this your baseline measurement from which you base your data quality improvement.
2. Data Quality – Gaining an understanding of the causes of quality problems. This block relies heavily upon the usage of data quality technology. The results yield an analysis of the root causes of data quality problems and inconsistencies. Once these are known, you can then begin to "fix" the problems, choosing from one of four options.
3. Data Integration – Collapsing disparate versions of data into a single one. This block recognizes that the same data exists in multiple locations and systems, with variable content in each system. It is in this block that you standardize the multiple versions (e.g., customers, products, geographies, etc.) to a single version of the truth.
4. Data Enrichment – Incorporating additional external data to gain further insight. Here you combine your integrated internal customer, product or other data with third-party data to increase your understanding of your customers (e.g., their demographics, credit history, etc.), competitors, total industry sales, and so on.
5. Data Monitoring – The data management effort requires an investment, and an investment requires justification. Therefore, specific, tangible improvement measurements are often necessary to show the worth of this investment, which in turn requires appropriate tracking techniques. There are three categories of data monitoring techniques: data auditing, data trending, and data alerts and controls. Use these to determine if your efforts are indeed paying off.
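As a sketch of building block 1 (data profiling), the snippet below computes per-field completeness percentages over a handful of invented records; in practice the findings would be reported graphically, as suggested earlier:

```python
# Minimal data-profiling sketch: completeness per field, as a percentage.
# Field names and records are invented for illustration.
records = [
    {"customer_id": "C1", "email": "a@example.com", "phone": None},
    {"customer_id": "C2", "email": None,            "phone": "555-0100"},
    {"customer_id": "C3", "email": "c@example.com", "phone": "555-0101"},
]

def profile_completeness(records):
    """Return {field: % of records with a non-missing value}."""
    fields = records[0].keys()
    return {f: 100.0 * sum(r[f] is not None for r in records) / len(records)
            for f in fields}

print(profile_completeness(records))
# customer_id is fully populated; email and phone are each one-third missing
```

The same per-field figures, recomputed on a schedule over new and historical samples, double as the data-auditing input for building block 5.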

ACCESSING ACCURATE DATA FOR EFFECTIVE DECISION MAKING
According to a recent Business Intelligence Institute survey, more than 50% of the 750 respondents said their companies "suffered losses, problems or costs due to poor-quality data." Worse, nearly 80% cited "loss of credibility in system or application" as a consequence of poor-quality data. Another 65% said poor data led to customer dissatisfaction; 39% pointed to compliance problems; and 27% said it meant lost revenue (Claudia, 2005).

Threat Interception
Precision tools help you identify the risks, criminals and threatening persons you want to eliminate from your business, without burdening your operations with overwhelming false positives.

Governance, Risk & Compliance
Apply transparent, robust and repeatable data protocols that cover you with a complete audit trail of your compliance activities (Philip, 2006).

Sales & Marketing Effectiveness
Comprehensive data cleansing, appending and relationship-linking powers form the reliable foundation you need for crafting effective pricing, segmentation and promotion strategies.

Strategic Sourcing & Supply Chain Optimization
Robust business data libraries help you master millions of product and parts codes across various applications, systems and partner sources.

CONCLUSION
This paper has focused on data quality for business intelligence. The DQ principles described here show how to reduce data quality issues in BI so that BI can draw on data of the required quality. Data measurement and data assessment play an important role in obtaining data appropriate for BI. The paper has also discussed the data management building blocks through which BI manages data. Finally, some keys to accessing accurate data for effective decision making have been considered.

References
Howard Veregin (1998), Data Quality Measurement and Assessment, NCGIA Core Curriculum in GIScience, downloaded from http://www.ncgia.ucsb.edu/giscc/units/u100/u100.html on Feb 10, 2008.


Claudia Imhoff (2005), Reaping the Dividends of Data Quality, presented online in a Web seminar held on March 23, 2005.
Larry P. English (2005), Information Quality for Business Intelligence and Data Mining: Assuring Quality for Strategic Information Uses, Information Impact International Inc: Brentwood, TN.
McKnight, W. (2001), The CRM Ready Data Warehouse, DM Review [Online], Available: www.dmreview.com (2008, Feb. 14).
Christina Rouse (2007), Measuring Data Quality in the Eyes of an Actuary, Direct Marketing: An International Journal, 1(3), 161-171.
Russom, Philip (2006), Taking Data Quality to the Enterprise through Data Governance, What Works, Vol. 21, May.

54

Bluetooth Technology Versus Wi-Fi Technology
Archana Naik, Gurveen Vaseer

Bluetooth is an industrial specification for wireless Personal Area Networks (PANs). Bluetooth connects devices such as mobile phones, laptops, personal computers, printers, digital cameras, and video game consoles with each other over a secure, globally unlicensed short-range radio frequency. Bluetooth technology lives under the IEEE protocol 802.15.1. Bluetooth is embedded in many products such as phones, printers, modems and headsets. The technology is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with phones (i.e. with a Bluetooth headset) or byte data with hand-held computers (transferring files). Bluetooth simplifies the discovery and setup of services between devices. Bluetooth devices advertise all of the services they provide. This makes using services easier because there is no longer a need to set up network addresses or permissions as in many other networks. Wi-Fi, short for "wireless fidelity", is the term for a high-frequency Wireless Local Area Network (WLAN). Wi-Fi, a wireless technology, promotes standards with the aim of improving the interoperability of wireless local area network products based on the IEEE 802.11 standards. Common applications for Wi-Fi include Internet and VoIP phone access, gaming, and network connectivity for consumer electronics such as televisions, DVD players, and digital cameras. Our research aims at a comparative study of Bluetooth and Wi-Fi technologies and derives certain characteristic differences between them.
Keywords: WLANs (Wireless Local Area Networks), IEEE (Institute of Electrical and Electronics Engineers), Piconet (an ad hoc Bluetooth network with one master and one or more slaves).

INTRODUCTION
Wireless is a term used to describe telecommunications in which electromagnetic waves carry the signal over part or all of the communication path (David, 2002). Wireless technology is a method of data transmission using either infrared (IR) signals or Radio Frequency (RF). Infrared communications are generally less expensive and are designed for use over short distances (the communicating devices need a direct line of sight). Radio frequency communications are better suited to wider areas, often separated by partitions, support cells or roaming users, and can be used to augment existing local area networks (Miller, 2002).

Table 1: Showing Major Types of Wireless Networks

CDPD      Cellular Digital Packet Data
HSCSD     High Speed Circuit Switched Data
PDC-P     Packet Data Cellular
GPRS      General Packet Radio Service
1xRTT     1x Radio Transmission Technology
802.15.1  Bluetooth
MMDS      Multichannel Multipoint Distribution Service
LMDS      Local Multipoint Distribution Service
WiMAX     Worldwide Interoperability for Microwave Access
802.11    Wi-Fi

Two of the more popular wireless technology standards available are Bluetooth and the Institute of Electrical and Electronics Engineers' (IEEE) 802.11 standards, also known as Wi-Fi.

WI-FI
Wi-Fi stands for "Wireless Fidelity" and is used to describe wireless solutions that adhere to the 802.11 set of standards developed by the IEEE, the most popular being 802.11b (Ross, 2001). These standards operate using radio waves and have a range of up to 300 meters. Wi-Fi is an extension of the wired Ethernet and follows the same principles as its wired counterpart, thus providing its users with high-speed, reliable connections to the network. Wi-Fi is currently the standard for WLANs, which consist of multiple access points transmitting on a specific radio frequency that Wi-Fi enabled devices can use to connect to an organization's network (Miller, 2002).

Wi-Fi Technology
Wi-Fi is technically a trademarked brand name for the wireless standard owned by the Wi-Fi Alliance (Miller, 2002). Short for "wireless fidelity", Wi-Fi is one of the most popular wireless communications standards on the market. In its fledgling stages, Wi-Fi technology was almost solely used to wirelessly connect laptop computers to the internet via local area networks (LANs).

WIRELESS STANDARDS
The official name for the specification is IEEE 802.11, and it comprises more than 20 different standards, each denoted by a letter appended to the end of the name (Ross, 2001). The most familiar standards are 802.11b and 802.11g (Wireless B and G), which are used in the majority of commercial Wi-Fi devices. Both of these standards operate in the 2.4 GHz band, and the only major difference between the two is the transfer rate. Some consumer electronics, however, use a different standard, Wireless A (Ghanname, 2007). These devices operate within the 5 GHz range and have transfer rates equivalent to 802.11g. However, since they operate on different frequencies, devices using the 802.11a standard cannot communicate with B- and G-enabled devices (Miller, 2002). For this reason, it is important to check the compatibility of components with your wireless network prior to purchasing them.

Comparison of standards
The table below provides a brief overview of the three most popular current 802.11 standards, as well as information about the next version of Wi-Fi, 802.11n.

Table 2: Showing Comparative Features of Major Wi-Fi Systems

Standard   Frequency   Data Transfer Rate: Typical (Max)   Range (indoor)
802.11a    5 GHz       25 (54) Mb/sec                      about 10 m (30 ft)
802.11b    2.4 GHz     6.5 (11) Mb/sec                     30 m (90 ft)
802.11g    2.4 GHz     25 (54) Mb/sec                      30+ m (90+ ft)
802.11n    2.4 GHz     200 (540) Mb/sec                    50 m (150 ft)

The 802.11 family includes over-the-air modulation techniques that use the same basic protocol. The most popular are those defined by the 802.11b and 802.11g protocols, which are amendments to the original standard. 802.11a was the first wireless networking standard, but 802.11b was the first widely accepted one, followed by 802.11g and 802.11n.
1. QuickLogic CSSPs are available in a wide range of packages, from a tiny 6×6 mm package up to 12×12 mm. They have even worked with customers to produce specialized versions of the smallest package to accommodate a 4-layer board with a reduced I/O count. A CSSP, or Customer-Specific Standard Part, is a semiconductor whose menu of building blocks includes CE-ATA/IDE, ATAPI, managed NAND flash control, USB, SD/MMC controllers, SDIO/SPI, Bluetooth UART, and many more, along with the associated drivers for Linux, Windows Mobile, or CE (David, 2002).

2.

3G is the third generation of mobile phone standards and technology, superseding 2G. It is based on the International Telecommunication Union (ITU) family of standards under the International Mobile Telecommunications programme.

3.

4G (also known as beyond 3G), an acronym for Fourth-Generation Communications System, is a term used to describe the next step in wireless communications (Ross,2001). A 4G system will be able to provide a comprehensive IP solution where voice, data and streamed multimedia can be given to users on an “Anytime, Anywhere” basis, and at higher data rates than previous generations.


Key Drivers of Organizational Excellence

Figure 1: Showing the Block Diagram of Wi-Fi Technology [ref. Texas Instruments]

Basic Operations

When powered on, a Wi-Fi station scans the available channels to discover active networks where beacons are being transmitted. It then selects a network, either in ad hoc or infrastructure mode. In the latter case, it authenticates itself with the access point and then associates with it (Ghanname, 2007). If WPA security is implemented, a further authentication step follows, after which the station can participate in the network. Wi-Fi provides different degrees of quality of service, ranging from best effort to prioritized and, in infrastructure networks, guaranteed services. While part of a network, stations can keep discovering new networks and may disassociate from the current one in order to associate with a new one, e.g. because it has a stronger signal. Stations can roam this way between networks that share a common distribution system, in which case a seamless transition is possible. A station can sleep to save power, and when it finishes infrastructure-mode operation it can de-authenticate and disassociate from the access point.

BLUETOOTH

The name Bluetooth is derived from the cognomen of a 10th century king of Denmark, Harald Bluetooth. Bluetooth technology was invented by Ericsson in 1994, and four years later, in 1998, major mobile phone companies such as Nokia, Ericsson, Intel and Toshiba formed a group to promote the technology (David, 2002). Bluetooth uses radio-wave technology, which is inexpensive and has low power consumption. Bluetooth is a simple form of wireless networking that connects digital devices such as mobile phones, personal computers, PDAs, laptops, digital cameras, MP3 players and other Bluetooth-enabled devices into a small network.

Bluetooth Technology Versus Wi-Fi Technology


Bluetooth Technology

Bluetooth is a telecommunications industry specification that describes how mobile phones, computers, and personal digital assistants (PDAs) can be easily interconnected using a short-range wireless connection. Bluetooth requires that a low-cost transceiver chip be included in each device. The transceiver transmits and receives in a previously unused frequency band of 2.45 GHz that is available globally (Ghanname, 2007).

IEEE 802.15.1

IEEE 802.15.1 and Bluetooth are almost identical regarding the physical layer, baseband, link manager, logical link control and adaptation protocol, and host controller interface.

Figure 2: IEEE Std 802.15.1-2002 text

In Figure 2 the lines from the left-hand graphic clearly identify the shaded portions of this IEEE standard that consist of the text of the Bluetooth specifications. In most cases the shaded portions of this IEEE standard consist of unaltered or minimally altered text from the Bluetooth specifications. The approved IEEE 802.15.1 standard is fully compatible with the Bluetooth v1.1 specification. Bluetooth technology defines specifications for small-form-factor, low-cost wireless radio communications among notebook computers, personal digital assistants, cellular phones and other portable, handheld devices, and connectivity to the Internet.

COMPARISON OF WI-FI TECHNOLOGY AND BLUETOOTH TECHNOLOGY

Protocols Used: Wi-Fi uses the IEEE 802.11 standard, whereas Bluetooth uses the IEEE 802.15.1 standard.


Range: Wi-Fi networks have limited range. A typical Wi-Fi home router using 802.11b or 802.11g with a stock antenna might have a range of 32 m (120 ft) indoors and 95 m (300 ft) outdoors. Range also varies with frequency band: Wi-Fi in the 2.4 GHz frequency block has slightly better range than Wi-Fi in the 5 GHz frequency block. Outdoor range with improved (directional) antennas can be several kilometers or more with line-of-sight. Wi-Fi performance also decreases exponentially as the range increases. Wi-Fi is less reliable and slower than Ethernet or other cabled systems: 802.11g networks have a maximum of 54 Mbit/s, whilst cables can reach speeds of 1000 Mbit/s or more, so Wi-Fi is not suitable for servers or users who need fast network access, for example online gamers. Bluetooth is a standard and communications protocol primarily designed for low power consumption, with a short range (power-class-dependent: 1 meter, 10 meters, or 100 meters) based on low-cost transceiver microchips in each device. Bluetooth enables these devices to communicate with each other when they are in range. The devices use a radio communications system, so they do not have to be in line of sight of each other, and can even be in other rooms, as long as the received transmission is powerful enough (Oryl, 2007).

Class     Maximum Permitted Power    Range (approximate)
Class 1   100 mW (20 dBm)            ~100 meters
Class 2   2.5 mW (4 dBm)             ~10 meters
Class 3   1 mW (0 dBm)               ~1 meter

(Oryl, 2007). In most cases the effective range of Class 2 devices is extended when they connect to a Class 1 transceiver, compared with a pure Class 2 network. This is due to the higher sensitivity and transmitter power of the Class 1 device: its higher transmitter power allows more power to be received by the Class 2 device, while its higher sensitivity allows reception of the much lower power transmitted by the Class 2 device. This allows Class 2 devices to operate at much greater distances.

Security

Security is an important concern on any network (Ghanname, 2007), but it is especially so for Wi-Fi, where information travels back and forth through the air and is open to eavesdropping and interception by anyone within range. As a result, issues surrounding security come up in almost any discussion of implementing a WLAN. The method by which WLANs protect wireless data streams today is called Wired Equivalent Privacy, or WEP. Despite the implication of its name, WEP does not really provide privacy equivalent to that of a wired network: a wireless network is inherently less secure than a wired one because it eliminates many of the physical barriers to network access (Ghanname, 2007). The Bluetooth specification 1.0 describes the link encryption algorithm as a stream cipher using four LFSRs (linear feedback shift registers). The sum of the widths of the LFSRs is 128, and the specification says "the effective key length is selectable between 8 and 128 bits" (Oryl, 2007). This arrangement allows Bluetooth to be used in countries with regulations limiting encryption strength, and "facilitates a future upgrade path for the security without the need for a costly redesign of the algorithms and encryption hardware", according to the Bluetooth specification. Key generation and authentication appear to use the 8-round SAFER+ encryption algorithm. The information available suggests that Bluetooth security will be adequate for most purposes, but users with higher security requirements will need to employ stronger algorithms to ensure the security of their data.

BANDWIDTH, BANDWIDTH USAGE, MODULATION

Both protocols use a spread spectrum technique in the 2.4 GHz band, which ranges from 2.4 to 2.4835 GHz, for a total bandwidth of 83.5 MHz. Wi-Fi can also use the 5 GHz band (Ghanname, 2007). Bluetooth uses frequency hopping (FHSS) with 1 MHz wide channels, while Wi-Fi uses different techniques (DSSS, CCK, OFDM) with about 16 MHz wide channels. Frequency hopping is less sensitive to strong narrow-band interference that only affects a few channels, while DSSS is less sensitive to wide-band noise (Oryl, 2007). Both standards use ARQ at the MAC level, i.e., they retransmit the packets for which no acknowledgement is received. Since Wi-Fi always uses the same frequency, retransmitted packets only benefit from time diversity, while Bluetooth also takes advantage of frequency diversity, because of the frequency hopping. Future radio layers will likely use UWB for Bluetooth and MIMO for Wi-Fi.
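The hopping scheme can be illustrated with a short sketch. This is a toy model, not the hop-selection kernel from the Bluetooth specification (the real sequence is derived from the master's address and clock); the 79-channel numbering and the Wi-Fi block used below are assumptions added for illustration.

```python
import random

# The 2.4 GHz ISM band sliced into Bluetooth's 1 MHz hop channels.
CHANNELS = list(range(79))  # 2402..2480 MHz, one channel per MHz

def hop_sequence(seed, n_hops, bad=frozenset()):
    # Toy illustration only: the real hop sequence comes from the master's
    # address and clock, and adaptive frequency hopping marks noisy channels
    # (e.g. those under an active Wi-Fi BSS) as unusable.
    rng = random.Random(seed)
    usable = [c for c in CHANNELS if c not in bad]
    return [rng.choice(usable) for _ in range(n_hops)]

# A Wi-Fi channel occupies roughly 16-22 MHz, i.e. a block of adjacent
# 1 MHz Bluetooth channels; hopping around it sidesteps the interference.
wifi_block = set(range(30, 52))
hops = hop_sequence(seed=7, n_hops=1600, bad=wifi_block)  # ~1600 hops/second
```

With the noisy block excluded from the usable set, no hop in the sequence ever lands on a channel occupied by the Wi-Fi network.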

Power Consumption: Wi-Fi power consumption is fairly high compared with low-bandwidth standards such as Bluetooth, making battery life a concern. In principle Wi-Fi need not consume much more than Bluetooth, but in practice, with WPA security enabled, the consumption of a Wi-Fi module is higher than that of a Bluetooth module. The power requirements of Bluetooth devices are significantly lower than those of Wi-Fi devices, as was to be expected.

Speed: 802.11b Wi-Fi operates at 11 megabits per second, and the 802.11n standard will allow actual throughput rates of up to 100 megabits per second. Bluetooth operates at about 720 kbps, which is three to eight times the average speed of parallel and serial ports, respectively. This bandwidth is capable of transmitting voice, data, video and still images (Oryl, 2007).

Authentication: Wi-Fi defines two authentication methods:

- OSA (Open System Authentication): In OSA mode, the requesting station sends a frame to the AP asking for authentication, and the AP always grants authentication; two frames must be exchanged between the stations. This method provides no security and is the simplest, used for open access points (Ghanname, 2007).

- SKA (Shared Key Authentication): In SKA mode, the requesting station (initiator) sends a frame to the AP asking for authentication; the AP (authenticator) sends a 128-byte clear text, which the initiator encrypts by using a shared secret and sends back to the AP. Encryption is performed by XORing the challenge with a pseudo-random string formed from the shared secret and a public initialization vector (Oryl, 2007). The AP decrypts the text and confirms or denies authentication to the requester, for a total of four exchanged frames. This is a shared-secret authentication analogous to the one used in Bluetooth.
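The XOR step in SKA can be sketched in a few lines. The keystream function below is a hypothetical stand-in; real SKA uses the WEP RC4 keystream seeded by the shared secret and the initialization vector.

```python
import os

def keystream(secret, iv, length):
    # Hypothetical placeholder keystream; WEP actually derives an RC4
    # keystream from the shared secret and the public initialization vector.
    seed = secret + iv
    reps = length // len(seed) + 1
    return (seed * reps)[:length]

def ska_transform(text, secret, iv):
    # XOR the challenge with the pseudo-random string; because XOR is
    # its own inverse, the same call both encrypts and decrypts.
    ks = keystream(secret, iv, len(text))
    return bytes(a ^ b for a, b in zip(text, ks))

challenge = os.urandom(128)            # 128-byte clear text sent by the AP
secret, iv = b"shared-secret", b"IV1"
response = ska_transform(challenge, secret, iv)   # initiator encrypts
recovered = ska_transform(response, secret, iv)   # AP decrypts and compares
```

The AP grants authentication only when the decrypted response matches the challenge it sent, which proves the initiator holds the shared secret.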


Bluetooth provides a method for authenticating devices by means of a shared secret, called a link key, between the two devices (Ghanname, 2007). This link key is established in a special communication session called pairing, during which the link key is computed from the address of each device, a random number, and a shared secret (PIN). If both parties must be authenticated, the procedure is repeated in both directions. The shared secret can be entered manually the first time the devices are used, or it can be hardwired for paired devices that are always used together (Oryl, 2007). Pairing is a useful feature for devices that are usually used together.

WIRED APPROACH

Wi-Fi is the 'longwire' (network cable from desk to hub/server) wireless replacement technology. It is designed to allow users to log onto an office/business network without the need to attach physically via a network card (for portables, a LAN adapter may be built into either the PC or a docking station). Bluetooth is a 'shortwire' replacement for the mass of cables we use to connect 'personal' devices so they can share information. 'Personal' devices here means portable PCs, mobile telephones and headsets, PDAs, digital cameras, MP3 players and so on (Oryl, 2007).

DISCOVERY AND ASSOCIATION

Wi-Fi uses the Scan, Authentication, and Association procedures for discovering new devices in the coverage area and establishing new connections. The Scan procedure (whether in active or passive mode) is used for discovering the MAC addresses and other parameters of the Wi-Fi devices in the terminal's coverage area. In passive mode, the average time of the Scan procedure is 50 ms multiplied by the number of channels to probe (Ghanname, 2007). In active mode, the device sends a probe request frame and waits for a probe response from the stations that received the probe request. In this case the minimum discovery time, without external interference, in a network far from saturation, is equal to the time needed to transmit a probe request, plus a DCF Inter Frame Space interval, plus the transmission time of a probe response, multiplied by the number of channels to probe; that is, 3 ms at 1 Mb/s or 0.45 ms at 11 Mb/s. Bluetooth uses an Inquiry procedure and a Page scheme for discovering new devices in the coverage area and establishing new connections. The Inquiry procedure is periodically initiated by the master device to discover the MAC addresses of other devices in its coverage area. The master device uses a Page scheme to insert a specific slave into the Piconet, using the slave's MAC address and clock collected during the Inquiry procedure. In order to set up a Piconet with the maximum number of active slave devices (seven), an average time of 5 s for the Inquiry phase and 0.64 s for each Page phase (0.64 x 7 = 4.48 s) are necessary, thus requiring a maximum of 9.48 s, assuming no external interference.
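The discovery times quoted above reduce to simple arithmetic; the sketch below reproduces them (the 11-channel count used for the passive scan is an assumption for a 2.4 GHz deployment, not stated in the text):

```python
def wifi_passive_scan_time(num_channels, dwell_s=0.050):
    # Passive mode: dwell about 50 ms per channel listening for beacons.
    return num_channels * dwell_s

def bt_piconet_setup_time(num_slaves=7, inquiry_s=5.0, page_s=0.64):
    # One Inquiry phase, then one Page phase per slave (at most seven).
    return inquiry_s + num_slaves * page_s

scan_s = wifi_passive_scan_time(11)   # 11 channels: about 0.55 s
setup_s = bt_piconet_setup_time()     # 5 + 7 * 0.64 = 9.48 s
```

The contrast is stark: a full passive Wi-Fi scan completes in well under a second, while assembling a maximal Bluetooth Piconet takes nearly ten seconds.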

SPATIAL CAPACITY

We define spatial capacity as the ratio between aggregated data transfer speed and the transmission area used. Bluetooth, in a nominal range of 10 m, allows the allocation of 20 different Piconets, each with a maximum aggregate data transfer speed of around 400 kb/s.


Wi-Fi allows interference-free allocation of 4 different BSSes, each with an aggregate transmission speed of 910 kb/s in a nominal range of 100 m, or 31.4 Mb/s in a nominal range of 10 m. Thus, spatial capacity can be evaluated at roughly 0.1 kb/s per m² for 802.11g at minimum speed, 400 kb/s per m² at maximum speed, and 25 kb/s per m² for Bluetooth. It is important to note that these numbers are intended as a guideline only, since in real cases other factors, such as receiver sensitivity and interference, play a major role in determining the attainable data transmission speed.
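These figures follow directly from the definition above, aggregate transfer speed divided by transmission area; the circular coverage area is our simplifying assumption:

```python
import math

def spatial_capacity_kbps_per_m2(networks, per_network_kbps, range_m):
    # Aggregate data transfer speed divided by the (circular) coverage area.
    area_m2 = math.pi * range_m ** 2
    return networks * per_network_kbps / area_m2

bluetooth = spatial_capacity_kbps_per_m2(20, 400, 10)    # ~25 kb/s per m^2
wifi_min = spatial_capacity_kbps_per_m2(4, 910, 100)     # ~0.1 kb/s per m^2
wifi_max = spatial_capacity_kbps_per_m2(4, 31_400, 10)   # ~400 kb/s per m^2
```

Running the numbers reproduces the three estimates quoted in the text within rounding.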

BLUETOOTH AND WI-FI INTERFERENCE CASES

If Bluetooth and Wi-Fi operate at the same time in the same place, they will interfere (collide) with each other (Ghanname, 2007). Specifically, these systems transmit on overlapping frequencies, creating in-band colored noise for one another. The sidebands of each transmission must also be accounted for. Interference between Bluetooth and Wi-Fi occurs when either of the following is true:

- A Wi-Fi receiver senses a Bluetooth signal at the same time a Wi-Fi signal is being sent to it. The effect is most pronounced when the Bluetooth signal is within the 22-MHz-wide pass band of the Wi-Fi receiver.

- A Bluetooth receiver senses a Wi-Fi signal at the same time a Bluetooth signal is being sent to it. The effect is most pronounced when the Wi-Fi signal is within the pass band of the Bluetooth receiver.

It is worth noting that neither Bluetooth nor Wi-Fi was designed with specific mechanisms to combat the interference that each creates for the other. As a fast frequency-hopping system, Bluetooth assumes that it will hop away from bad channels, minimizing its exposure to interference. The Wi-Fi MAC layer, which is based on the Ethernet protocol, assumes that many stations share the same medium and, therefore, that if a transmission fails, it is because two Wi-Fi stations tried to transmit at the same time.

Data Rate: Wi-Fi outperforms Bluetooth and better supports the transfer of large amounts of information. However, as Wi-Fi link quality deteriorates (i.e. the further a station moves from the access point), the data rate decreases, down to the level of the Bluetooth data rate. And although its maximum communication speeds are lower than Wi-Fi's, Bluetooth's throughput is more than what is needed for payment transaction authorization, settlement transmission and software downloads (Ghanname, 2007). Bluetooth ensures high levels of security and performance while operating with very low power consumption. The base Bluetooth specification provides a data rate (1 Mbps) much slower than the Wi-Fi standards 802.11b (11 Mbps) and 802.11g (54 Mbps).

Primary Users: Wi-Fi is mainly useful for users in corporate offices, campuses, and business or conference venues. Bluetooth users include travelers, office and industrial workers, and consumer electronics users.

CONCLUSION

Bluetooth is one of the most secure wireless technologies available in the market, and Class 1 devices are capable of long-range radio coverage supporting a useful data rate. Bluetooth's architecture enables the configuration of a wireless network with up to seven active Bluetooth-enabled slave devices alongside a master. For customers who want a dedicated network for payment applications, Bluetooth is the selection of choice (Ghanname, 2007). Wi-Fi is the most popular radio technology for WLANs. This "Wireless Ethernet", now being implemented worldwide, not only offers good radio coverage and a good data rate, but also ensures secure transactions. So which technology is the best one? Both! It depends entirely on the application: Bluetooth for a dedicated payment network, or Wi-Fi to leverage an existing WLAN infrastructure.

References

Oryl, Michael (2007-03-15), Bluetooth 2.1 Offers Touch Based Pairing, Reduced Power Consumption, Mobile Burn, downloaded Feb 2008 from http://labs.daylife.com/journalist/michael_oryl?count=10&offset=20.
Ghanname, Taoufik (2007-02-14), How NFC Can Speed up Bluetooth Transactions-Today, Wireless Net Design Line.
Stallings, William (2005), Wireless Communications & Networks, Upper Saddle River, NJ: Pearson Prentice Hall.
Vainio, Juha T. (2000-05-25), Bluetooth Security, Helsinki University of Technology.
Becker, Andreas (2007-08-16), Bluetooth Security & Hacks (PDF), Ruhr-Universität Bochum.
Scarfone, K. and Padgette, J. (2008), Guide to Bluetooth Security (PDF), National Institute of Standards and Technology, retrieved on 2008-10-03.
Oates, John (2004-06-15), Virus Attacks Mobiles via Bluetooth, The Register.
Lasco.A, F-Secure Malware Information Pages, F-Secure.com.
Wong, Ford-Long, Stajano, Frank and Clulow, Jolyon (2005-04), Repairing the Bluetooth Pairing Protocol (PDF), University of Cambridge Computer Laboratory, retrieved on 2007-02-01.
Shaked, Yaniv and Wool, Avishai (2005-05-02), Cracking the Bluetooth PIN, School of Electrical Engineering Systems, Tel Aviv University.
Cambridge Evening News (2007), Phone Pirates in Seek and Steal Mission, archived from the original on 17 July 2007.

V. GENERAL MANAGEMENT

55

Data Management Issues in the Supply Chain Gazala Yasmin Ashraf

Supply Chain Management (SCM) is the coordination and management of a complex network of activities involved in delivering a finished product to the end-user or customer. It is a vital business function, and the process includes sourcing raw materials and parts, manufacturing and assembling products, storage, order entry and tracking, distribution through the various channels and, finally, delivery to the customer. Sarkis (1999) refers to the supply chain as a system that includes purchasing, inbound logistics, production, distribution (outbound logistics and marketing) and reverse logistics. The supply chain represents an integrated process wherein a number of business entities (i.e. suppliers, manufacturers, distributors and retailers) work together to acquire raw materials, convert them into specified final products and deliver the final products to retailers (Polenske, 1999). Today's supply chain is not just the primary processing mechanism of every manufacturing company: its multifaceted, multicompany, multinational structure makes it the most complex management challenge found in any enterprise. SCM no longer means just making sure that the right resources and the right materials move to the right place. Now it means ensuring that the entire chain of events involved in producing goods and distributing them to customers satisfies customers, minimizes costs and maximizes profit. Managing the supply chain in this fashion requires information, but merely pushing information about partners or products into a report that lands on a manager's desk every day will not achieve these goals. It requires delivering supply chain data in a way that enables managers to know whatever they need to know, whenever they need to, at whatever level of detail they need, and that allows them to analyze the data and take action based on the results of their analysis.

The essence of data management lies in the ability to know the location and status of all physical components, from raw materials to finished goods, as they move from suppliers through the stages of production to delivery to customers. This chapter identifies the data management issues in the supply chain and shows how to create a visible supply chain, which requires not only integrating IT elements but is essentially about business performance and end-to-end optimization.

Keywords: SCM, logistics, RL, multi-company, multinational, data management


INTRODUCTION

Supply Chain Management (SCM) is the coordination and management of a complex network of activities involved in delivering a finished product to the end-user or customer. It is a vital business function, and the process includes sourcing raw materials and parts, manufacturing and assembling products, storage, order entry and tracking, distribution through the various channels and, finally, delivery to the customer. Sarkis (1999) refers to the supply chain as a system that includes purchasing, inbound logistics, production, distribution (outbound logistics and marketing) and reverse logistics. The supply chain represents an integrated process wherein a number of business entities (i.e. suppliers, manufacturers, distributors and retailers) work together to acquire raw materials, convert them into specified final products and deliver the final products to retailers (Polenske, 1999). Supply chain literature has been concentrating on key processes in supply and distribution. Croom (2005) in a recent study proposed an evolutionary model:

Focus:    B2C, B2B, Re-Engg Process, B2X, Pipeline Transparency
Systems:  EDI/Email/Web, CRM, Resource Planning, E-Procurement, E-Logistics

Hvolby and Trienekens (2002) highlight the difficulty of integrating the supply chain towards demand:

- The market is characterized by the need for increasingly customized products and services.

- Products and services must be ready in an increasingly short time.

- Business risk is increased, due to market volatility.

To respond to these issues, companies must integrate their supply chain toward customers. Companies need a new fulfillment strategy, which will require business-process and technology synchronization across the entire chain. Today's supply chain is, of course, the primary processing mechanism of every manufacturing company. But it is more than that: its multifaceted, multicompany, multinational structure makes it the most complex management challenge found in any enterprise. Supply chain management no longer means just making sure that the right resources and the right materials move to the right place at the right time. Today it also means ensuring that the entire chain of events involved in producing goods and distributing them to customers satisfies customers, minimizes costs and maximizes profit. Managing a supply chain in this fashion requires information, but merely pushing information about partners or products into a report that lands on a manager's desk every day will not achieve these goals. Nor will a dashboard, even if it delivers that same information in real time.


What today's supply chain managers need instead is a supply chain that is visible. "Visibility" in this case means that data about the supply chain is delivered in a way that enables managers to know whatever they need to know, whenever they need to, at whatever level of detail they need, and that allows them to analyze the data and take action based on the results of their analysis. The essence of supply chain visibility is the ability to know the location and status of all physical components, from raw materials to finished goods, as they move from suppliers through the stages of production to delivery to customers. This chapter describes the characteristics of a visible supply chain, explains why it is important, and identifies the IT elements required for ensuring end-to-end optimization.

CONTRIBUTION/IMPORTANCE OF VISIBILITY

The key to success for any organization is being able to give customers:

- what they want,

- when they want it, and

- how they want it,

- all at the lowest cost.

This requires "real-time fulfillment" or "e-fulfillment", i.e. fulfillment supported by ICT. Business performance measurement relies on data that is readily accessible to managers, but data accessibility alone is not sufficient to make the supply chain visible. It is truly visible only if the data is accessible within a context that gives it meaning and makes it useful as part of a decision-making process. Context is especially important in today's information-rich enterprise environments, because it is easy to overload users with too much data, from too many sources, presented with too little context of how it relates to other data and to business processes and activities. In addition, most business decisions today require collaboration, so visibility also requires that the information be shared among colleagues. A visible system offers users an analytic framework within which they can work with their information. Business Intelligence (BI) systems are frequently used as the analytic tools of choice because they can pull data together from disparate sources and make sense of it, which is necessary in creating a visible information environment. The analysis enhances visibility by providing an additional context for the information.


For example, if a manager does an average unit cost analysis of a product component and sees that it is costing more than the minimum amount the organization contracted to pay, he or she can then take action to analyze and correct the problem. Analysis of the component-ordering pattern can determine where quantities can be adjusted to meet minimum pricing requirements. Then the company can use its enterprise resource planning (ERP) system to adjust the component ordering pattern and the manufacturing workflow for that product. Finally, visibility in systems includes the ability to act on the information and the analysis provided. If the data is accessible and available in a meaningful context, it can serve the needs of decision-making; but a visible environment is complete only if it also enables users to act on any decisions that are made.
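The unit-cost check described in this example can be sketched as follows; the order quantities, costs, and contracted price below are hypothetical:

```python
# Hypothetical order history for one component: (quantity, total_cost).
orders = [(100, 520.0), (250, 1300.0), (80, 432.0)]
contracted_unit_cost = 5.00  # price contracted at minimum order quantities

total_qty = sum(q for q, _ in orders)
avg_unit_cost = sum(cost for _, cost in orders) / total_qty

# Flag the component when the realized average exceeds the contract price;
# the next step would be to analyze the component-ordering pattern and
# adjust quantities (e.g. through the ERP system) to qualify for it.
over_contract = avg_unit_cost > contracted_unit_cost
```

Here small orders have pushed the realized average above the contracted price, so the component is flagged for a review of its ordering pattern.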

THE VISIBLE SUPPLY CHAIN

Today's international, multifaceted, multicompany, multiple-partner supply chain makes creating a visible information environment to support it both difficult and necessary. The complexity of supply chain structures and the amounts of data they generate create the need to implement a visible supply chain. Supply chain managers thus must find ways to make their diverse and far-flung manufacturing and distribution initiatives more visible. Other pressures to control supply chain processes come from both external and internal sources. Externally, one source of pressure is customer demand, particularly for products sold worldwide in markets that are very competitive. When operating at that scope and in such markets, visibility is essential, but gathering and managing the information that enables demand to drive the manufacturing and distribution processes is a complex job, because that information likewise is dispersed widely. Internally, cost pressures make finding low-cost suppliers and managing their participation in the supply chain a business imperative that can be executed most efficiently when the processes are managed through a visible supply chain. Implementing other cost containment programs, such as scrap minimization, efficient transport systems and inventory reduction, similarly requires a visible supply chain, as does creating and managing initiatives such as distribution programs that meet delivery goals, component quality initiatives and effective target marketing programs. Then there is the issue of aligning the supply chain itself with corporate strategy: to maximize the contribution that the supply chain makes to overall enterprise performance, supply chain decision-making has to be both deft and on target, which is another ongoing source of pressure.

Supply chain visibility can enable supply chain managers to meet these pressures, and the creation and maintenance of supply chain visibility should be part of an overall performance management strategy.

TECHNOLOGY PATHWAYS TO A VISIBLE SUPPLY CHAIN

A visible supply chain can be implemented using technologies that are readily available and in many cases already in place in manufacturing enterprises. Most supply chain data is managed in a combination of the ERP system and specialized software for supply chain management (SCM). To make these technologies work in a visible supply chain environment, key technologies must be added to the mix: dashboards and other tools that can track materials and product flow in the supply chain, and information access and business intelligence tools for analysis. Modern dashboard technology applied to a visible supply chain gives production and distribution resource managers the ability to see, both numerically and graphically, what is happening in their areas of responsibility. In a properly visible environment, managers can see their own data as well as data from processes that affect their work, even though they are not responsible for managing them. In either case, the dashboard user can drill down to find the root causes of the behavior of the monitored resources. Updated data from intra-enterprise ERP and supply chain systems often is available in real time or within one day. Recent research has shown that even where a local ERP system is in place, data often is not available to managers that quickly. This limits visibility into the supply chain and is a problem that should be corrected. When data from intra- and inter-company sources is not immediately available to the supply chain management system and the dashboards its managers use, the cause may be a delay involving interfaces to an external partner. The delay may also be the result of recent merger activity or a lag in the implementation of modern IT systems in certain divisions. One solution to the problem is to use portals, especially where role-based systems have portal interfaces that expose exactly the required information. Modern XML-based portals have transfer mechanisms built in, which enhance both their visibility and their usefulness in transaction processes. Where data access or transfer is obstructing visibility, mechanisms such as fully automated electronic data interchange (EDI) or XML-based systems may have to be added.

Unfortunately, in many instances the only available methods are less than fully automatic: spreadsheets, for example, and other analysis and reporting tools that require manual involvement to transfer the data. No matter what methods are used to gather the data, it must be available on managers' dashboards whenever they need it. If the data is not accessible in real time, the most recent data points must be available.

Figure: Levels of Integration
Unfortunately, in many instances the only available methods are less than fully automatic- spreadsheets, for example, and other analysis and reporting tools that require manual involvement to transfer the data. No matter, what methods are used to gather the data, it must be available on the manager’s dashboard whenever they need it. If the data is not accessible in real time, the most recent data points must be available. Levels of Integration

518

Key Drivers of Organizational Excellence

Business intelligence tools can provide analysis of the behavior of various supply chain processes and resources. That may include information as basic as performance volumes or costs measured against planned performance metrics, but it may also involve such subtleties as recalculating the resource relationships in the production equations used in manufacturing. Some BI tools can extrapolate, from the available data, estimates of data that is missing or not available in a timely way. Whatever data is available on a particular supply chain dashboard, whether at the top level or found by drilling down, is fair game for analysis using BI tools. Those tools take on greater value when the data is shifted into its proper contextual role. For example, a dashboard panel that shows tracking data for all components used in all products is generally less informative than one focused on the economics and requirements of a particular product. A dashboard containing materials and production data for that product alone will be more useful for visibility and analysis.
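The extrapolation idea mentioned above can be sketched with a simple least-squares trend line; this is an illustrative toy, not the method of any particular BI product, and the shipment figures are assumed:

```python
def extrapolate(history, periods_ahead=1):
    """Estimate a future or missing data point from a simple linear trend.

    history: list of observed values for consecutive periods.
    Returns the ordinary least-squares trend line projected
    periods_ahead steps past the last observation.
    """
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    # Ordinary least-squares slope and intercept.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# A shipment-volume feed that is one day late: estimate today's value
# from the last five observed days.
daily_volumes = [120, 126, 131, 138, 142]
print(round(extrapolate(daily_volumes), 1))  # -> 148.2
```

Real BI tools use far richer models (seasonality, causal factors), but the principle of filling a late data point from its recent history is the same.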

STRATEGISING FOR SUPPLY CHAIN VISIBILITY

Creating a visible supply chain begins with a project strategy and the people who will have to work with the system once it is implemented. Simply stating the goal of creating a visible supply chain is not an adequate project strategy; the strategy must begin with an assessment of the information and analysis systems already in place for your supply chain. It has to be determined whether they will be adequate to the job and what will have to be added to build them into useful components of a visible supply chain system. The people who are to be involved, including the relevant IT staff and the managers for the various parts of the enterprise supply chain who will use the resulting system, should jointly determine the system requirements and what parts of existing systems can be used to meet them. Among the specifications that must be mapped out are which portions of the supply chain most critically need to be visible, which managers deal with those segments, and which colleague managers need access to those parts of the supply chain. The system architects also have to determine, for each supply chain segment, the data elements to be displayed. Business intelligence tools and techniques must be specified for each supply chain element as well. The team also must agree on the nature of the decision-making process for the supply chain elements. And last but far from least, the team must determine the measures of success it will use to indicate when the visible supply chain project is complete and ready to be used, and that will tell it, going forward, how to improve the system on a continuous basis.

[Figure: Levels of Integration — from synchronization of internal processes within the company, through coordination of the supply chain, to integration of the value network, spanning suppliers, production, outsourcing, delivery and clients.]
Data Management Issues in the Supply Chain


CONCLUSION

Connecting supply chain partners via shared Virtual Enterprise software creates process integration, improves forecasting and product planning, and provides real-time access to order and shipment status, reducing manufacturing, distribution and sales costs. In a collaborative Virtual Enterprise environment, information must be shared among many companies, with participants adding, using and updating data as needed for the many roles they play within a value chain. Creating a visible supply chain is an undertaking not to be taken lightly. The project has IT elements, but essentially it is about business performance. Many of the building blocks for a visible supply chain are already in place at most modern organizations, but visibility must be created deliberately and thoughtfully if it is to serve as a competitive differentiator and a facilitator of improved profitability.

References

Croom, S. (2005), The Impact of E-Business on Supply Chain Management: An Empirical Study of Key Developments, International Journal of Operations and Production Management, 25 (2).

Hvolby, H. & Trienekens, J.H. (2002), Stimulating Manufacturing Excellence in Small & Medium Enterprises (Editorial), Computers in Industry, 49 (1), 1-2.

Polenske, Karen R. (1999), Economic Systems Research, Taylor and Francis Journals, 11 (4), 341-48.

Sarkis, Joseph (1999), A Methodological Framework for Evaluating Environmentally Conscious Manufacturing Programs, Computers and Industrial Engineering, 36 (4), 793-810.


56

Cross-Cultural Communication: A Golden Gate to International Business Girraj Verma Mala Issac

Technological advances in communication, travel, and transportation have made business increasingly global. There is a parallel need to improve global understanding and increase intercultural communication competence. Communicating effectively in the international environment is far more complex than knowing how to greet people from different cultural backgrounds; individuals should go beyond understanding observable behaviors and develop an understanding of the deep content at the center of real messages. This chapter focuses on the problems associated with cross-cultural communication and on how awareness of culture, language and communication techniques can increase the advantages of globalization.

INTRODUCTION

The world economy is becoming more and more globalized and internationalized, and this trend is expected to continue for the foreseeable future. Thus, the chances are good that you will have to communicate with people from other cultures. Cross-cultural communication supports international business, and international business is the business of the 2000s and beyond. Operating in a cross-cultural environment, individuals may consciously act like natives of a particular target culture at the surface level (Chaney, 2002), but unconsciously they are still inclined to exhibit their own cultural traits. Competent communicators recognize that each member of a culture is an individual, with individual needs, perceptions, and experiences, and should be treated as such. Each person interprets events through his or her mental filter, and that filter is based on the receiver's unique knowledge, experience, and viewpoints. When we talk about culture, we mean the customary traits, attitudes, and behaviors of a group of people; communication across such cultures is as often marked by misunderstanding as by satisfactory understanding. In the words of Confucius, "Human beings draw close to one another by their common nature, but habits and customs keep them apart." Culture is the main factor that separates human beings from


one another, and culture also creates the uniqueness of human beings (Axtell, 1985). Culture is what makes Easterners different from Westerners. Culture makes an Indian an Indian and a Chinese a Chinese. When communication takes place within the home culture, individuals are instinctively aware of the context and less likely to misinterpret the transmitted messages. However, when communicating with individuals from different language and cultural backgrounds without a shared context, the danger of miscommunication increases.

CULTURAL DIVERSITIES

People living in different countries have developed not only different ways of interpreting events; they also have different habits, different values, and different ways of relating to one another. These differences are a major source of problems when people of different cultures try to communicate (Axtell, 1985). Unfortunately, people tend to view the ways of their own culture as normal and the ways of other cultures as bad, wrong, or peculiar. The work of Ronen and Victor argues that real failure in the international business arena frequently results from people's inability to understand, or their lack of desire to interact with, those in diverse cultures; this is more prevalent than any lack of technical or professional skills. Hall states that "culture is communication and communication is culture." Knowledge of culture is essential to understanding the communication process. Naturally, an understanding of the behavior patterns and symbols used by different cultural groups plays a crucial role in communication and the acquisition of knowledge (Chaney, 2002). It has wide-ranging implications for both verbal and nonverbal communication. All communication is either verbal or nonverbal: verbal communication consists of sharing thoughts through the meanings of words, while nonverbal communication shares thoughts through all other means.

NONVERBAL ASPECTS OF INTERCULTURAL COMMUNICATION

Cultural diversities are most prominently visible in nonverbal communication. Very few nonverbal messages have universal meanings. Among the areas of nonverbal communication that can be misread are color, time, distance, touch, voice, body language, feelings, social behavior and clothing. Many think that the impression or image associated with certain colors crosses cultural boundaries. In fact, while red means danger to us, it may be associated with festive occasions in China; mourning is symbolized by black in Western countries but by white in the Far East (Martin, 2002). The language of time is as different among cultures as the language of words. Americans, Canadians, Germans and Japanese are very time-conscious and very precise about appointments. In some other cultures (especially those of the Middle East and some parts of Asia), people view time in a more relaxed way. It is easy to see how such different views of time can cause serious miscommunication problems between people from different cultures. People from different cultures also vary in their attitudes toward distance. In North America, people generally stand at some distance from each other while talking. In Middle Eastern states they come quite close together, while in Latin America they touch each other quite


frequently while communicating (Chaney, 2002). In the absence of that touch, communication will be poorer. Touching behavior is very culture-specific. Many Asians do not like to be touched, except for a brief handshake in greeting. However, handshakes in much of Europe tend to last much longer than in the United States and Canada, and Europeans tend to shake hands every time they see each other, perhaps several times a day. Similarly, in much of Europe men often kiss each other upon greeting; unless an American or Canadian businessman is aware of this custom, he might react inappropriately. The voice, even as it delivers English that is to be translated into another language, carries meaning. From the viewpoint of those from many cultures, Americans tend to speak too loudly and too much. They often do not give adequate time for a reply and fill uncomfortable silences with words. In some cultures, such as the Japanese, silence is not negative but rather may be a time for introspection. In some countries, custom may allow men to speak loudly and in a gruff voice, while women, if they speak in business settings at all, are to sound quiet, reserved and perhaps childlike. Indeed, attitudes toward women in business, for vocal and other reasons, can vary dramatically across cultures (Hall, 1959). Body positions and movements differ by culture, and the differences can affect communication. For example, in some cultures people sit; in others, they squat. We tend to view squatting as primitive, but how correct is this view? Actually, squatting is a very normal body position. Who is to say that sitting is more advanced or better? As we know, movements of certain body parts are a vital form of human communication. Some of these movements have no definite meaning even within a culture, but some have clear meanings, and these meanings may differ by culture. To us, an up-and-down movement of the head means yes and a side-to-side movement of the head means no.
Certain hand, finger and thumb movements and signals are understood and well received all over the world (Martin, 2002). For example, the 'V' signal made with the forefinger and middle finger to signify 'victory' is known and observed all over the world. But one must be careful about certain other signals that are likely to be misunderstood or differently interpreted in different cultures. For example, the 'Okay' sign made by placing the thumb and forefinger in an 'O' shape is an obscene gesture in Latin America and the Middle East, but a very common and positive gesture in the United States. This shows that, to a large extent, different meanings attach to similar signals in different cultures. Eye contact also varies among cultures. We tend to look a speaker in the eyes, perceiving the action to be one of openness and honesty. In other countries, such conduct may be interpreted as far too aggressive (Hall, 1959). Our feelings about space are partly an outgrowth of our culture and partly a result of geography and economics. For example, Americans and Canadians are used to wide-open spaces and tend to move about expansively, using hand and arm motions for emphasis. But in Japan, which has much smaller living and working spaces, such abrupt and extensive body movements are not typical. Likewise, Americans and Canadians tend to sit face to face so that they can maintain eye contact, whereas the Chinese and Japanese (to whom eye contact is not important) tend to sit side by side during negotiations. American business attire is widely defined even in the United States, where we stereotype the appearance of such professionals as bankers, advertisers, accountants, or artists. Even those stereotypes may be hazardous when we encounter "business casual". When we wear


our usual clothing in another country, we may find the colors too flamboyant, the weight uncomfortable for local conditions, or the length of a skirt or the absence of sleeves noticeably incorrect. Social behavior is very culture-dependent. For example, in Japanese culture, the matter of who bows first upon meeting, how deeply the person bows, and how long the bow is held is very dependent upon one's status (Martin, 2002). Competent communicators become familiar with such role-related behavior and also learn the customs regarding giving (and accepting) gifts, exchanging business cards, the degree of formality expected, and the accepted means of entertaining and being entertained.

VERBAL ASPECTS OF INTERCULTURAL COMMUNICATION

In the sphere of international business communication we come across vastly different ways of expression, both oral and written. The people on earth use more than 3,000 languages (Ronen, 1986). Because few of us can learn more than one or two other languages well, problems of miscommunication are bound to occur in international communication. Although English is the major language for conducting business worldwide, it would be naive to assume that it is the other person's responsibility to learn English. As a matter of fact, only about 8.5% of the world's population speaks English competently. This means that English-only speakers cannot communicate one-to-one with more than 90% of the people in this world (Lesikar, 2002). Within verbal communication, four areas deserve attention: jargon and slang, acronyms, humor, and vocabulary and grammar (Beamer, 2001). Jargon has more of a business orientation, but still contains phrases unique to American culture, such as "the bottom line" or delivering a "dog and pony show." Even though the words may translate directly into another language, the meanings often do not. Avoid jargon and slang. Acronyms, the initial letters of a series of words, also should be avoided. People from other countries may be unfamiliar even with such common American acronyms as CEO, R&D, or VP. Use the full version the first time, and perhaps each time. A third area of verbal difficulty is humor, which varies dramatically across cultures. Americans often stereotype British humor as understated and dry, or perceive Asians as sharing little humor (Ronen, 1986). The American businessperson is advised to avoid initiating humor. English is often the language of business, but not all foreign businesspeople are adequately prepared to handle faulty communication. Since written punctuation carries cues for how to speak, you should speak your punctuation: hesitate at commas, and speak in complete sentences.
Grammar and syntax should be used appropriately.

TIPS FOR COMMUNICATING ACROSS CULTURES

In intercultural communications, one has to use the following tips:

•	Have an understanding of one's cultural heritage and its impact on one's behavior.

•	Develop skills and increase confidence to communicate more effectively with foreigners.

•	Maintain a non-judgmental mind open to new ideas, and avoid assuming that the U.S. culture is the only correct or dominant one (Ronen, 1986).

•	Listen carefully to what is being communicated, trying to understand the other person's feelings.

•	Learn about your host country: its geography, form of government, largest cities, culture, current events and the like.

•	Command the language of the nonnative English speakers with whom you communicate. Avoid slang, jargon and other figures of speech.

•	Be aware of the problems caused by language differences.

•	Be specific and illustrate your points with concrete examples.

•	Ask questions carefully to make sure you are understood.

•	Check the accuracy of the communication with written summaries.

•	For important communications, consider back translation: the technique of using two translators, the first to translate from one language to the other and the second to translate back to the original.

•	Use a variety of media: handouts, audiovisual aids, models and the like.

•	Write or talk simply and clearly.

This chapter provides useful guidance for communicating with people from different cultures, both internationally and domestically. The foregoing examples illustrate only a few of the numerous differences that exist among cultures, and the communication techniques presented here should be modified to fit the culture involved. Because different meanings attach to similar signals in different cultures, business people must make constant, serious efforts to acquaint themselves with the cultural aspects of nonverbal communication across national and regional boundaries.

References

Axtell, R.E., ed. (1985), Do's and Taboos around the World, New York: John Wiley & Sons.

Beamer, Linda & Varner, Iris (2001), Intercultural Communication in the Global Workplace, New York: McGraw-Hill/Irwin.

Chaney, L.H. & Martin, J.S. (2000), International Business Communication, 2nd ed., Upper Saddle River, NJ: Prentice-Hall.

Dodd, C. (1987), Dynamics of Intercultural Communication, Dubuque, IA: Brown.

Ferraro, G. (1990), The Cultural Dimension of International Business, Englewood Cliffs, NJ: Prentice Hall.

Hall, E. (1959), The Silent Language, New York: Doubleday.

Lesikar, R.V. & Flatley, M.E. (2002), Basic Business Communication, New Delhi: Tata McGraw-Hill.

Ober, Scot (2000), Contemporary Business Communication, Chennai: All India Publishers & Distributors.

Penrose, John M., Rasberry, R.W. & Myers, R.J. (2004), Business Communication for Managers: An Advanced Approach, South-Western: Thomson Learning.

Ronen, S. (1986), Comparative and Multinational Management, New York: John Wiley & Sons.

Sinha, K.K. (1999), Business Communication, New Delhi: Galgotia Publishing Company.

Victor, D.A. (1992), International Business Communication, New York: HarperCollins.


57

Success: A Methodology To Design Effective E-Commerce Websites Hema Banati Monika Bajaj

The Internet is transforming the world's economy, and at the center of this revolution is technology. The focus of technology has shifted from the back office to the front line. The firm's relationship with its customers is changing from "face to face" to "screen to face" with the advent of commercial services on the Internet. The organization's user interface on the World Wide Web is the key component that makes the bricks store available through clicks for its users. This interface between e-merchants and web customers is provided by the organization's website: the web face of the organization. In an e-commerce environment where cutthroat competition exists among organizations, effective design and development of the website is therefore crucial. Various studies have assessed the impact of the design of a website's user interface on its success, finding that a user-friendly, usable interface helps the company reach millions of customers, market new products and promote public relations. Since e-commerce websites are designed to promote the company's image as well as its business, identifying the various factors that influence the success of the website is imperative. This chapter explores the influence of factors such as psychology, usability, cognition, code, emotions, security and social behavior on the SUCCESS of an e-commerce website, and presents a distinctive way of balancing these factors to enhance an organization's sales graph.

INTRODUCTION

The 21st century has witnessed a significant phenomenon: "e-commerce", commerce enabled by Internet technologies. This technology has shifted traditional business from the brick-and-mortar store to the clicks on websites. In an e-commerce environment where every business transaction is executed through the Internet, the website, as the gateway to the Internet, plays a crucial role. Websites are "virtual store fronts" and act as a vehicle for business-to-business and business-to-consumer transactions. The website is an interface between the companies


and the customers. On one side it enables the customer to access product and service information and make purchases online; on the other side it enables the company to reach millions of customers, reducing the cost incurred in providing services to the customer, leveraging advertising cost, promoting public relations and test-marketing new products and services. Websites that can react intelligently and provide a personalized response to each customer can be an effective strategy for increasing customer satisfaction. This influence of websites on the e-commerce environment has made it essential for all organizations to make their web presence felt in an effective manner in the e-world. In such a situation, the imperative question is "how to design a successful website?", a site that translates into more transactions for the organization, in effect transforming a casual visitor into a buyer. Since customers are crucial to business, the factors that influence the attitude of customers affect the success and failure of the site. Our approach is to explore the factors that affect the various stages in the journey from Internet user or traditional customer to regular online customer.

E-COMMERCE WEBSITE

An e-commerce website is affected by various factors from different disciplines. Figure 1 presents some of the factors from different disciplines and fields that can affect the e-commerce business.

Figure 1: Factors that affect the success of e-commerce websites.
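The chapter treats site success as a balance of such factors. One hypothetical way to reason about that balance is a weighted score per site; the factor weights and demo ratings below are purely illustrative assumptions, not empirical values:

```python
# Illustrative weights for the factors of Figure 1 (assumed, not empirical;
# they sum to 1.0 so the score stays on the same 0-10 scale as the ratings).
FACTOR_WEIGHTS = {
    "psychology": 0.15, "usability": 0.25, "cognition": 0.15,
    "code_of_ethics": 0.10, "emotion": 0.15, "security": 0.10, "social": 0.10,
}

def site_score(ratings):
    """Weighted sum of 0-10 factor ratings for one e-commerce site."""
    return sum(FACTOR_WEIGHTS[factor] * r for factor, r in ratings.items())

# Hypothetical ratings for a demo site: strong usability and security,
# weaker social features.
demo_ratings = {"psychology": 7, "usability": 9, "cognition": 6,
                "code_of_ethics": 8, "emotion": 7, "security": 9, "social": 5}
print(round(site_score(demo_ratings), 2))  # -> 7.45
```

A scheme like this makes trade-offs explicit: raising the usability rating moves the score more than raising the social rating, because of the assumed weights.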

Psychology

Psychology is an essential element of modern selling. Understanding what triggers people to buy makes the difference between a sale and a failure. Human beings are always afraid of


the risk caused by change to the current system. Shopping on the Internet is a new concept and is perceived to be quite risky compared to traditional shopping methods; there is bound to be much uncertainty regarding the value of the services it provides. Therefore, few customers are online customers. According to the Intermarket Group (1999), 64% of current Internet users have used the web to search for a product online but only 32% have actually made at least one purchase online. People are willing to change the current system if the new system will provide them more comfort and convenience. Internet users consistently rate the online shopping experience highly for aspects such as rich selection variety and shopping convenience (Williams Group, 1999). Various psychological factors affect the consumer's choice of online shopping; the most prominent is "motivation". A motive is an internal energizing force that orients a person's activity toward satisfying a need or achieving a goal (www.udel.edu/). Actions are affected by a number of motives, not just one. The most common motives that influence Internet users to become visitors of e-commerce websites are convenience, variety and enjoyment.

Convenience

Life today, especially in urban areas, has become more time-constrained. In the current environment, as people climb higher in their professional careers, the demands on their time increase, forcing them to look for retail formats where they have to spend the least time. The Internet proves to be the ideal solution: people can shop in the comfort of their home environment, saving time and effort, and they are able to shop anywhere, anytime. The services provided by the Internet tend to reduce the time the consumer spends on shopping (travel time, time spent parking, time spent walking from the parking to the store, time spent in checkout lines) either directly or indirectly. While retail stores aim at reducing these time costs, Internet stores go one step further by almost completely eliminating them. The only time component remaining is the time spent browsing the websites, which is possible whenever the customer has time. Therefore, a great attraction of online shopping is the convenience it affords.

Variety

The Internet provides an ocean of products from across the world, so a vast variety is at the disposal of the customer. It enables people from distant areas to enjoy products and services that are otherwise not available to them, providing the right kind of product at the right price.

Enjoyment

Another psychological factor is "enjoyment", which provides the intrinsic motivation for Internet shopping. Intrinsic value or "enjoyment" derives from the appreciation of an experience for its own sake (Holbrook, 1994). Enjoyment results from the fun and playfulness of the online shopping experience, rather than from shopping task completion. Childers et al. (2001) found "enjoyment" to be a consistent and strong predictor of attitude toward online shopping. If consumers enjoy their online shopping experience, they have a more positive attitude toward online shopping and are more likely to adopt the Internet as a shopping medium. E-commerce websites which provide a variety of products in less time and


make shopping a more enjoyable and pleasant experience, away from crowds, influence Internet users.

Usability

The success of online shopping and online transactions depends upon the quality of the website. Many websites are available, and users will be motivated to use those that enable them to perform their tasks effectively, efficiently and satisfactorily. "Effectiveness" means the completeness and accuracy with which users can achieve specified goals; "efficiency" means how quickly a task can be completed; "satisfaction" helps in retaining users and motivates them to revisit the website. Usability, the software quality attribute, caters to this aspect. According to ISO 9241-11 (Smith, 1996), usability is "the degree to which a product can be used by specific users to reach specific goals with efficiency, effectiveness and satisfaction in a given use context". In simple terms, we can define usability as the quality of a system that makes it easy to use, easy to learn, easy to remember and subjectively pleasing. On the web, usability is a necessary condition for survival. A highly usable site will not only provide satisfaction to the user but also earn goodwill for the organization and its stakeholders.
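Each ISO 9241-11 component can be measured in a usability test: effectiveness as task completion rate, efficiency as time-on-task against a target, and satisfaction from a post-task survey. A minimal sketch of combining such measurements; the equal weighting and the 1-5 survey scale are illustrative assumptions, not part of the standard:

```python
def usability_score(tasks, satisfaction_ratings, target_seconds):
    """Combine ISO 9241-11 components into a single 0-1 score.

    tasks: list of (completed: bool, seconds_taken: float) per test task.
    satisfaction_ratings: post-task survey answers on a 1-5 scale.
    target_seconds: expected time for a proficient user.
    """
    # Effectiveness: share of tasks completed successfully.
    effectiveness = sum(1 for done, _ in tasks if done) / len(tasks)
    # Efficiency: target time over mean actual time, capped at 1.0.
    mean_time = sum(t for _, t in tasks) / len(tasks)
    efficiency = min(1.0, target_seconds / mean_time)
    # Satisfaction: mean rating rescaled from the 1-5 scale to 0-1.
    mean_rating = sum(satisfaction_ratings) / len(satisfaction_ratings)
    satisfaction = (mean_rating - 1) / 4
    # Equal weights are an assumption; tune them per study goals.
    return (effectiveness + efficiency + satisfaction) / 3

# Hypothetical checkout-flow test: four tasks, one failure, mixed ratings.
checkout_tasks = [(True, 40), (True, 55), (False, 90), (True, 35)]
print(round(usability_score(checkout_tasks, [4, 5, 3, 4], 45), 2))  # -> 0.77
```

Tracking such a score across design iterations gives a concrete way to tell whether a redesign actually made the site easier to use.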

Cognition

Visitors to websites often make a spontaneous decision whether to stay or leave the site. This is the crucial step in converting visitors into buyers. The factors that affect the visitor's decision and compel them to stay and access the site are appeal and usefulness.

Appeal

Appeal is defined as an emergent property that comes about when different system attributes are experienced positively by the user. It can also be expressed as the system's likeability and has obvious consequences as far as acceptability is concerned (Egger, 1999). Thousands of electronic stores are ready for visits and purchases. A person might leave an e-commerce website right away without further exploration if he or she does not like it. This indicates that if a website cannot attract a user at first sight, it may be out of the game even if it is highly helpful and convenient to use. Even if a person spends a little while at the website, a negative first impression may bias his or her cognitive judgments and lead to avoidance behaviour (Anderson, 1981, 1982; Campbell and Pisterman, 1996; Fernandes et al., 2003; Lindgaard et al.; Mynatt et al., 1977). A commercial system that induces positive feelings will attract more users.

Usefulness

"Usefulness" is defined as the individual's perception that using the new technology will enhance or improve his or her performance (Davis, 1989, 1992). Here it refers to consumers' perceptions that using the Internet as a shopping medium enhances the outcome of their shopping experience. These perceptions influence consumers' attitude toward online shopping and their intention to shop on the Internet. By investing in a computer and learning to shop on the Internet, the consumer expects a desired result, such as a satisfactory and secure shopping experience, in return. If this return meets their expectations, consumers' perception of the "usefulness" of the Internet as a shopping medium will be positive. If online shopping meets this expectation by enabling the consumer to accomplish the shopping


task he or she has set out to perform, then consumers will judge the Internet shopping performance positively (Mathwick et al., 2002). This leads to positive perceptions regarding the usefulness of online shopping. An e-commerce website that makes users feel it is worthwhile to use the Internet in comparison to an ordinary brick-and-mortar store influences traditional customers to become online customers.

Code

A code of ethics is a self-regulated tool that provides businesses and Internet businesses with a well-respected means to foster consumer trust and confidence in the business and on the web. It provides businesses and online businesses with guidelines to help address important consumer protection issues raised by customers and e-commerce. A code of ethics for online businesses is designed to guide ethical business-to-customer conduct in electronic commerce. The code usually contains practical, performance-based guidelines and establishes goals for online businesses, such as: give proper credit to intellectual property; be honest and trustworthy; respect the privacy of others; honor property rights including copyrights and patents; be fair and take action not to discriminate; respect existing laws pertaining to professional work; and honor contracts, agreements, and assigned responsibilities. All these guidelines help in gaining the confidence of the customer and building the customer's trust in the company. Websites should adhere to such a code or policy to bolster consumers' faith.

Trust

Trust, in a social-psychological sense, is the belief that other people will react in predictable ways (Tang and Chi, 2005). In brief, trust is a belief that one can rely upon a promise made by another (Pavlou, 2003). In the context of e-commerce, trust beliefs include online consumers’ beliefs and expectancies about trust-related characteristics of the online seller (McKnight and Chervany, 2002). The analysis of trust in electronic commerce, however, should consider the relationship between the firm and the individual. For example, online consumers are required to share personal details (such as mailing address and telephone number) and financial information (such as credit card numbers), and they bear the risk that products or services will not match the description on the website, or that goods will be damaged during delivery. There is little assurance that customers will receive products or services comparable to the ones they ordered according to the description and image on the screen. Customers also do not know how the retailer will deal with the personal information collected during the shopping process. Trust is therefore an important factor in buyer-seller relationships in electronic commerce (Sonja and Ewald, 2003). Lack of trust is also one of the most frequently cited reasons for consumers’ unwillingness to purchase online (Lee and Turban, 2001). Empirical research has shown that trust increases customers’ intention to purchase a product from a company as well as their intention to return to that company (Jarvenpaa et al., 2000). An e-commerce website that follows a code of ethics increases customer trust.

Emotion

Customer loyalty is a crucial factor in the success and growth of a customer-centric business. If no customer is willing to revisit a website, its business value becomes zero regardless of its technical or managerial assets (Lee et al., 2000). A bad online shopping experience discourages customers from revisiting the site. Huang (2003) found that the emotions a customer experiences in a virtual shopping environment are positively related to his or her intention to explore or shop in that environment. The most common emotional factor that influences e-commerce customers to revisit a site is satisfaction. When prior online shopping experiences produced satisfactory outcomes and were evaluated positively, consumers continue to shop on the Internet in the future (Shim et al., 2001). Satisfied customers keep purchasing through the Internet, whereas dissatisfied customers discontinue subsequent purchases. Moreover, empirical studies of the relationship between customer satisfaction and loyalty in online and offline environments indicate that the two mutually reinforce each other, and that this relationship is further strengthened online (Shankar et al., 2003). Bhattacherjee’s (2001a, 2001b) model posits that confirmation and satisfaction are the primary determinants of the intention to repurchase. Confirmation is the customer’s assessment of perceived performance against original expectations, and it determines the extent to which those expectations are confirmed. Customer satisfaction, in turn, is formed from the customer’s confirmation level and the expectation on which that confirmation is based. The model hypothesizes that the intention to repurchase is determined by satisfaction, which is in turn influenced by confirmation; satisfied customers ultimately form the intention to repurchase. An e-commerce website that provides a pleasant experience encourages customers to revisit the site.

Security

As the surge of online consumers continues, e-commerce security is drawing attention from businesses and consumers alike. E-commerce security focuses on securing the collection, transmission, processing, and storage of the information used to run the business, including all data, network links, Internet systems, financial transactions, and exchanges of business documents. A high level of security and privacy in the online shopping experience has a positive effect on consumer trust, owing to the lower risk involved in exchanging information. In general, the level of trust, interpersonal as well as institutional, is positively related to consumers’ attitude and intention to shop on the Internet. Violation of consumers’ trust in online shopping, through privacy invasion or misuse of personal information, negatively influences attitudes toward online shopping and makes consumers reluctant to shop on the Internet on future occasions (Monsuwe et al., 2004). Customers expect to conduct secure online communications and to be protected from intrusion at no additional cost to them. They want guarantees (authentication) from merchants and other businesses that their websites are genuine, and they want this reaffirmed every time they go online. Lost consumer data files and disclosures of unauthorized access to sensitive personal data erode consumers’ confidence, and such compromises of faith and confidence in the web lead to a loss of reputation. Assurance that any information supplied will be appropriately safeguarded is therefore essential for the end user.

Social

People’s wants, learning, and beliefs are generally affected by subjective norms. Based on the Theory of Planned Behaviour, it is expected that subjective norm will influence consumers’ intentions to engage in online transactions. Subjective norm is the influence of a person’s normative beliefs that others approve or disapprove of a particular behaviour; beliefs arising from social pressure are termed normative beliefs (Ajzen, 1991). People’s intentions to perform a particular action are a function of subjective norm, that is, their perception that important others think they ought to do so. Subjective norm can be decomposed into (a) societal norm and (b) social influence (Pavlou and Chai, 2002):

- Societal norm refers to adhering to the larger societal fashion (the large circle of influence).
- Social influence reflects adhering to the opinions of family, friends, and peers (the small circle of influence).

Table 1: Influence of success factors on the various stages of the e-commerce website user

Stage        Factor       Sub-factors
----------   ----------   --------------------------------
Intention    Psychology   Convenience, Variety, Enjoyment
Intention    Usability    Ease of Use
Intention    Cognition    Usefulness, Appeal
Purchase     Code         Trust
Purchase     Security     Privacy
Purchase     Social       Societal Norms, Social Influence
Repurchase   Emotions     Satisfaction, Confirmation

Consumers may believe that their circle of influence, which includes reference groups, opinion leaders, social class, family, friends, and peers, favours certain online behaviours, and this belief tends to strengthen their positive intentions and behaviour towards online purchase. Reference groups and opinion leaders are themselves individuals who may have been satisfied by e-commerce, so individuals’ experiences influence the whole society. Satisfied customers with good experiences publicize the business by word of mouth; one satisfied person can motivate others, and society in turn constitutes the circle of influence. The cycle restarts, and the customer’s journey begins again with a positive attitude towards e-commerce. Table 1 summarizes the factors discussed above that affect online customers’ behaviour. Studying customer behaviour at different stages plays a significant role in the success of websites, and this study provides a balanced approach to improving the quality of an e-commerce website by considering the different factors that affect the customer at each stage. The table does not negate the influence of these factors on other stages; it merely stresses their notable effect in the stages mentioned. Figure 2 presents the way these factors help convert an online browser into a regular customer, thus affecting the SUCCESS of an e-commerce website.
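Read one way, Table 1's stage-to-factor mapping is simple structured data. The sketch below encodes one plausible reading of it; the stage assignments follow the chapter's narrative and are an interpretation, not a definitive rendering of the table:

```python
# Hypothetical encoding of Table 1: which success factors (and
# sub-factors) dominate each stage of the e-commerce user's journey.
# The stage assignments are an illustrative reading of the chapter.

SUCCESS_FACTORS = {
    "intention": {
        "psychology": ["convenience", "variety", "enjoyment"],
        "usability": ["ease of use"],
        "cognition": ["usefulness", "appeal"],
    },
    "purchase": {
        "code": ["trust"],
        "security": ["privacy"],
        "social": ["societal norms", "social influence"],
    },
    "repurchase": {
        "emotions": ["satisfaction", "confirmation"],
    },
}

def factors_for(stage):
    """Flatten every sub-factor said to matter at a given stage."""
    return [f for subs in SUCCESS_FACTORS[stage].values() for f in subs]

print(factors_for("purchase"))
```

A site audit could walk such a structure stage by stage, checking that each sub-factor is addressed before moving the visitor to the next stage.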


Figure 2: Relationship of the SUCCESS factors and the success of an e-commerce website

CONCLUSION

In an e-commerce environment, the website is an important medium for achieving the sales targets of the organization, and it represents the company’s image on the web. A website can help improve the goodwill of the organization as well as the loyalty of its customers, turning casual visitors into regular customers; such a website can be termed “successful”. This paper provides a methodology for designing a successful e-commerce website and a balanced approach that considers the different factors influencing the customer at different stages. Careful attention to this SUCCESS methodology while designing an e-commerce website leads to enhanced business. The methodology covers the relevant factors that directly affect customers’ attitudes, such as convenience, variety, enjoyment, ease of use, usefulness, appeal, trust, satisfaction, confirmation, privacy, and social influence. The impact of these factors on the customer’s journey, from inviting visitors, to influencing them to become customers, to encouraging customers to become regular customers, is also presented. The study is therefore relevant to e-commerce worldwide.

References

Ajzen, I. (1991), The Theory of Planned Behaviour, Organizational Behaviour and Human Decision Processes, Vol. 50, 179-211.

Anderson, N. H. (1981), Foundations of Information Integration Theory, London: Academic Press.

Anderson, N. H. (1982), Methods of Information Integration Theory, London: Academic Press.

Bhattacherjee, A. (2001a), Understanding Information Systems Continuance: An Expectation-Confirmation Model, MIS Quarterly, 25(3), 351-370.

Bhattacherjee, A. (2001b), An Empirical Analysis of the Antecedents of Electronic Commerce Service Continuance, Decision Support Systems, 32, 201-214.

Campbell, A. and S. Pisterman (1996), A Fitting Approach to Interactive Service Design: The Importance of Emotional Needs, Design Management Journal, Fall, pp. 10-14.


Childers, T.L., Carr, C.L., Peck, J. and Carson, S. (2001), Hedonic and Utilitarian Motivations for Online Retail Shopping Behaviour, Journal of Retailing, 77(4), pp. 511-535.

Davis, F. D. (1989), Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology, MIS Quarterly, 13, 319-339.

Davis, F.D., Bagozzi, R.P. and Warshaw, P.R. (1992), Extrinsic and Intrinsic Motivation to Use Computers in the Workplace, Journal of Applied Social Psychology, 22(14), pp. 1109-1130.

Egger, F.N. (1999), Human Factors in Electronic Commerce: Making Systems Appealing, Usable & Trustworthy, Graduate Students Consortium & Educational Symposium, 12th Bled International E-Commerce Conference, June 1999, Bled, Slovenia.

Fernandes, G., G. Lindgaard, R. Dillon and J. Wood (2003), Judging the Appeal of Web Sites, Proceedings of the 4th World Congress on the Management of Electronic Commerce, McMaster University, Hamilton, ON, January 15-17, 2003.

Holbrook, M.B. (1994), The Nature of Customer Value: An Axiology of Services in the Consumption Experience, in Rust, R.T. and Oliver, R.L. (Eds), Service Quality: New Directions in Theory and Practice, Sage, Newbury Park, CA, pp. 21-71.

Huang, M.H. (2000), Information Load: Its Relationship to Online Exploratory and Shopping Behaviour, International Journal of Information Management, 20(5), 337-347.

Inter Market Group (1999), One-Third of Internet Users Have Made Online Purchases, Internet Commerce Briefing, September, available at http://cyberatlas.internet.com/big_picture/demographics/article.html

Technical Committee ISO/TC 159 (1998), ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs), Part 11: Guidance on Usability.

J.C. Williams Group (1999), Canadian Online Retail Should Pass $1 Billion, National Retail Bulletin, available at http://cyberatlas.internet.com/big_picture/demographics/article.html

Jarvenpaa, S. L., Tractinsky, N. & Vitale, M. (2000), Consumer Trust in an Internet Store, Information Technology and Management, 1(1-2), 45-71.

Lee, J., Kim, J., and Moon, J. Y. (2000), What Makes Internet Users Visit Cyber Stores Again? Key Design Factors for Customer Loyalty, CHI Letters, 2(1), 305-312.

Lee, M. K. O. & Turban, E. (2001), A Trust Model for Consumer Internet Shopping, International Journal of Electronic Commerce, 6(1), 75-92.

Lindgaard, G., Fernandes, G.J., Dudek, C. and Brown, J. (2006), Attention Web Designers: You Have 50 ms to Make a Good First Impression!, Behaviour and Information Technology, 25(2), 115-126.

Mathwick, C., Malhotra, N.K. and Rigdon, E. (2002), The Effect of Dynamic Retail Experiences on Experiential Perceptions of Value: An Internet and Catalogue Comparison, Journal of Retailing, 78(1), 51-60.

McKnight, D. H. & Chervany, N. L. (2002), What Trust Means in E-Commerce Customer Relationships: An Interdisciplinary Conceptual Typology, International Journal of Electronic Commerce, 6(2), 35-59.

Monsuwe, T. P., Dellaert, B. G. C. and Ruyter, K. D. (2004), What Drives Consumers to Shop Online? A Literature Review, International Journal of Service Industry Management, 15(1), 102-121.

Mynatt, C. R., M. E. Doherty and R. D. Tweney (1977), Confirmation Bias in a Simulated Research Environment: An Experimental Study of Scientific Inference, Quarterly Journal of Experimental Psychology.

Pavlou, P. A. (2003), Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model, International Journal of Electronic Commerce, 7(3), 69-103.

Pavlou, P. A. and Chai, L. (2002), What Drives Electronic Commerce Across Cultures? A Cross-Cultural Empirical Investigation of the Theory of Planned Behaviour, Journal of Electronic Commerce Research, 3(4).

Shankar, V., Urban, G. L. & Sultan, F. (2002), Online Trust: A Stakeholder Perspective, Concepts, Implications, and Future Directions, Journal of Strategic Information Systems, 11(3-4), 325-344.


Shim, S., Eastlick, M.A., Lotz, S.L. and Warrington, P. (2001), An Online Repurchase Intentions Model: The Role of Intention to Search, Journal of Retailing, 77, 397-416.

Smith, W. (1996), ISO and ANSI Ergonomics Standards for Computer Products: A Guide to Implementation and Compliance, Prentice Hall, Saddle River, New Jersey.

Sonja, G. K. & Ewald, A. K. (2003), Empirical Research in On-Line Trust: A Review and Critical Assessment, International Journal of Human-Computer Studies.

Tang, T.-W. and Chi, W.-H. (2005), The Role of Trust in Customer Online Shopping Behavior: Perspective of the Technology Acceptance Model, Proceedings of NAACSOS, June 26-28, 2005, Notre Dame, Indiana, USA.

58

Value Based Management: A New Way For Organizational Excellence

Kulkarni Sharad Raghunath

Most companies today still operate according to Taylor’s top-down vision of the workplace. However, the advent of robotics, advanced information systems, and the globalization of production, marketing, and distribution is forcing a basic change in views of the role of the worker and the nature of the workplace. Because of global and technological change, organizations are recognizing that their survival and success will require changes in the way they “do business.” They have to seek new and more flexible ways of rewarding and motivating their workers while controlling costs and delivering ever-higher levels of value to their customers. They are also realizing that these objectives are obstructed by the adversarial nature of the surrounding economic and cultural environment. Businesses need a new way of thinking. This new way of thinking would not reject the critical role of systems, but would redesign systems to put people first. It would create a new management approach that re-humanizes the workplace. It would shift power, responsibility, and control over modern tools and advanced organizational systems from the few to every person affected by the process. One comprehensive approach developed for this purpose by the Center for Economic and Social Justice is called “Value-Based Management,” or “VBM.” This concept develops an ongoing ownership culture and helps create an environment that respects the dignity of all forms of productive work. VBM recognizes that, regardless of a person’s function or role in the organization, all are workers. This chapter highlights the need for value-based management and its application to organizations in the Indian context.

INTRODUCTION

Recent years have seen a number of new management approaches for improving organizational performance: total quality management, flat organizations, empowerment, continuous improvement, reengineering, kaizen, team building, and so on. Many have succeeded, but quite a few have failed. Often the cause of failure was performance targets that were unclear or not properly aligned with the ultimate goal of creating value. Value-based management (VBM) tackles this problem. It provides a precise and unambiguous metric, value, upon which an entire organization can be built. Current business problems are many: shareholders want bigger returns; the business is losing to competitors on price and on performance; customers expect high quality at low prices; “important” customers do not get enough attention; today’s structure does not support the segmentation strategy; financial information gets attention, but it is not enough; employees understand the need to change, but they do not change; and new business opportunities are not seized quickly. The thinking behind VBM is simple. The value of a company is determined by its discounted future cash flows. Value is created only when companies invest capital at returns that exceed the cost of that capital. VBM extends these concepts by focusing on how companies use them to make both major strategic and everyday operating decisions. It is an approach to management that aligns an organization’s overall aspirations, analytical techniques, and management processes so that decision making focuses on the key drivers of value.
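The two premises just stated, that a company's value is the sum of its discounted future cash flows, and that value is created only when returns on invested capital exceed the cost of that capital, can be made concrete with a small sketch. The function names and all figures below are hypothetical illustrations, not taken from the chapter:

```python
# Hypothetical sketch of the two core VBM formulas: value as discounted
# future cash flows, and value creation as the spread of ROIC over the
# cost of capital. All figures are invented.

def discounted_value(cash_flows, cost_of_capital):
    """Present value of a stream of annual free cash flows."""
    return sum(cf / (1 + cost_of_capital) ** t
               for t, cf in enumerate(cash_flows, start=1))

def economic_profit(invested_capital, roic, cost_of_capital):
    """Value created in one period; positive only when ROIC > cost of capital."""
    return invested_capital * (roic - cost_of_capital)

# Five years of growing cash flows discounted at a 10% cost of capital.
print(round(discounted_value([100, 110, 121, 133, 146], 0.10), 1))

# Investing 1,000 at a 14% ROIC against a 10% cost of capital creates
# 40 of economic profit; the same capital earning only 8% destroys 20.
print(round(economic_profit(1000, 0.14, 0.10), 2))
print(round(economic_profit(1000, 0.08, 0.10), 2))
```

Under these assumptions, a goal such as "grow earnings" only creates value if the implied return stays above the 10% cost of capital used here, which is exactly the chapter's point about choosing the right performance metric.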

VALUE: THE VALUE MINDSET

The first step in VBM is embracing value maximization as the ultimate financial objective for a company. Traditional financial performance measures, such as earnings or earnings growth, are not always good proxies for value creation. To focus more directly on creating value, companies should set goals in terms of discounted cash flow value, the most direct measure of value creation. Such targets also need to be translated into shorter-term, more objective financial performance targets. Companies also need non-financial goals, concerning, for example, customer satisfaction, product innovation, and employee satisfaction, to inspire and guide the entire organization. Such objectives do not contradict value maximization; on the contrary, the most prosperous companies are usually the ones that excel in precisely these areas. Non-financial goals must, however, be carefully considered in light of a company’s financial circumstances. Objectives must also be tailored to the different levels within an organization. For the head of a business unit, the objective may be explicit value creation measured in financial terms. A functional manager’s goals could be expressed in terms of customer service, market share, product quality, or productivity. A manufacturing manager might focus on cost per unit, cycle time, or defect rate. In product development, the issues might be the time it takes to develop a new product, the number of products developed, and their performance compared with the competition. Even within the realm of financial goals, managers are often confronted with many choices: boosting earnings per share, maximizing the price/earnings ratio or the market-to-book ratio, and increasing the return on assets, to name a few. Because the choice of a performance metric heavily influences decision making, value should be regarded as the sole overarching criterion of performance.

Shifting to a value mindset can make an enormous difference; real-life cases show how focusing on value can transform decision making. An important part of VBM is a deep understanding of the performance variables that will actually create the value of the business, the key value drivers. Such an understanding is essential because an organization cannot act directly on value.


It has to act on things it can influence: customer satisfaction, cost, capital expenditures, and so on. Moreover, it is through these drivers of value that senior management learns to understand the rest of the organization and to establish a dialogue about what it expects to be accomplished. A value driver is any variable that affects the value of the company. To be useful, however, value drivers need to be organized so that managers can identify which have the greatest impact on value and assign responsibility for them to individuals who can help the organization meet its targets. Value drivers must be defined at a level of detail consistent with the decision variables that are directly under the control of line management. Generic value drivers, such as sales growth, operating margins, and capital turns, might apply to most business units, but they lack specificity and cannot be used well at the grass-roots level. Value drivers can be useful at three levels: generic, where operating margins and invested capital are combined to compute ROIC; business unit, where variables such as customer mix are particularly relevant; and grass roots, where value drivers are precisely defined and tied to specific decisions that front-line managers have under their control.

Different stakeholders may perceive value differently. Markets and owners expect economic value to be created; customers expect to obtain desired goods and services on time and at competitive prices; employees expect a substantive and meaningful job with commensurate compensation; suppliers expect to be paid on time; and society expects the environment to be improved. Value is critical to organizations because:

1. The creation of value is the primary goal of managers in leading companies.
2. Organizations exist to create value for all constituencies/stakeholders.
3. Stakeholders include customers, owners, managers, employees, suppliers, and society in general.
4. Organizations determine the degree to which they will prioritize the interests of each stakeholder group and balance performance goals accordingly.
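The "generic" driver level described above, where operating margin and capital turnover combine into ROIC, can be sketched numerically. The function and all figures below are invented for illustration:

```python
# Illustrative sketch of the generic value-driver level: return on
# invested capital (ROIC) as operating margin times capital turnover.
# All figures are hypothetical.

def roic(operating_margin, capital_turns):
    """ROIC approximated as operating margin x capital turnover."""
    return operating_margin * capital_turns

# Two hypothetical paths to a higher ROIC: widen the margin, or make
# the same invested capital support more sales.
base = roic(0.08, 2.5)         # 8% margin, 2.5 capital turns
margin_play = roic(0.10, 2.5)  # improve margin by 2 points
turns_play = roic(0.08, 3.0)   # improve turnover by 0.5 turns

print(round(base, 3), round(margin_play, 3), round(turns_play, 3))
```

Comparing the two paths illustrates why drivers must be made specific: a front-line manager controls cycle time or defect rate, not "ROIC", so targets are set on the operational variables that feed these terms.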

VALUE BASED MANAGEMENT (VBM): PRINCIPLES

VBM is a very different style of management system. It is not a staff-driven exercise; it focuses on better decision making at all levels in an organization. It recognizes that top-down command-and-control structures cannot work well, especially in large multi-business corporations. Instead, it calls on managers to use value-based performance metrics to make better decisions. Value based management is an integrated, strategic, and financial approach to the general management of a business (Beck, 1999). When VBM is implemented well, it brings tremendous benefit; it is like restructuring to achieve maximum value on a continuing basis (Fowler, 1999). It has high impact, often realized in improved economic performance, as illustrated in the exhibit below. The objectives of Value Based Management are manifold. They include driving value for stakeholders, facilitating the deployment of strategy and management philosophy, and establishing accountability at all levels in the organization. Several factors drive the need for VBM. Superior executive performance is defined by the delivery of value to investors, customers, employees, and others who have material influence. Growth in market value is key to executive survival and to the winning organization. Executives need new skills, new tools, and a responsive organization to deliver value. Possession of facts is critical when making high-risk decisions, and information portals provide consistent communication to employees and other stakeholders.

Exhibit: Impact of VBM on economic performance

Business                 Change in behaviour                                             Impact
----------------------   -------------------------------------------------------------   ------------------------------------------
Retail household goods   Shifted from a broad national programme to small regional needs  Increase in potential value of 30-40%
Insurance                Repositioned products to create more value                       25% rise in potential value
Oil production           New planning and control process to help in change management    Reduction in non-performing activities
Banking                  Growth against saturation, though equities are similar           Tremendous rise in potential value (124%)
Telecom                  New ideas for value creation                                     Rise in potential value of 240-246%

(Source: The McKinsey Quarterly, 1994)

VALUE BASED MANAGEMENT: DRAWBACKS

Value-based management is not without pitfalls. It can become a staff-captured exercise that has no effect on operating managers at the front line or on the decisions they make. VBM aligns a company’s overall aspirations, analytical techniques, and management processes with the key drivers of value. The focus of VBM should not be on methodology; it should be on why and how to change the corporate culture. A value-based manager is as interested in the subtleties of organizational behavior as in using valuation as a performance metric and decision-making tool. When VBM is working well, an organization’s management processes provide decision makers at all levels with the right information and incentives to make value-creating decisions (Harari, 1992). Take the manager of a business unit. VBM would provide him or her with the information to quantify and compare the value of alternative strategies, and the incentive to choose the value-maximizing strategy. Such an incentive is created by specific financial targets set by senior management, by evaluation and compensation systems that reinforce value creation, and, most importantly, by the strategy review process between manager and superiors. In addition, the manager’s own evaluation would be based on long- and short-term targets that measure progress toward the overall value-creation objective. VBM operates at other levels too. Line managers and supervisors, for instance, can have targets and performance measures that are tailored to their particular circumstances but driven by the overall strategy. A production manager might work to targets for cost per unit, quality, and turnaround time. At the top of the organization, on the other hand, VBM informs the board of directors and the corporate center about the value of their strategies and helps them to evaluate mergers, acquisitions, and divestitures.
Value-based management can best be understood as a marriage between a value-creation mindset and the management processes and systems that are necessary to translate that mindset into action. Taken alone, either element is insufficient; taken together, they can have a huge and sustained impact. A value-creation mindset means that senior managers are fully aware that their ultimate financial objective is maximizing value; that they have clear rules for deciding when other objectives (such as employment or environmental goals) outweigh this imperative; and that they have a solid analytical understanding of which performance variables drive the value of the company. They must know, for instance, whether more value is created by increasing revenue growth or by improving margins, and they must ensure that their strategy focuses resources and attention on the right option. Management processes and systems encourage managers and employees to behave in a way that maximizes the value of the organization. Planning, target setting, performance measurement, and incentive systems are working effectively when the communication that surrounds them is tightly linked to value creation.

VALUE BASED MANAGEMENT: PROCESSES

Adopting a value-based mindset and identifying the value drivers gets a company only halfway home. Managers must also establish processes that bring this mindset to life in the daily activities of the company. Line managers must embrace value-based thinking as an improved way of making decisions, and for VBM to stick, it must eventually involve every decision maker in the company. Four essential management processes collectively govern the adoption of VBM. First, a company or business unit develops a strategy to maximize value. Second, it translates this strategy into short- and long-term performance targets defined in terms of the key value drivers. Third, it develops action plans and budgets to define the steps that will be taken over the next year or so to achieve these targets. Finally, it puts performance measurement and incentive systems in place to monitor performance against targets and to encourage employees to meet their goals. These four processes are linked across the company at the corporate, business-unit, and functional levels. Clearly, strategies and performance targets must be consistent right through the organization if it is to achieve its value-creation goals.

VALUE BASED MANAGEMENT: STRATEGY DEVELOPMENT

Though the strategy development process must always be based on maximizing value, implementation varies by organizational level. At the corporate level, strategy is primarily about deciding which businesses to be in, how to exploit potential synergies across business units, and how to allocate resources across businesses. In a VBM context, senior management devises a corporate strategy that explicitly maximizes the overall value of the company, including buying and selling business units as appropriate. That strategy should be built on a thorough understanding of business-unit strategies. At the business-unit level, strategy development generally entails identifying alternative strategies, valuing them, and choosing the one with the highest value. The chosen strategy should spell out how the business unit will achieve a competitive advantage that will permit it to create value. This explanation should be grounded in a thorough analysis of the market, the competitors, and the unit’s assets and skills. The VBM elements of the strategy include:


1. Assessing the results of the valuation and the key assumptions driving the value of the strategy. These assumptions can then be analyzed and challenged in discussions with senior management.

2. Weighing the value of the alternative strategies that were discarded, along with the reasons for rejecting them.

3. Stating resource requirements. VBM often focuses business-unit managers on the balance sheet for the first time. Human resource requirements should also be specified.

4. Summarizing the strategic-plan projections, focusing on the key value drivers. These should be supplemented by an analysis of the return on invested capital over time and relative to competitors.

5. Analyzing alternative scenarios to assess the effect of competitive threats or opportunities.

Developing business-unit strategy does not have to become a bureaucratic time sink; indeed, the time and costs associated with planning can even be reduced if VBM is introduced simultaneously with a reengineering of the planning process.
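The business-unit step described above, identifying alternative strategies, valuing each, and choosing the one with the highest value, can be sketched as follows. The strategy names, cash flows, and cost of capital are all hypothetical:

```python
# Hypothetical sketch: value each candidate business-unit strategy as
# the present value of its projected cash flows, then pick the highest.
# Strategy names and all figures are invented.

def present_value(cash_flows, wacc):
    """Discount a stream of annual cash flows at the cost of capital."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, 1))

strategies = {
    "expand nationally": [-200, 60, 90, 120, 150],   # heavy upfront spend
    "defend core region": [-50, 40, 45, 50, 55],     # modest investment
    "harvest and exit": [30, 25, 20, 10, 0],         # wind down over time
}

wacc = 0.12
valued = {name: round(present_value(cfs, wacc), 1)
          for name, cfs in strategies.items()}
best = max(valued, key=valued.get)
print(best, valued[best])
```

In a real review the discarded alternatives would be kept on record with the reasons for rejection, as element 2 of the list above requires, so the valuation assumptions can be challenged by senior management.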

VALUE BASED MANAGEMENT: ELEMENTS

1. Creating Value: how to increase or generate maximum future value.
2. Managing for Value: governance, change management, organizational culture, communication, leadership.
3. Measuring Value: the valuation process applied to check the outcome.

Value Based Management is dependent on the corporate purpose and the corporate values. The corporate purpose can be purely economic (shareholder value) or can also aim at other constituents.
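The "Measuring Value" element typically rests on standard metrics such as return on invested capital (ROIC) and Economic Value Added (EVA), both referred to later in this chapter. A minimal sketch of the arithmetic, using purely hypothetical figures:

```python
# Two standard value metrics used under VBM, with hypothetical numbers.
# ROIC = NOPAT / invested capital; EVA = NOPAT - WACC * invested capital.

def roic(nopat: float, invested_capital: float) -> float:
    """Return on invested capital."""
    return nopat / invested_capital

def eva(nopat: float, invested_capital: float, wacc: float) -> float:
    """Economic Value Added: operating profit less a charge for capital."""
    return nopat - wacc * invested_capital

# A hypothetical business unit: NOPAT of 120, invested capital of 800,
# and a 10% weighted average cost of capital.
print(f"ROIC = {roic(120, 800):.1%}")       # 15.0%
print(f"EVA  = {eva(120, 800, 0.10):.0f}")  # 40: value created above the capital charge
```

A positive EVA means the unit earns more than its cost of capital; tracking ROIC against WACC over time and relative to competitors supports the strategic-plan analysis described above.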

EXAMPLE (VALUE BASED MANAGEMENT): FUZZY FINANCE

Financial management systems consist of the various policies, procedures and levers that a leadership team has at its disposal to guide, control and drive its operations and strategies (Stewart, 2007). For this purpose, the following questions about the relevant elements have to be answered:

(Source: www.valuationissues.com)


1. Goals: What kinds of goals are set and how are targets established? What are the tradeoffs and what are the priorities when business aims and financial objectives are in conflict?
2. Communication: How does the management committee discuss financial goals and review progress toward achieving them with employees, the board and investors?
3. Planning: Which procedures are employed to identify the most valuable business strategies? How are planning alternatives prompted and evaluated? How is risk assessed?
4. Capital Budgeting: How are capital spending projects approved? These range from ordinary decisions to replace or maintain equipment, enter a new market, build a plant, step up research, or make an acquisition, to the opposite decisions to divest, downsize, outsource and prune. How are those decisions made, who is responsible for making them, and how are managers held accountable for delivering promised results after the fact?
5. Decisions: How do managers make operating decisions that involve tradeoffs, such as pricing for margins versus market share, aiming for fixed versus variable cost structures, and choosing between integration, outsourcing or partnering alternatives?
6. Measurement: What set of metrics is used to keep track of results and to highlight successes and failures needing attention? What is the hierarchy of financial and operational metrics, and how are they organized into an overall scorecard?
7. Incentives: Last, but certainly not least, how is business performance measured and rewarded, and how are monetary incentives aligned with the long-term interests of shareholders?

KEYS TO SUCCESSFUL IMPLEMENTATION OF VBM

The following guidelines help ensure successful implementation of the value based management concept:

1. Establish explicit, visible top management support.
2. Focus on better decision making among operating (not just financial) personnel.
3. Achieve critical mass by building skills in a wide cross-section of the company.
4. Tightly integrate the VBM approach with all elements of planning.
5. Underemphasize methodological issues and focus on practical applications.
6. Use strategic issue analyses that are tailored to each business unit rather than a generic approach.
7. Ensure the availability of crucial data (e.g. business-unit balance sheets).
8. Provide standardized, easy-to-use valuation templates and report formats to facilitate the submission of management reports.
9. Tie incentives to value creation.
10. Require that capital and human resource requests be value based.


BENEFITS OF VALUE BASED MANAGEMENT

VBM is a very effective tool for improving existing business practices. It can maximize value creation consistently, increase corporate transparency, help organizations deal with globalized and deregulated capital markets, align the interests of (top) managers with the interests of shareholders and stakeholders, facilitate communication with investors, analysts and stakeholders, and improve internal communication about the strategy. In addition, VBM prevents undervaluation of the stock, sets clear management priorities, facilitates better decision making, helps to balance short-term, middle-term and long-term trade-offs, encourages value-creating investments, improves the allocation of resources, and streamlines planning and budgeting. VBM also allows organizations to set effective targets for compensation, facilitates the use of stock for mergers or acquisitions, helps prevent takeovers, and supports better management of increased complexity and greater uncertainty and risk.

LIMITATIONS OF VALUE BASED MANAGEMENT

VBM is an all-embracing, holistic management philosophy, often requiring culture change. Because of this, VBM programs are typically large-scale initiatives. To be successful they take considerable time, resources and patience.

1. Value creation may sound simpler than corporate strategy, but it isn't; it is actually more or less the same.
2. Economic Value Added, performance management and the Balanced Scorecard are very powerful management support tools and processes. However, they have their own costs. Therefore it is generally not advisable to go too deep into detail and use measuring methods that are over-complex. Extreme caution should be taken not to measure the wrong things, as this will almost certainly lead to value destruction.
3. VBM requires strong and explicit CEO and executive board support.
4. Comprehensive training and management consultancy are advisable or even necessary, but can be quite costly.
5. The perfect valuation model has not yet been invented; each model has certain drawbacks.

CONCLUSION

The success of the value based management concept depends on appropriate decentralization of power, responsibility and accountability to avoid the potential abuses which occur when these are centralized or concentrated. Those at higher levels within an organization should avoid making decisions which can be made most efficiently and competently by those at lower levels. Value Based Management is not village democracy, where every decision is voted upon by all members of the company, nor is it "management by committee." Rather, VBM builds checks and balances into the company's governance and accountability system. It protects the property rights of all shareholders, but allows executives the flexibility to make traditional executive decisions.


References

Beck, K. (1999), Extreme Programming Explained: Embrace Change, Addison-Wesley.
Favaro, J. M. (2002), "Managing Requirements for Business Value", IEEE Software, March 2002.
Stewart, G. Bennett (2007), "Focused Finance", Valuation Issues.
Harari, O. (1992), "You are not in a Business to Make a Profit", Management Review, July 1992, pp. 53-55.
McKinsey (1994), "The Business Impact", The McKinsey Quarterly.


59

Impact of Foreign Direct Investments on M.P.'s International Trade

Nitin Tanted, Hitendra Bargal, S. Mahalati

M.P. is located in the heart of incredible India. It is an industrial hub right at the epicenter of India's commercial activities. The key to development in any region is connectivity, and this has played well for the State, which has a near-perfect equidistant link to India's metros and lies in close proximity to most business centres. The Madhya Pradesh Government accords the highest priority to the industrial sector on account of the vital role it plays in balanced and sustainable economic growth. The sector plays a crucial role in the process of economic development through value addition, employment generation, equitable distribution of national income, regional dispersal of industries, mobilization of capital and entrepreneurial skills, and contribution to exports. The researchers have therefore attempted to focus on the impact of FDI on the state's GDP and international trade. Further, the key policies adopted by the state government for attracting FDI are also analyzed in the study.

INTRODUCTION

Madhya Pradesh, the second largest Indian State covering 9.5% of the country's area, is bestowed with rich natural resources, a gifted climate and fertile agro-climatic conditions. With a rich cultural heritage, an excellent quality of life, a flourishing industrial base, a peaceful labour force, and a progressive and investor-friendly environment, Madhya Pradesh is a great place to set up new industries. The new Madhya Pradesh came into existence on November 1, 2000, following its bifurcation to create the new state of Chhattisgarh. With a Net State Domestic Product of US$ 9.8 billion, Madhya Pradesh is the ninth largest state economy in India.

M.P. is located in the heart of incredible India, an industrial hub right at the epicenter of India's commercial activities. The key to development is connectivity, and the State has a near-perfect equidistant link to India's metros and lies in close proximity to most business centres. Many important railway lines and highways pass through Madhya Pradesh. The State has about 70,000 kms of road, over 6,000 kms of railway lines, four airports, and 25 air strips with regular air services, including private airline operations. More than 1,800 companies and 19 industrial growth centers are located in M.P., making good social infrastructure accessible to industrial units. Nevertheless, the rates of prime land in the state are still among the lowest in the country.

Madhya Pradesh is already one of India's most prosperous trade centers, boasting many distinguished industrial groups; the auto and pharmaceutical sectors have a special presence. A proactive and peaceful industrial work atmosphere, availability of basic infrastructure facilities, natural resources and a skilled workforce have made it an attractive investment destination. The Madhya Pradesh Government accords the highest priority to the industrial sector on account of the vital role it plays in balanced and sustainable economic growth. The sector plays a crucial role in the process of economic development through value addition, employment generation, equitable distribution of national income, regional dispersal of industries, mobilization of capital and entrepreneurial skills, and contribution to exports.

Industrial growth requires a drive towards attracting more and more Foreign Direct Investment. The investment climate is central to growth and improves outcomes for society as a whole. It reflects the many location-specific factors that shape the opportunities and incentives for firms to invest productively, create jobs, and expand. This can only be facilitated if the State has an industrial policy statement that sets out clear, credible and specific measures to improve the investment climate through better regulation and the removal of barriers to competition. The state government has accordingly revised its Industrial Policy, and a New Industrial Policy was developed in 2004 to drive industrial growth by clearly spelling out the various components of the incentives on offer. The new policy was also essential to remove existing barriers and to create a congenial and hassle-free investment climate; to boost investor confidence, a series of proactive measures are being proposed.

ANALYSIS OF THE STATE INDUSTRIAL POLICY 2004

Madhya Pradesh has all the potential to become an attractive destination for industrial investment, and the only thing required is to proceed in a planned manner. The state government has therefore adopted the Industrial Promotion Policy 2004. The objectives and thrust behind this policy are:

• Effective implementation of the single window system through establishment of the Madhya Pradesh Trade and Investment Facilitation Corporation.
• Enhancement of infrastructure in the identified industrial clusters.
• Promotion of different industrial clusters in the state, in view of the availability of raw material, skilled labour and markets.
• Setting up of an Industrial Infrastructure Development Fund.
• Revival of sick industrial units by granting special packages.
• Defining incentive schemes through exemption from stamp duty, registration charges, entry tax, etc.

Besides these, the draft of the Industrial Policy 2004 also states general objectives intended to make the state administration industry-friendly by simplifying rules and procedures:


• To accelerate the pace of industrialization and make Madhya Pradesh an industrially leading state.
• To maximize employment prospects.
• To attract NRI and foreign investment by developing world-class infrastructure.
• To create a congenial environment for the development of small, medium and large industries.
• To ensure balanced regional development by generating employment in the non-farm sector.
• To integrate the different employment-oriented schemes in order to provide employment opportunities on a sustainable basis.
• To rationalize commercial tax rates to make the state's industries competitive vis-à-vis industries in other states.
• To provide direction to industrialization, keeping in view the available local resources and the existing industrial base.
• To ensure private sector participation in the state's industrialization process.
• To financially strengthen the undertakings of the Department of Industries, enabling them to play a pivotal role in the promotion of industries.

The basic aim was to simplify the approval process so as to motivate more and more investors. To facilitate the process, the state government has established District Trade and Industry Centres for new industrial investment in the districts. These centres are responsible for co-ordinating and following up with other government agencies in the state for speedy approvals and clearances. The state government also proposes to simplify the approval process for setting up industrial units in the state by empowering committees at the district and state levels. To promote a more conducive policy framework for various emerging sectors, Madhya Pradesh has formulated sector-specific government policies for information technology, tourism, biotechnology, infrastructure, power and Special Economic Zones, and for promoting exports and foreign capital investment. The measures taken by the state government mainly include:

• The establishment of the Madhya Pradesh Trade and Investment Facilitation Corporation, which will assist foreign investors and NRIs in getting necessary clearances quickly. A fast-track clearance process has been adopted for this purpose.
• An action plan has been chalked out for export-oriented industries so that optimum benefit can be derived from the Government of India's export promotion schemes.
• Entrepreneurs of the State will be trained on export potential and export procedure by actual exporters and the concerned agencies dealing with exports, by way of organizing training programs, seminars and conferences on a regular basis. Industries and exporters of the State participating in foreign trade fairs will be encouraged.
• Regular awareness programs are to be launched to establish an effective and meaningful dialogue with the industries and entrepreneurs of the state, by inviting the organizations set up by India and other countries for foreign trade. Financial assistance is provided to government, semi-government and government-sponsored agencies to organize such programs.
• Special facilities are to be given for the import of advanced technology, patent registration and the processing of intellectual property rights (IPR). To promote research and development activities, the complete expenditure incurred in obtaining patent registration, up to a maximum limit of Rs. 2 lacs, is reimbursed.

GROWTH OF FDI IN M.P.

Madhya Pradesh (M.P.) presents a scene of under-development and wide-spread poverty along with tremendous potential for development, highlighting a case of missed opportunities. The state inherited much of its backwardness at the time of its birth on 1 November 1956: its feudal character, its large size, its large population of socially and economically disadvantaged people, and its poor social and physical infrastructure. Despite more than 55 years of planned development, not much progress could be achieved in overcoming this under-development and improving its relative position among the states of the Indian Union. The state continues to be counted among the five major states of India nicknamed the "BIMARU" states, these being Bihar, Madhya Pradesh, Rajasthan, Orissa and Uttar Pradesh.

When compared with other states of India, M.P. does not hold any respectable position: the approved FDI in M.P. till March 2007 was 4.58 percent, which is very low in comparison to the total inflow of FDI into India. Still, M.P. holds the seventh position among Indian states. M.P. is far behind the six states which cumulatively hold 77.73% of the total FDI received till March 2007, while the states behind M.P. are very close and the gap is comparatively narrow. This indicates a threat that M.P. could lose its position. When we bifurcate the period 1991-2007 into 1991-1998 and 1999-2007, the position becomes more severe: FDI approvals decreased from the first period to the second. Table 1 reveals that in the first period the approved FDI was 6.16 percent of the total, which was reduced to 2.45% in the second period. The main reason may be the bifurcation of the state into Madhya Pradesh and Chhattisgarh on 1 November 2000. The re-organized state of M.P. has a population of 6.03 crores as per the 2001 census. The state occupies 7th rank in terms of population and second in terms of area, next to Rajasthan. The growth rate of population has come down from 27.24% in the previous decade to 24.34% during 1991-2001. Thus, considerable effort is needed at the state level if the FDI inflow position is to be improved; the states behind Madhya Pradesh are very close and can supersede the state at any time.

Table 1: Percentage of Share in Total FDI for which Locational Details are Known

State               1991-1998    1999-2007
Maharashtra             17.92        31.28
Delhi                   18.10        13.87
Tamil Nadu              10.98        12.41
Karnataka               10.74        11.72
Gujarat                  8.04         9.60
Andhra Pradesh           6.38         6.32
Madhya Pradesh           6.16         2.45
West Bengal              5.95         2.10
Orissa                   6.26         0.50
Uttar Pradesh            2.45         2.17
Haryana                  1.78         1.80
Rajasthan                1.81         0.85
Punjab                   1.54         0.57
Kerala                   0.48         1.03
Himachal Pradesh         0.28         0.90
Goa                      0.38         0.56
Bihar                    0.18         0.71
Others                   0.56         1.15
Total                  100.00       100.00

Source: Indiastat.com

IMPACT OF FDI ON THE INTERNATIONAL TRADE OF THE STATE

Table 2 shows, for different states, their relative share in FDI inflows during 2001-02 to 2006-07, FDI approvals during the same period, and the exports and imports of companies located in the state. In a number of cases it has been necessary to club some states together because FDI data are not available for those states separately. It is seen from Table 2 that Maharashtra, Karnataka, Gujarat and Delhi (along with adjoining areas) account for a dominant part of FDI inflows. These are also the states that together account for a major part of exports, imports and trade.

THE IMPACT OF FDI ON EXPORTS FROM THE STATES

There is a positive impact of FDI on exports from any state. This has been tested through the following hypothesis:

H0: FDI inflow does not have any impact on exports from the states.
H1: FDI inflow does have an impact on exports from the states.

Table 2: FDI Inflows and Trade (in percent)

S.No.  State   FDI Inflows   Exports   Imports    Trade
1      AP             4.92      4.03      3.12     3.49
2      ASM            0.07      0.36      3.37     2.13
3      BHR            0.00      1.11      2.69     2.04
4      GUJ            5.13     17.11     27.24    23.07
5      KAR           11.14     13.43     11.24    12.14
6      KER            0.48      1.14      3.41     2.47
7      MP             0.30      2.18      1.60     1.84
8      MAH           29.34     20.56     18.68    19.45
9      ORS            0.57      1.84      1.16     1.44
10     RAJ            0.03      1.90      1.03     1.39
11     TN             8.63      5.24      9.24     7.59
12     UP             0.00      3.88      6.51     5.43
13     WB             2.20      7.66      1.57     4.08
14     PNJ            2.37      4.04      4.76     4.46
15     DLH           34.01     14.53      3.35     7.95
16     GOA            0.81      1.00      1.03     1.02

Source: Indiastat.com

The correlation between FDI inflow and exports from the various states is significant, as is clear from Figure 1, where the value of r is 0.787, which is very close to 1. Hence the null hypothesis is rejected. Table 2 likewise shows a positive and strong relationship between the level of exports and the scale of FDI inflow. The correlation coefficient is positive and statistically significant at the one percent level, indicating clearly a positive relationship between exports and FDI inflows. This signifies that FDI enhances the productivity of the state, which leads to economic development. The foreign investor also provides marketing support by marketing the production internationally, which leads to an increase in exports.

Table 3: Correlation Results

                                  FDI Inflow    Export
FDI Inflow   Pearson Correlation       1          .787
             Sig. (2-tailed)           -          .000
             N                        16            16
Export       Pearson Correlation    .787             1
             Sig. (2-tailed)        .000             -
             N                        16            16

Correlation is significant at the 0.01 level (2-tailed).
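The reported coefficient can be reproduced directly from the Table 2 shares; a sketch in Python, with the Pearson formula implemented by hand rather than taken from a statistics package:

```python
import math

# State-wise percentage shares copied from Table 2 (rows 1-16, in order).
fdi     = [4.92, 0.07, 0.00, 5.13, 11.14, 0.48, 0.30, 29.34,
           0.57, 0.03, 8.63, 0.00, 2.20, 2.37, 34.01, 0.81]
exports = [4.03, 0.36, 1.11, 17.11, 13.43, 1.14, 2.18, 20.56,
           1.84, 1.90, 5.24, 3.88, 7.66, 4.04, 14.53, 1.00]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"r = {pearson_r(fdi, exports):.3f}")  # close to the 0.787 reported in Table 3
```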

THE IMPACT OF FDI ON IMPORTS TO THE STATES

There is only a weak positive impact of FDI on imports to any state. This has been tested through the following hypothesis:

H0: FDI inflow does not have any impact on imports to the states.
H1: FDI inflow does have an impact on imports to the states.

Table 4: Correlation Results

                                  FDI Inflow    Import
FDI Inflow   Pearson Correlation       1          .361
             Sig. (2-tailed)           -          .170
             N                        16            16
Import       Pearson Correlation    .361             1
             Sig. (2-tailed)        .170             -
             N                        16            16

The correlation between FDI inflow and imports to the various states, as seen in Figure 2, is r = 0.361, which is positive but far from 1; with a significance value of .170 it is not statistically significant at conventional levels, so the null hypothesis cannot be rejected with confidence. Table 2 likewise shows only a weak positive relationship between the level of imports and the scale of FDI inflow. This suggests that FDI inflow tends to reduce the state's dependence on imports: FDI increases productivity, which helps the state become self-sufficient in the production of goods and services. The import requirement reduces, which restricts valuable foreign currency from moving out of the country. This helps in generating employment, which leads to economic growth and development.
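The significance of the reported r = 0.361 can be cross-checked with the usual t test for a correlation coefficient; a sketch, with the 5% critical value taken from a standard t-table:

```python
import math

# t statistic for a correlation coefficient: t = r*sqrt(n-2)/sqrt(1-r^2),
# with n - 2 degrees of freedom under the null hypothesis of no correlation.
r, n = 0.361, 16
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

t_crit_5pct = 2.145  # two-tailed, df = 14, from a standard t-table
print(f"t = {t:.2f}")    # 1.45
print(t > t_crit_5pct)   # False: consistent with the .170 significance in Table 4
```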

THE IMPACT OF FDI INFLOW ON THE INTERNATIONAL TRADE OF THE STATES

There is a positive impact of FDI inflows on the international trade of any state. This has been tested through the following hypothesis:

H0: FDI inflow does not have any impact on the international trade of the states.
H1: FDI inflow does have an impact on the international trade of the states.

Table 5: Correlation Results

                                  FDI Inflow     Trade
FDI Inflow   Pearson Correlation       1          .553
             Sig. (2-tailed)           -          .026
             N                        16            16
Trade        Pearson Correlation    .553             1
             Sig. (2-tailed)        .026             -
             N                        16            16

Correlation is significant at the 0.05 level (2-tailed).

The correlation between FDI inflow and the international trade of the various states is significant, as is clear from Figure 3, where the value of r is 0.553, a moderate positive value. The null hypothesis is rejected here as well. Table 2 likewise shows a moderate positive relationship between the level of international trade and the scale of FDI inflow. The correlation coefficient is positive and statistically significant at the five percent level, indicating a positive but moderate relationship between international trade and FDI inflows. Since international trade consists of both exports and imports, only a moderate relationship exists.

Foreign direct investment (FDI) and trade have become more closely interconnected in the framework of the efficiency-oriented, integrated international production strategies pursued by transnational corporations (TNCs). FDI and trade undoubtedly support one another in this context. There is also a need for state policy coordination and national trade policies. Various locational factors also affect FDI inflow into a particular state. UNCTAD studies have listed various economic and policy factors that determine FDI inflow and the selection of location. Primary among them are economic growth, market size, profitability, cheap labour and developed infrastructure among the economic factors, and private sector development, macroeconomic reforms and liberalization among the policy factors.

The Trade and Development Report (1997) indicates that trade-, investment- and technology-led globalization has posed a major challenge to employment generation and to income and wealth distribution within countries. This shows the importance of making international trade and investment policies more conducive to employment, income and wealth generation in developing countries, commensurate with their natural, human, entrepreneurial and other resources. There should be both value creation and value realization of the states' resources in the context of international trade and investment flows. Competitiveness and the transfer of technology and managerial skills are important benefits of trade and investment for the development and economic growth of the state. However, much depends on how the investing firm actually transfers relevant, state-of-the-art technology, shares managerial skills, and extends competitive advantages such as global marketing and distribution networks and brand image. It is not sufficient for liberal trade and investment policies to be in place to ensure maximum positive impact on the development and economic growth of a state in all their dimensions. Unless some responsibilities are also taken by investing firms, the positive nexus between trade and investment on the one hand, and development and economic growth on the other, may not materialize.

Table 2 also reveals that a state like Madhya Pradesh presents a different picture in terms of FDI inflow and exports: as compared to an FDI inflow of 0.3%, its exports are 2.18% and imports are 1.6%. This is because of the indirect flow of foreign funds into the state. Many companies have their setups/factories in Madhya Pradesh, but these are branches of companies located in other states, so the FDI inflow is recorded in those states.
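The trade correlation and its significance at the 5% level, reported above, can be reproduced the same way from the Table 2 shares (Pearson formula implemented directly; the critical value is from a standard t-table):

```python
import math

# FDI inflow and total trade shares copied from Table 2 (rows 1-16, in order).
fdi   = [4.92, 0.07, 0.00, 5.13, 11.14, 0.48, 0.30, 29.34,
         0.57, 0.03, 8.63, 0.00, 2.20, 2.37, 34.01, 0.81]
trade = [3.49, 2.13, 2.04, 23.07, 12.14, 2.47, 1.84, 19.45,
         1.44, 1.39, 7.59, 5.43, 4.08, 4.46, 7.95, 1.02]

n = len(fdi)
mx, my = sum(fdi) / n, sum(trade) / n
cov = sum((a - mx) * (b - my) for a, b in zip(fdi, trade))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in fdi)
                    * sum((b - my) ** 2 for b in trade))

# t test with df = 14; the two-tailed 5% critical value is 2.145.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
print(f"r = {r:.3f}")   # close to the 0.553 reported in Table 5
print(t > 2.145)        # True: significant at the 5% level, matching Sig. = .026
```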

CONCLUSION

The economic base of the state economy is poor, though there is potential for improving it. The state government has to mobilize an increasing proportion of incremental incomes in the state through tax and non-tax measures, particularly from those pockets which have the capacity to bear the burden. The policies and strategies of the state have to be designed in such a way that the economy can improve its performance and yield a larger surplus. The major areas in which efforts have to be concentrated are rural development and the development of human resources and physical infrastructure. The state has the potential of becoming the warehousing hub of the country because of its central location.


It has the potential for industrial development because of its resource endowments. It can attract more capital, both domestic and foreign, provided it builds up and strengthens its physical and social infrastructure. But fiscal and financial concessions cannot serve as substitutes for efficient infrastructure. Ultimately, the choice of industrial location will be governed by industrial and social infrastructure and by the speed with which the problems faced by project implementing agencies are sorted out and solved. Further, the will of the policy makers, and the efforts of officers to implement policy, will lead to rapid development of the state; merely making policies will not make the state a developed one.

References

United Nations (1997), Trade and Development Report, United Nations Conference on Trade and Development, Geneva.
Govt. of Madhya Pradesh (1994), Industrial Promotion Policy, http://www.mpsidc.org/
www.destinationmadhyapradesh.com
www.mpgovt.nic.in
http://www.mpindustry.org/mpindustry/policy_newIndPolicy.asp


60

Enterprise Systems in Contemporary Educational Institutes for Administration: An Analysis

Praveen Kumar

In recent years many educational institutes have adopted ERP (enterprise resource planning) systems for administrative functionality and knowledge management. Some institutions have successfully implemented such enterprise systems; others are still searching for success. Analyzing such projects, along with discussions with faculty, user staff and authorities at institutions in various stages of implementation, has made it possible to analyze the success and failure of ERP systems in educational institutions. ERP systems typically include application modules to support common educational business activities such as finance, accounting, human resources, library management and student management. This chapter examines the performance of ERP systems in educational organizations, with their risks and benefits.

INTRODUCTION

An ERP system can be defined as an integrated solution sharing a centralized database, with all users (Human Resources/Payroll/Benefits, E-procurement, Accounting, Budgets, etc.) being served by the same database through one point of entry. Data need only be entered or updated once, reducing errors, time and labor for reports, analysis, planning and program management. Ultimately, time and resources are shifted to innovating, problem solving and direct service to customers rather than inputting, processing, organizing, verifying and related "busy work" that burns through time and money (ERP Newsletter, 2001).

Enterprise resource planning (ERP) systems are powerful software packages that enable organizations to integrate a variety of disparate functions. In particular, ERP systems can provide the foundation for a wide range of web-based processes. This study focuses on the ERP life cycle, from the decision on whether or not to adopt an ERP system to the time when the system goes "live", and on the risks associated with the adoption of ERP systems. The major goal of an ERP system is to lower the costs of database systems, database administration, implementation, training and system maintenance. ERP implementation is primarily a people project; respondents found that 69% of the reasons for failure of ERP implementations were people problems (Deloitte, 1999) (Figure 1).

554

Key Drivers of Organizational Excellence

Figure 1: Reasons for ERP implementation failure: Technology, Process, People

According to a recent survey, administrative/ERP/information systems top the issues list for educational institutions; three issues rank in the top ten for all four areas of strategic importance, future significance, IT leaders' time, and cost, the second and third issues being infrastructure and security respectively (Camp and DeBlois, 2007).

Enterprise applications were meant to be the 1990s panacea for organizations seeking to reduce cost through automated business processes. The logic ran that by implementing packaged enterprise resource planning (ERP) software, built around standardized processes, large organizations would achieve significant cost savings. In reality, much of the recorded success can be attributed to the extensive business process re-engineering that had to accompany these projects rather than to the software itself. Many ERP systems are not packages as such, but a series of tightly integrated business processes that have to be assembled and which are frequently customized to suit local practices or conditions. This has the advantage of allowing organizations choice about how to construct best practices, but the resulting systems can be flawed.

Enterprise Resource Planning (ERP) is more than simply a software package. Implementing ERP involves the entire business and requires changes throughout the institution. Because of the scope, complexity and continuous nature of ERP, the project-based approach to managing the implementation process has resulted in failure rates of between 60% and 80%.

LITERATURE REVIEW

The primary objective of the study was to find out the benefits and risks of ERP and non-ERP approaches to information systems at professional education institutes. Specifically, the research question considered is: what are the actual benefits of implementing ERP for administration in a professional education institute? To study the risks and benefits of ERP implementations, a previous study by ECAR (2002) and the Deloitte study (1999) have been used. Through a web-based survey, ECAR examined the state of recent ERP implementations in higher education, with a focus on implementation experiences involving budget, timeline and customization; the outcomes of those implementations; and future plans. We have considered various ERP and non-ERP implementations for analysis. The factors considered were vendor selection, application modules, reasons for implementation, cost, timeline and degree of customization.

Enterprise Systems in Contemporary Educational Institutes for Administration


ANALYSIS

Generally, the education industry is faced with a choice between two different strategies for the future of its core administrative systems (student, human resources/payroll, and financial): the ERP approach or the legacy approach. The ERP approach means implementing Enterprise Resource Planning (ERP) systems, using vendor-supplied software to provide enterprise-wide systems for student, human resources/payroll, library and finance functions. The legacy approach is a stand-alone approach of renewing and extending the institution's existing applications software. Top-level management is surrounded by questions such as: if the institute abandons the legacy approach in favour of an ERP approach, what are the risks and consequences? If the institute stays with the non-ERP strategy, what are the risks, and what investments and actions are necessary to sustain and extend it? This study attempts to answer these questions by outlining the benefits and risks of the two choices. To define the terms: Enterprise Resource Planning, or ERP, refers to the current generation of vendor-supplied packages that provide human resources/payroll, student, financial, and other information systems. Many institutions are following an ERP approach, which involves procuring, installing, and implementing these packages to replace their older legacy administrative systems. Legacy systems are those human resources/payroll, student, financial, and other information systems that were developed in-house, or that are based on packages purchased years ago and since modified, often extensively, to meet the institution's needs.

THE ERP APPROACH: BENEFITS

The benefits of this approach are to replace aging legacy systems, to upgrade those systems, to improve customer service, and to transform institutional business practices (ECAR, 2002). Table 1 lists the benefits of ERP.

Table 1: Benefits, Deloitte (1999)

- Improved management decision making
- Improved financial management
- Improved customer service and retention
- Ease of expansion/growth and increased flexibility
- Faster, more accurate transactions
- Headcount reduction
- Cycle time reduction
- Improved inventory/asset management
- Fewer physical resources/better logistics
- Increased revenue

Many institutions feel that the ERP system achieved what they had intended, and most feel that the system at least partially met their goals. Another perceived benefit of the ERP approach is that, once installation is complete, the vendor supplies ongoing maintenance


and support for the system (Leon, 2007). However, the institution pays for this service, and it must continue to invest in its own technical staff to integrate the ERP system into the rest of the computing environment and to test and maintain ongoing system modifications and vendor-mandated upgrades to hardware and software. ERP implementations can be successful for some institutions, especially those with less complex academic, administrative, and computing environments (ECAR, 2002). A Computer Sciences Corporation (CSC) study (2001), which surveyed 1009 IS managers from around the world, identified "optimizing enterprise-wide systems" as their main priority. In the landmark Deloitte study (1999), 49% of the sample considered an ERP implementation a continuous process, as they expected to continually find value propositions in their system. Davenport et al. (2002) believe that the potential of ERP systems can be classified into three groups: Integrate, Optimize, and Informate. Integrate is where a company is able to integrate its data and processes internally and externally with customers and suppliers; Optimize covers the standardization of business processes around best practice; and Informate is the ability to provide context-rich information to support effective decision making.

Figure 2: Implementation Phases, Deloitte (1999)

The process of achieving additional benefits from an ERP implementation is referred to as a "second wave" implementation. Deloitte (1999) identified a number of phases that occur post-implementation (Figure 2). In the Stabilize phase, companies familiarize themselves with the implementation and master the changes that have occurred. In the Synthesize phase, companies seek improvements by implementing improved business processes, adding complementary solutions, and motivating people to support the changes. In the final phase, Synergize, process optimization is achieved, resulting in business transformation.

THE ERP APPROACH: RISKS

ERP projects are extremely costly, with initial installations running into lakhs and crores of rupees. ECAR (2002) identified the cost-related findings shown in Table 2.


Table 2: ERP Costs, ECAR Study (2002)

- Costs were greater than institutions originally planned.
- Efficiencies did not translate into cost savings.
- The larger the institution, the less likely it was to finish on time and on budget, regardless of the vendor.
- Workloads in departments and colleges actually increased in the short term for 69 percent of the ECAR study respondents. Those institutions also reported a need for higher levels of technical skills and increased training for staff members.
- Mandatory updates of vendor software were expensive and time consuming.

There are further considerations when weighing the costs of an ERP approach. Even universities implementing ERP systems have found they needed to invest in some of the elements of a non-ERP approach, including data-warehousing techniques, best-of-breed applications, and middleware infrastructure. Many higher education institutions continue to rely on legacy systems: nearly half of the institutions that participated in the ECAR ERP survey (2002) were still using legacy systems implemented before 1995, and two-thirds of the ERP institutions were continuing to use legacy systems for one or two of the three core business areas (student, financial, human resources/payroll). Additional factors identified as contributing to failed implementations include lack of management commitment, failure to include key personnel on the project team, poor lines of communication, poorly written or incomplete needs analysis reports, and issues such as political intrigue, conflict, hidden agendas and people problems (Bingi, Sharma and Godla, 1999; Sumner, 1999). Deloitte (1999) also listed the barriers to ERP implementation shown in Table 3.

Table 3: Barriers, Deloitte (1999)

ERP Barriers                        Focus
Lack of Discipline                  People
Lack of Change Management           People
Inadequate Training                 People
Poor Reporting Procedures           Technical
Inadequate Process Engineering      Process
Misplaced Benefit Ownership         People
Inadequate Internal Staff           People
Poor Prioritisation of Resources    Technical
Poor Software Functionality         Technical
Inadequate Ongoing Support          Technical
Poor Business Performance           Process
Under-performed Project Team        People
Poor Application Management         Technical
Upgrades Performed Poorly           Technical

Most institutions feel that they could not focus on much besides the core systems replacement, especially in the initial phases of an ERP project, and have postponed other potentially


high-payoff information technology projects, sometimes for years (ECAR, 2002). Institutions find that an ERP implementation requires a high level of commitment from senior executives and a substantial investment of time and energy from the institution's central support organizations and from college and department business offices. The ERP approach also requires institutions to adopt business practices dictated by the ERP software. The ECAR study (2002) found a direct correlation between user satisfaction and the number of modifications made to ERP software to meet local business needs; yet the study also clearly found that such modifications increased the cost, time, and risk of successfully implementing ERP systems. In addition, modifications need to be re-implemented and re-tested every time software upgrades are required by the vendor (Leon, 2007). If these vendor upgrades are not implemented, the software will eventually no longer be supported by the vendor, and one of the advantages of ERP implementation will be lost. Once an institution decides to move ahead with an ERP approach for one of its core administrative systems, the entire system must be replaced. The ERP strategy also creates an institutional dependency on the vendor's support; once dependent, the institution has very little leverage against non-competitive price increases.

THE NON-ERP APPROACH OR LEGACY SYSTEMS: BENEFITS

Institutions can address the deficiencies in their administrative applications by improving user interfaces and access to information. They believe that the business practices embodied in their existing applications software have sufficient value that retaining the software is less risky and more economical than replacing it. Choosing a non-ERP approach may allow institutions to leverage their previous technology investments and focus their resources on next-generation technology enhancements while avoiding an upheaval in campus operations. Institutions decide how best to improve the efficiency of their business processes, rather than being required to conform to practices dictated by ERP technology. The non-ERP approach allows institutions to make improvements incrementally, according to their own needs, priorities, and resources, and it involves less dependency on a single-source vendor than the ERP approach. An ECAR research bulletin (2002) identified factors critical to making the non-ERP approach successful: first, providing adequate staffing and staff development; second, maintaining the currency of the technical infrastructure; and third, maintaining the functional currency of the applications.

THE NON-ERP APPROACH OR LEGACY SYSTEMS: RISKS

The biggest risk of the non-ERP approach is failure to make the technology and human resources investments necessary to keep the existing applications software responsive to ever-changing and growing requirements, and to keep the complex supporting computing environment technologically viable. Specifically, the non-ERP approach carries two risks. The first is maintaining technical currency: keeping up with the modifications necessary to ensure that the systems are technologically current, stable, reliable, and secure. This has become more critical as the institution's computing environment grows increasingly complex and interdependent. The second is maintaining functional currency, which means continually integrating new


externally and internally mandated requirements into both systems and operations and making other adjustments necessary to meet evolving requirements.

RECOMMENDATIONS

For implementing ERP, an educational institute should:
1. Develop an appropriate decision-making framework as early as possible; consensus decision making does not work for ERP projects (ECAR, 2002).
2. Recognize leaders early for the selection and implementation processes, and plan the budget, schedule and information technology resources for the implementation. Consultants, auditors and peers also play a vital role in implementation.
3. Be skeptical about vendor promises regarding release dates, features and quality, since ERP implementation takes time (ECAR, 2002).
4. Weigh the benefits and costs of customization of the ERP application, because cost varies accordingly.
5. Provide continuous training and education during the implementation phase, and maintain good communication, especially for describing the complexity of the processes.
6. Convert legacy data early; conversion is a complicated task, and early conversion also helps with testing and demonstration.
7. Treat information security, the most critical issue, as a top priority.
8. Be prepared for the project to run late and over budget (ECAR, 2002), and plan change management in advance, since change creates many problems for the people using ERP. ERP projects are never 100% complete.
Finally, if the non-ERP approach fits the institution's processes, stick to it.

CONCLUSION

Many educational institutions have started implementing Enterprise Resource Planning (ERP) systems to address problems such as poor administrative functionality, the need for integrated data and information, or poor systems. Institutions also recognize that they may eventually need to replace their legacy systems with newer technologies, possibly a future version of an ERP. They can either continue to allocate the incremental resources necessary to support the non-ERP approach or commit to the considerable investments required to implement ERP systems; either choice requires investment. Implementing an ERP system would be an extremely challenging, costly, and disruptive undertaking for any institution, particularly in light of the current expanding professional education environment (Leon, 2007). It is expected that over time the ERP approach will be more economical, involve less risk to the institution, and meet administrative computing needs into the future. Even so, if sufficient money is not available, a non-ERP approach may well offer satisfactory performance instead of an expensive one-size-fits-all ERP approach. This chapter shows that the expected benefits of ERP are often unrealistic, although ERP certainly improves the administration of educational organizations. This research has touched on the issues of people, technology, cost and processes in assessing the risks and benefits of the ERP and non-ERP approaches.

References

Leon, A. (2007), ERP Demystified (Second Edition), Tata McGraw Hill: New Delhi.


Bingi, P., Sharma, M. K. and Godla, J. K. (1999), Critical Issues Affecting an ERP Implementation, Information Systems Management, 16(3), Summer, pp. 7-14.
CSC (2001), Critical Issues of Information Systems Management, www.csc.com/aboutus/uploads/CI_Report.pdf
Davenport, T., Harris, J. and Cantrell, S. (2002), The Return of Enterprise Solutions, Accenture.
Deloitte (1999), ERP's Second Wave, Deloitte Consulting.
ERP Newsletter (2001), State of Iowa (US), 1(2), available at http://www.infoweb.state.ia.us/newsletter/erp/erp_apr.pdf, downloaded February 2008.
ECAR (2002), The Promise and Performance of Enterprise Systems in Higher Education, Issue 22.
Camp, J. S., DeBlois, P. B. and the EDUCAUSE Current Issues Committee (2007), Current Issues Survey Report, downloaded from http://www.evaluationcentre.com/erp_software/strategy/white_papers.go
Sumner, M. (1999), Critical Success Factors in Enterprise Wide Information Management Systems Projects, Americas Conference on Information Systems, Milwaukee, Wisconsin, August 13-15.


61

Reverse Logistics: Trends, Practice and Implications
Salma Ahmed

Logistics is the process of planning, implementing and controlling the efficient, cost-effective flow of raw materials, in-process inventory, finished goods, and related information from the point of origin to the point of consumption in order to conform to customer requirements. Reverse logistics includes all of these activities, but in the reverse direction: it is the process of planning, implementing and controlling the efficient, cost-effective flow of raw materials, in-process inventory, finished goods and related information from the point of consumption back to the point of origin for the purpose of recapturing value or ensuring proper disposal. It covers reusing containers and recycling packaging materials; processing merchandise returned because of damage, seasonal inventory, restocking, salvage, recalls, and excess inventory; and recycling programs, hazardous material programs, obsolete equipment disposition, and asset recovery. Remanufacturing, refurbishing and recycling activities, and returns of goods due to product damage (product recalls), are also included in the definition.

INTRODUCTION

Logistics is the process of planning, implementing and controlling the efficient, cost-effective flow of raw materials, in-process inventory, finished goods, and related information from the point of origin to the point of consumption for the purpose of conforming to customer requirements. Reverse logistics includes all of these activities, but in the reverse direction. It can therefore be defined as the process of planning, implementing, and controlling the efficient, cost-effective flow of raw materials, in-process inventory, finished goods and related information from the point of consumption to the point of origin for the purpose of recapturing value or proper disposal: the process of moving goods from their typical final destination in order to capture value or dispose of them properly (Tibben-Lembke, 1998). Beyond this, remanufacturing and refurbishing activities are also included in the definition. Reverse logistics includes reusing containers and recycling packaging materials, as well as processing merchandise returned because of damage, seasonal inventory, restocking, salvage, recalls, and excess inventory. It also includes recycling programs, hazardous material programs, obsolete equipment disposition, and asset recovery (Cooper, 1994).

IMPORTANCE OF REVERSE LOGISTICS

Reverse logistics is important for:
1. Asset re-utilization.
2. Asset recovery (to capture value which would otherwise be lost).
3. Reducing costs through recycling.
4. Environmental concerns, e.g. waste recycling and hazardous waste management (such as car battery disposal).
5. Customer relationship management, e.g. after-sales service and buy-back guarantees.

ACTIVITIES INVOLVED IN REVERSE LOGISTICS

The main reverse logistics processes are:
- Collection: bringing the products from the customer to a point of recovery (Thomas, 2007).
- Combined inspection/selection/sorting: products are sorted according to the planned recovery option and, within each option, according to their quality state and recovery route.
- Re-processing or direct recovery: direct recovery includes re-use and re-sale. Re-use refers to end-of-use returns, which often contain valuable components that can be reused, while re-sale refers to supply chain returns (products in good condition) that can be sold at a discount or in a secondary market.
- Redistribution: bringing the recovered goods to new users.
- Refurbishing: a large installation, building or other civil object is refurbished, after which it is again in a better state.
- Remanufacturing/retrieval: products are dismantled and their parts are used in the manufacturing of the same products (remanufacturing) or of different products (retrieval).
- Recycling: products are processed to obtain material of the desired quality, after which it is reused, e.g. paper pulp and glass.
- Incineration: products are burned and the released energy is captured.

The reverse logistics activities of a company therefore include the processes used to collect used, damaged, unwanted (stock-balancing returns), or outdated products, as well as packaging and shipping materials, from the end-user or the reseller. The major issues in reverse logistics concern how the firm can effectively and efficiently get products from where they are not wanted to where they can be processed, reused, or salvaged, and how it determines the disposition, i.e. the final destination, of each product inserted into the reverse logistics flow.
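The disposition decision described above can be sketched as a simple routing rule. The sketch below is a hypothetical illustration: the condition codes and the ordering of recovery options (highest-value route first, incineration last) are assumptions made for the example, not rules given in this chapter.

```python
# Hypothetical disposition routing for one returned item.
# Condition codes and rule ordering are illustrative assumptions.

def disposition(condition: str, has_reusable_parts: bool, recyclable: bool) -> str:
    """Pick a recovery route for a returned product, best value first."""
    if condition == "good":           # supply chain return: sell at a discount
        return "re-sale"
    if condition == "repairable":     # restore the product to a better state
        return "refurbish"
    if has_reusable_parts:            # dismantle; parts feed manufacturing
        return "remanufacture"
    if recyclable:                    # reprocess the material (paper pulp, glass)
        return "recycle"
    return "incinerate"               # burn and capture the released energy


print(disposition("good", False, False))   # re-sale
print(disposition("scrap", True, True))    # remanufacture
print(disposition("scrap", False, True))   # recycle
```

The ordering encodes the idea that the firm tries to recapture as much value as possible before resorting to material recovery or energy recovery.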

REASONS FOR REVERSE LOGISTICS

There are various reasons for reverse logistics to emerge as a major business activity. These reasons are:
- Business requirement
- Good corporate citizenship
- Scrap disposal or waste management
- Green concern
- Buy-backs
- Re-calls

A Business Requirement

For some companies, managing reverse logistics is an essential component of the business. For bottling plants, for instance, reverse flow is a business requirement. The complexities involved in managing reverse flow for such an organization are minimal, as the sourcing party and the supplying party remain the same for the forward and the reverse distribution; the network of parties involved and the design of the supply chain remain the same. Further, the chances of wear and tear, damaged products and product recalls are also minimal.

Good Corporate Citizenship

Reverse logistics is an area that a company can use effectively for corporate social responsibility, and some firms use it for philanthropic reasons. In the developed countries, companies collect old clothes from homes to redistribute them to schools and homeless shelters. One shoe manufacturer encourages consumers to return old shoes to its company stores; in return for bringing in an old pair of shoes, the customer receives a 20


percent discount on a new pair of shoes. This program has been very successful in providing shoes to those in need, and such activities also enhance the value of the brand and act as a marketing incentive to purchase the company's products. Nike, too, encourages consumers to bring their used shoes back to the store where they were purchased; these shoes are then sent to the company, where they are shredded and made into basketball courts and running tracks. Managing these additional reverse flows is costly, but it is an effective part of CSR activity.

Scrap Disposal or Waste Management

This is the largest sector in terms of both value and volume, but it is also the most unorganized. It includes the collection of newspapers, plastics, batteries, scrap iron waste, and so on, and is also prominent in the automobile sector, where scrap aluminium is disposed of in a similar manner. In such cases the collection is made by the unorganized sector: newspapers, for example, are collected from households at their doorstep and then sent to an aggregator, who supplies the manufacturer or recycler of the material. This is the sector with the greatest potential to be tapped profitably.

Green Concern

When products reach the end of their life, they are disposed of by consumers and add to landfill. To manage this, standardized take-back laws are in development across the European Union and other developed countries. Although these policies are not yet implemented, work is under way to draft common policies for all member countries. The policies differ from country to country, but the major areas of concern are the following products:
- White goods: refrigerators, freezers, heating equipment, water boilers, washing machines, dishwashers, and kitchen equipment
- Brown goods: sound equipment, televisions, photocopiers, and fax machines
- Computers
- Automobiles
- Batteries
Such companies are also encouraging buy-back policies.

Buy-Backs: Reverse logistics also plays a role when products are sold back to the manufacturing concern. The True Value shops set up by Maruti Udyog Limited act as a channel intermediary for the reverse flow of products: old products are valued and sold back. These are instances of buy-back arrangements created by manufacturers, and such arrangements have also become very popular with FMCG manufacturers.
Re-Calls: A product recall refers to a situation in which a product is found to be defective on a large scale and is called back from the market (Ethiraj, 2007). A frequently quoted instance is the recent recall of Nokia batteries.

DIFFERENT AREAS OF APPLICATION

The magnitude and impact of reverse logistics vary from industry to industry and with the firm's channel choice. The importance of reverse logistics activities is


highlighted in the developed countries, but its significance has yet to take a place of prominence in the Indian set-up. Within specific industries, reverse logistics activities can be critical for the firm: where the value of the product is largest, or where the return rate is greatest, much more effort has been spent on improving return processes. One prominent example is the auto parts industry. According to some estimates, there are currently 12,000 automobile dismantlers and remanufacturers operating in the United States; in India these activities have yet to pick up (Chopra and Meindl, 2003).

Application Areas

Industries where reverse logistics plays an important role include:
- Publication houses (40-50% by volume): to take back unsold volumes for reuse.
- Beverage industries: to collect and reuse empty bottles, e.g. Coca-Cola and Pepsi.
- Heavy industries: to collect and reuse waste.
- Consumer goods industry: to fulfil the commitments of after-sales service and buy-back guarantees.
- Pharmaceutical industries: to collect expired formulations and drugs for environment-friendly disposal.
- Automobile industries: to fulfil the commitments of after-sales service and buy-back guarantees.

COMPLEXITIES IN CHANNEL MANAGEMENT

Managing a channel in reverse logistics is very different from managing a channel in the forward flow of goods (refer Table 3). There are immense complexities in managing the reverse channel simply because reverse flow does not take place all the time, there are no well-defined channels through which products must pass, there are no time limits associated with reverse flow, and maintenance of, or adherence to, quality is not an issue. According to Dale Rogers, Professor of Supply Chain, University of Nevada, moving backward through the supply chain is more difficult and complex because there is no priority and products are moving against the normal flow (Tompkins and Dale, 1994). Reverse logistics is also seen as a cost, and is therefore given little importance, while the forward flow is seen as a profit-generating activity. An effective reverse logistics system must establish convenient collection points to receive used and defective goods so that material can be used more efficiently; it requires special packaging and storage systems so that the value of goods is maintained and not lost through careless handling; and it involves developing transportation modes that are compatible with the existing forward logistics system.

REVERSE LOGISTICS INFORMATION SYSTEMS

Managing reverse logistics has its related complexities because information systems for tracking and managing the reverse flow of products are often lacking or non-existent. A successful reverse logistics system requires a reverse logistics information system. Reverse logistics has added complexity because it involves moving beyond the boundary of the


organization: a system has to be developed that will track product flow across customer, retailer, wholesaler, manufacturer and so on. Developing systems that must work across organizational boundaries adds complexity to the problem. For example, a retailer may have a system that tracks returns at store level; this system should create a database at store level so that the retailer can begin tracking returned product and follow it all the way back through the supply chain. The information system should also provide detailed reverse logistics measurements, such as return rates, recovery rates, and returns inventory turnover. This is very valuable information for important decisions, such as investigating the reasons for high defect levels or high return rates from dissatisfied customers.
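As an illustration of the measurements mentioned above, the sketch below computes a return rate, recovery rate and returns inventory turnover from a toy returns log. The record layout, the sample figures and the exact formulas are illustrative assumptions; real systems would define these metrics to suit their own processes.

```python
# Illustrative reverse logistics metrics from a toy returns log.
# The data and formula definitions are assumptions for this sketch.

returns_log = [
    # (units_returned, units_recovered_for_resale_or_reuse)
    (120, 90),
    (80, 50),
    (40, 10),
]
units_sold = 4000
avg_returns_inventory = 60   # average units sitting in the returns pipeline

units_returned = sum(r for r, _ in returns_log)
units_recovered = sum(v for _, v in returns_log)

return_rate = units_returned / units_sold                  # share of sales returned
recovery_rate = units_recovered / units_returned           # share of returns recovered
returns_turnover = units_returned / avg_returns_inventory  # how fast returns clear

print(f"return rate:      {return_rate:.1%}")       # 6.0%
print(f"recovery rate:    {recovery_rate:.1%}")     # 62.5%
print(f"returns turnover: {returns_turnover:.1f}x") # 4.0x
```

A store-level system that logs each return in this shape could roll the same totals up per store, per product line, or per supplier, which is exactly the kind of breakdown needed to investigate high defect levels or dissatisfied customers.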

BARRIERS TO AN EFFECTIVE REVERSE LOGISTICS SYSTEM

Managing reverse logistics has been found to be a difficult proposition in organizations because various barriers act as impediments to its successful implementation (Ravi and Shankar, 2004) (refer Table 2). These were found to be:
- Importance of reverse logistics relative to other issues: executives did not consider reverse logistics to be of much importance; it was often pushed to a position of secondary importance.
- Company policies: it did not figure in company policy decisions.
- Lack of systems: a reverse logistics system did not exist.
- Management inattention: management in most organizations still does not give reverse logistics a place of importance.
- Financial resources: because financial resources are scarce, they are diverted to more pressing concerns.
- Personnel resources: because human resources are scarce and involve a cost, they are deployed to priority areas.
- Legal issues: the government has not laid down very strict laws demanding the management of wastes.

CASE STUDY OF AUTOMOBILE SECTOR

The Indian economy is on a growth trajectory, and the automobile sector is one of its core sectors, with forward as well as backward linkages to several key segments. As such, it is capable of driving economic growth. It contributed 5% to the GDP (in 2005-06) and 5% to total industrial output. Its turnover is in the range of 10 billion US dollars, with the auto-component segment contributing 2.7 billion US dollars. The Indian automobile sector has the distinction of being the world's second largest two-wheeler manufacturer, the world's largest motorcycle manufacturer, the fifth largest commercial vehicle manufacturer, the second largest tractor maker and the fourth largest car market in Asia (Agarwal, 2007). Logistics costs in the automotive industry account for 2-3% of sales, whereas in the auto-components industry they are around 3-4%. These figures refer to costs in the forward flow.

Reverse Logistics: Trends, Practice and Implications


In the automobile sector, when a vehicle reaches the end of its life, it often ends up being re-sold as a second-hand vehicle. A better arrangement could be setting up a second-hand dealer network to manage these vehicles efficiently, allowing better valuation of products and sale after refurbishment. Dealers play a critical role in helping used products flow back for refurbishment and re-sale. Yet after this stage, the vehicles add to landfill. In developed countries, initiatives have been taken to control landfill: these products land up at an auto salvage yard or auto dismantler, and some auto-dismantlers have been set up by the automobile manufacturing companies themselves. There, an assessment is made of the components of the vehicle. Any parts or components that are in working order and can be sold in their current condition are removed and sold. Many other components, like engines, alternators, starters and transmissions, which are found to be in fairly good condition, are sold to customers after some refurbishing or remanufacturing. Once all reusable parts have been removed from the vehicle, its materials are reclaimed through crushing or shredding. Shredded metals are generally reclaimed, while the remaining material, known as fluff, is not (and cannot be) recycled. It is estimated that every year, automotive recyclers handle more than 10 million vehicles; these steps also help supply more than 37 percent of the nation's ferrous scrap to the scrap-processing industry. Further, it is estimated that approximately 25 percent (by weight) of the material in a car is not recycled in the United States, and approximately 35 percent of the non-metal material left after shredding a car is plastic. This adds to landfill, and to reduce the amount of land-filled plastic, firms are trying a number of alternatives. One solution is to reduce the number of types of plastics used and to label the parts for easier separation after disassembly.
Ford, for example, reduced the number of grades of plastic that it specifies from 150 to just 20. To increase the recyclability of cars, the big three automakers in the U.S. collaborated to form the Vehicle Recycling Development Center (VRDC). At the VRDC, efforts are made to build vehicles that can be disassembled more easily, and one of the newest trends in engineering, Design for Disassembly (DFD), is being explored. Disassembly of a product is made easier by reducing the number of parts, rationalizing the materials, and introducing snap-fitting of components instead of chemical bonds or screws. These objectives fit in well with other current manufacturing strategies: global sourcing, design for manufacture, concurrent engineering and total quality. These initiatives have yet to take place in India. The reverse flow, if managed efficiently, effectively and intelligently, could reveal opportunities to be tapped to generate a stream of revenue (Chopra, Kalra and Meindl, 2007).

REVERSE LOGISTICS AS A STRATEGIC WEAPON

Organizations must view reverse logistics as a strategic weapon and an opportunity to be tapped. Retailers who face high levels of returns need to analyze the reasons for these returns. Product categories with high return rates include catalog sales, toys and electronics. Many retailers and manufacturers have liberalized their return policies over the last few years due to competitive pressures. Even today, a satisfied customer is what all businesses look forward to, and part of satisfying customers involves taking back their unwanted products, or products that customers believe do not meet their needs, and managing the reverse flow smoothly (Mitra and Pande, 2007).


Retailers should look for ways to analyze the returns process and tap it efficiently. A system should be arranged through which customers return products to retailers, retailers to wholesalers, and wholesalers to manufacturers, where the product is disassembled into its component parts (Saxena and Guha, 2007).

CONCLUSION

Reverse logistics practices differ from industry to industry. Industries where returns form a larger portion of value and volume should develop efficient and effective reverse logistics systems. For instance, the technology industry is hit by obsolescence and the retail industry by seasonality; for such sectors, there exist immense opportunities to apply reverse logistics systems better and create value. However, the systems and processes built are usually there to support the forward flow of products, and managers are so consumed with managing the forward flow that they do not perceive the synergies of managing returns within the context of the entire supply chain. The forward flow is seen as contributing to profitability, while the reverse flow is seen as contributing to costs. Yet if the reverse flow is used effectively, it could go a long way in generating revenue streams. Reverse logistics could be used to create a viable re-sale channel, with information technology acting as an efficient enabler: it could serve as a platform to communicate product information, publish return policies and manage the reverse flow efficiently. This would help grow customer relations, contain and control costs, enhance profitability, and also increase corporate citizenship and responsibility. Investment recovery is an important area that could be explored: re-manufacturing and refurbishing old products and putting them into the secondary market, leasing used parts and equipment, auctions and liquidation can all create revenue streams. Further, cross-industry consortia could also be very useful: firms sharing recycling requirements (but without a competitive clash of interest) can pool their resources to tackle their electronic and electrical wastes cost-effectively. For reverse logistics to be successful, collaboration between supply chain partners is also very critical. The future direction of supply chain management could therefore lie in mastering reverse logistics. As someone has rightly said, where there is pain, there are also opportunities for gain.

References

Economic Times Intelligence Group (2002), The Economic Times Knowledge Series: Supply Chain & Logistics, Economic Times.
Chopra, Sunil and Meindl, Peter (2003), Supply Chain Management, 2nd edition, Pearson Education: New Delhi.
Chopra, S., Kalra, D.V. and Meindl, P. (2007), Supply Chain Management, Pearson: New Delhi.
Tompkins, James A. and Harmelink, Dale A. (1994), The Distribution Management Handbook, McGraw-Hill Inc.: New York.
Cooper, James (1994), Logistics and Distribution Planning: Strategies for Management, 2nd edition, Kogan Page: London.
Tibben-Lembke, R.S. (1998), Going Backwards: Reverse Logistics Trends and Practices, retrieved on September 13, 2007 from the Reverse Logistics Executive Council website: http://www.rlec.org.
Bloomberg, David J., Hanna, Joe B. and LeMay, Stephen (2002), Logistics, Prentice Hall of India: New Delhi.
Agarwal, Vineet (2007), "Logistics: A Gear for Auto Industry to Cut Costs", Business Line, May 28.
Ravi, V. and Shankar, Ravi (2004), "Analysis of Interactions among the Barriers of Reverse Logistics", Technological Forecasting and Social Change, 72(8), 1011-1029.


Philip, Thomas (2007), "Panic After Nokia Battery Recall", Economic Times, August 17, p. 1.
Mitra, Moinak and Pande, Bhanu (2007), "Recall: Admission isn't Brand Submission", Economic Times, August 17, p. 4.
Ethiraj, Govindraj (2007), "Lessons from China's Recall Episode", Business Standard, September 4, p. 11.
"Nokia Distances Itself from Faulty Batteries", Business Standard, September 4, 2007, p. 5.
Saxena, Ruchita and Guha, Sumana (2007), "Metal Toy Recall May Improve Quality in India", Business Standard, September 7, p. 5.


Annexure

Table 1: Common Reverse Logistics Activities

Products:	Return to Supplier, Resell, Sell via Outlet, Salvage, Recondition, Refurbish, Remanufacture, Reclaim Materials, Recycle, Landfill

Packaging:	Reuse, Refurbish, Reclaim Materials, Recycle, Salvage

Table 2: Barriers to Reverse Logistics

•	Importance relative to other issues
•	Company policies
•	Lack of systems
•	Management inattention
•	Financial resources
•	Personnel resources
•	Legal issues

Table 3: Difference between Forward and Reverse Logistics

Forward Logistics                               Reverse Logistics
Forecasting relatively straightforward          Forecasting more difficult
One-to-many distribution points                 Many-to-one distribution points
Product quality uniform                         Product quality not uniform
Product packaging uniform                       Product packaging not uniform
Destination/routing clear                       Destination/routing not clear
Pricing relatively uniform                      Pricing dependent upon many factors
Importance of speed recognized                  Speed often not considered a priority
Forward distribution costs easily visible       Reverse costs less directly visible
Inventory management consistent                 Inventory management not consistent
Negotiation between parties straightforward     Negotiations complicated by various factors
Marketing methods well known                    Marketing complicated by various factors
Visibility of process more transparent          Visibility of process less transparent

62. Nemmadi: A Peace of Mind Application for Rural People of Karnataka

Sameer K. Rohadia

Improving governance is a part of the development process. It is argued that corruption can be curbed by systematic changes in governance that introduce participation, transparency, accountability and probity in administration. The right to good governance is also considered an essential part of the rights that a citizen can expect from the government. The growth of Internet technology is changing the way people live, communicate and work. Besides fast delivery of services, Internet technology helps bring more transparency to governance and many benefits to the e-governance community.

INTRODUCTION

The growth of Internet technology is changing the way people live, communicate and work. The Internet has brought a fundamental change to our personal and professional lives, and it is radically changing the way governments operate across the globe. Besides fast delivery of services, Internet technology helps bring more transparency to governance and many benefits to the e-governance community. Achieving the widespread use of IT for the benefit of people is the major challenge faced by policymakers. As a step towards meeting this challenge, progress can be seen in every State in the efforts made by governments, commercial entities, non-profit and grant-making bodies to expand Internet connectivity. E-governance is composed of information technology, people and governments. It is the application of electronic means to improve the interaction between government and citizens, and to increase administrative effectiveness and efficiency in internal government operations. Further, it is the application of IT to government processes to bring about Simple, Moral, Accountable, Responsive and Transparent (SMART) governance. The Government of Karnataka (GoK) has been a pioneer in leveraging information technology to ease the lives of both urban and rural citizens. One of the most path-breaking of these e-governance applications is BHOOMI, which enables 'over the counter' delivery of computerized land records to farmers from the 203 taluka (tehsil) offices of the State. While the Bhoomi programme tremendously benefited the farmers, there was a demand for establishing delivery centres for land records at the village level itself. The need for decentralization of Bhoomi catalyzed the development of the 'Nemmadi programme' of the Government of Karnataka.

To collect her monthly pension of Rs. 125, Jayamma (65) travels 13 km from her village Kadadakatte to the taluka headquarters Bhadravathi, spending Rs. 30 and using up the whole day. Now her hamlet will have a kiosk, which will allow her to withdraw the amount in under 10 minutes. Thanks to Nemmadi, this would certainly give peace of mind (nemmadi) to Jayamma and her fellow villagers.

BACKGROUND

The State government understood that it could not establish and operate computer centres in every village, and hence decided to establish these centres under a Public Private Partnership (PPP) model. It was also apparent that the village telecentres would not be viable if only land records were delivered from them; for viability, other e-governance services also needed to be delivered through these centres. The initiative started in the Mandya taluka office in May 2004, initially with only five services under the 'Rural Digital Service' model; it was later expanded to 37 services. In the period from 4th May to 6th September, the Nemmadi model was piloted in 13 talukas of four districts of Karnataka, and services were delivered to citizens through 70 village tele-centres. Vipin Singh, Director, Electronic Delivery of Citizen Services (EDCS), e-Governance Secretariat, says, "The Government decided to go in for a phased rollout and initially establish 800 of these at the village hobli level and later expand the number of tele-centres to 5,000." In September 2006, the project was awarded to a private partner through a transparent tendering process, and by April 2007 about 750 tele-centres were functioning across Karnataka.

NEMMADI PROJECT

The Government of Karnataka has set up, through a PPP model, a network of 800 telecentres at the village hobli level. It has also set up 177 back offices at the taluka level. The telecentres deliver a range of G2C and B2C services at the citizen's doorstep. The Government of Karnataka's vision for the Nemmadi project is that IT-enabled government services should be accessible to the common man in his village through efficient, transparent, reliable and affordable means. The mission of the Nemmadi project is to deliver efficient government services at the citizen's doorstep. Elaborating on the objectives of Nemmadi, Mr. Singh adds, "We hope to create efficient virtual offices in all villages to enable Government departments and agencies to focus on their core functions and responsibilities by freeing them from routine operations like issuing of certificates and land records." Muthu and Ravi, residents of Ramanagara town, say, "Earlier we had to approach the tahsildars, shirastedars, village accountants or the revenue inspectors to obtain certificates. Since the establishment of Nemmadi, the centre forwards the application and asks us to collect the certificates within a maximum period of ten days. It has made our task hassle-free."


A Nemmadi Telecenter delivering RTCs

The initial investment of Rs 30 crore has come from the state government and a consortium comprising Comat Technologies, N-Logue and 3i Infotech, with 3i bearing almost 90 percent of the cost. “Our investment is to the tune of at least Rs 25 crore,” says Anirudh Prabhakaran, 3i Infotech’s COO for South Asia. “That covers costs of all equipment, connectivity, VSAT operations, real estate leasing, power, UPS and peripherals. Further, the cost of running telecenters is also our responsibility. As far as operations are concerned, 3i Infotech has been the interface between the government and the project management team,” he explains.

Objectives of the Nemmadi Project

•	To create efficient and smart virtual offices of the state government in all the villages.

•	Initially, to provide copies of land records and 38 other citizen services of the revenue department in a convenient and efficient manner through 800 village tele-centres across rural Karnataka.

•	To scale up the operations to cover all other G2C services of all the departments.

•	To enhance the accountability, transparency and responsiveness of the government to citizens' needs.

•	To provide government departments and agencies efficient and cost-effective methods of service delivery to citizens.

•	To manage the delivery of services through a Public Private Partnership (PPP) model.

•	To enable government departments and agencies to focus on their core functions and responsibilities by freeing them from routine operations like issuing certificates and land records and collecting citizens' utility bills, thereby enhancing the overall productivity of the administrative machinery.


Salient Features of the Nemmadi Project

•	Single window system for all government services at the village level.

•	No written applications need to be submitted for any service.

•	The manual system is to be stopped altogether.

•	Kiosk operators get a service charge of Rs. 15 for every service.

•	Kiosk operators collect and remit government fees at taluka offices.

•	All the talukas and hoblis in the state are to be covered.

•	No provisioning of services at taluka offices: true decentralization to the village level.

•	Kiosk operators are expected to provide services on a turnkey basis.

•	Citizen database creation and updating is part of the overall work process.

•	Signature-less documents are to be issued after being digitally signed by the appropriate authority.

•	Bar-coded certificates are issued for verification.

•	A backend database is not required to start the project, much unlike the Bhoomi project, which needed a database to start with.

•	The project represents a hybrid model, which integrates computer and manual processes in appropriate situations.

•	It is a socially responsible project, which meets its development objectives and impacts the rural citizens' present way of living.

•	It aims at fulfilling the five dimensions prescribed by the European Commission in 1997 ("availability, continuity, affordability, accessibility and awareness"), which are considered pointers for building a utility-oriented, sustainable and acceptable bridge between rural and urban areas.

•	By following the Public Private Partnership (PPP) model, the project proves that government and industry have a shared responsibility in building the Global Information Infrastructure (GII) and in ensuring as wide an access as possible to its services.

•	It is a project that generates revenue for the implementers while at the same time meeting its social mission of bridging the digital divide by taking the benefits of technology to the doorstep of rural citizens.

•	The project combines the customer-friendly attitude of the private sector with the reliability of the government sector.

•	As an incidental benefit, the project is contributing to a reduction in unemployment by providing jobs to locally available educated unemployed youth.

•	Besides government services, the tele-centres also facilitate delivery of a number of private services, like payment of electricity and telephone bills, sale of mobile services, sale of insurance, applications for loans, digital photography, telemedicine, computer education, access to exam results, etc.

INFRASTRUCTURE REQUIREMENTS

The various components of the service infrastructure are:

Village Telecentre

The village telecentre is the primary channel for accepting citizen requests and delivering certificates to citizens. As described below, if the citizen requests a certificate that has been issued previously and is still valid, the request can be fulfilled over the counter. The RDS application at the village telecentre is a thick-client application which normally uses online connectivity to write directly to the central database, but it can also work in offline mode when connectivity is not available. In offline mode, it writes data into a local queue and, on restoration of connectivity, syncs with the central database at the State Data Centre.
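The offline-queue behaviour described above can be illustrated with a minimal store-and-forward sketch. The real RDS client's storage format and sync protocol are not public, so this is an assumed design using a local SQLite outbox and an in-memory list standing in for the central database:

```python
import json
import sqlite3

class TelecentreClient:
    """Store-and-forward sketch of a thick client: write through when online,
    queue locally when offline, flush the queue when connectivity returns.
    (Illustrative only; not the actual RDS application's design.)"""

    def __init__(self):
        self.central_db = []                      # stand-in for the SDC database
        self.local = sqlite3.connect(":memory:")  # local outbox at the telecentre
        self.local.execute(
            "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT)")

    def submit(self, request: dict, online: bool) -> None:
        if online:
            self.central_db.append(request)       # direct write to central database
        else:
            self.local.execute("INSERT INTO outbox (payload) VALUES (?)",
                               (json.dumps(request),))
            self.local.commit()

    def sync(self) -> int:
        """Flush queued requests in submission order once connectivity is back."""
        rows = self.local.execute(
            "SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            self.central_db.append(json.loads(payload))
            self.local.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        self.local.commit()
        return len(rows)

client = TelecentreClient()
client.submit({"service": "RTC", "citizen": "C-101"}, online=False)
client.submit({"service": "caste-cert", "citizen": "C-102"}, online=False)
print(client.sync())  # 2 queued requests flushed to the central database
```

Keeping the outbox ordered by insertion id preserves the submission order of requests, which matters when a later request depends on an earlier one.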

State Data Centre

The RDS application uses the SDC for routing requests from the village telecentre to the taluka back office, and from the taluka back office back to the village telecentre. The RDS database at the SDC is a consolidated database of all the taluka databases. It is a single database, unlike Bhoomi, where each taluka database is maintained individually.


Taluka Back Office

The requests for RDS services by citizens can be grouped into three categories:

(a)	Certificates that have already been issued and are within their validity period.

(b)	Fresh requests.

(c)	Requests for which an entry exists in a database.

Requests of types (b) and (c) are processed at the RDS back office. After the electronic request is received from the telecentre through the SDC, it is processed by the appropriate authority for verification and validation. On receiving the comments of that authority, the final certificate is generated and digitally signed by the competent signatory. This digitally signed certificate can then be downloaded at the village telecentre itself and issued to the applicant.
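The categorization above implies a simple routing rule: only an already-issued, still-valid certificate can be served over the counter; everything else goes to the back office. A sketch of that rule (the type and function names are my own, not from the RDS software):

```python
from enum import Enum, auto

class Route(Enum):
    OVER_THE_COUNTER = auto()  # category (a): reprint an already-issued, valid certificate
    BACK_OFFICE = auto()       # categories (b) and (c): need verification and signature

def route_request(already_issued: bool, still_valid: bool) -> Route:
    """Routing rule implied by the three request categories (illustrative)."""
    if already_issued and still_valid:
        return Route.OVER_THE_COUNTER
    return Route.BACK_OFFICE

print(route_request(already_issued=True, still_valid=True).name)    # OVER_THE_COUNTER
print(route_request(already_issued=True, still_valid=False).name)   # BACK_OFFICE
print(route_request(already_issued=False, still_valid=False).name)  # BACK_OFFICE
```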

Taluka Server

RDS uses the same server that Bhoomi uses. The server contains the database of RDS transactions at the taluka. The taluka database is replicated to the SDC on a real-time basis.

Wide Area Network

For the current delivery of Nemmadi and Bhoomi services, the State government has set up a network of VSATs linking each of the taluka servers to the State Data Centre. Later, once the Karnataka State Wide Area Network (KSWAN) is set up, these taluka servers will connect to the SDC through KSWAN.

Nemmadi Revenue Model

The revenue model of Nemmadi assumes a complex form. The PPP partner (the telecentre agency) earns its revenue from:

•	A share of the user charges for e-governance services. The user charges differ for different categories of services, depending on the area, season, nature of the service, and the amount of database work that goes into it. Each print costs between Rs. 7 and Rs. 15, which is split between the government and the private partner.

•	A transaction charge for bill collection for utility companies, panchayat property taxes, etc.

•	Fees from the Government of Karnataka for providing services under various government programs like Sarva Shiksha Abhiyan.

•	Data entry charges for operations like crop updation or for digitization of data of various government offices.

•	Hiring out of handhelds to departments of the Government of Karnataka.

•	B2C services, for which the telecentre agency would need to work with various content and service providers.

•	Other IT and Internet-based services like computer education, DTP work, etc.

The Karnataka government has also set up the Nemmadi Monitoring Cell (NMC), which receives a 3 percent share of all B2C and G2C service charges.
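The split of a single RTC charge can be worked through numerically. The text states a Rs. 15 fee, with Rs. 11 going to the government and Rs. 4 to the private partner; how the NMC's 3 percent share interacts with that split is not specified, so the sketch below simply computes it on the gross charge (an assumption):

```python
def split_rtc_fee(fee: float = 15.0, govt_share: float = 11.0,
                  partner_share: float = 4.0, nmc_rate: float = 0.03) -> dict:
    """Split of the Rs. 15 RTC user charge: Rs. 11 to the government and
    Rs. 4 to the private partner (figures stated in the text). The 3% NMC
    share is shown on the gross charge, which is an assumption."""
    assert abs(govt_share + partner_share - fee) < 1e-9  # the two shares exhaust the fee
    return {
        "government": govt_share,
        "partner": partner_share,
        "nmc_levy_on_gross": round(fee * nmc_rate, 2),
    }

print(split_rtc_fee())
# {'government': 11.0, 'partner': 4.0, 'nmc_levy_on_gross': 0.45}
```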

How the System Works

All the village telecentres run on the PPP model with minimal investment from the government, which nevertheless retains overall functional control to regulate the quality of services and the authentication of information. A common private partner is selected through an open and transparent competitive bidding process, to combine the customer-friendly attitude of the private sector with the reliability of the government sector, both of which need to be integrated to deliver services to the satisfaction of citizens. As part of the implementation of the project, virtual offices are created at the village level for provisioning various government services. With the aim of bringing comfort and satisfaction to citizens in the delivery of government services, a pilot program was launched in one taluka, rolled out to 13 talukas in the second phase, and scaled up to all 177 talukas in the third phase. There are now 800 telecentres/village kiosks spread over 750 hoblis in Karnataka. One hobli consists of 4-5 Gram Panchayats, and each Gram Panchayat has four to five villages under its jurisdiction. The telecentres will be extended to cover all the Gram Panchayats of the State, numbering around 5,000, so that there is one for every 10,000 to 20,000 people. There is one front office for each hobli and one back office for every taluka office, which covers four hoblis. Village telecentres are equipped with computers, printers, scanners and a broadband Internet connection, to deliver G2C services such as land records, issue of caste and income certificates, birth and death certificates, submission of applications for social security schemes like old age pensions, and many other similar services.

Certificates are issued across the counter using the citizen database, after obtaining the digital signature of the Tahsildar. In Nemmadi centres, the applicant merely has to sign the generated form carrying his photograph and request number. The signed document is scanned and the information uploaded to the back-office system at the Tahsildar's office, after which intimation is sent to the Revenue Inspector and Village Inspector to proceed with the field inspection. After the clearances are uploaded and the digital signature of the Tahsildar is obtained, the certificate is issued to the applicant. By presenting the request number, the applicant can check the status of the application. To facilitate the speedy issue of certificates without the need to go through the Tahsildar or other officials, a villagers' database is maintained at the centres. The software required for the Nemmadi project has been provided by the National Informatics Centre (NIC), an organization set up by the central government for the use of Information and Communications Technology (ICT) tools in e-governance projects of the States. This organization is also responsible for building the software for the government's flagship e-governance project, Bhoomi.
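The certificate workflow described above (submit, scan and upload, field inspection, clearances, digital signature, issue), together with status lookup by request number, can be sketched as a small state machine. The status names and `Tracker` class are illustrative, not taken from the NIC software:

```python
from enum import Enum, auto

class Status(Enum):
    SUBMITTED = auto()          # applicant signs the generated form at the kiosk
    UPLOADED = auto()           # signed form scanned into the back-office system
    UNDER_INSPECTION = auto()   # Revenue/Village Inspector field inspection
    CLEARED = auto()            # clearances uploaded
    SIGNED = auto()             # Tahsildar's digital signature obtained
    ISSUED = auto()             # certificate delivered to the applicant

# Legal forward transitions in the workflow
NEXT = {Status.SUBMITTED: Status.UPLOADED,
        Status.UPLOADED: Status.UNDER_INSPECTION,
        Status.UNDER_INSPECTION: Status.CLEARED,
        Status.CLEARED: Status.SIGNED,
        Status.SIGNED: Status.ISSUED}

class Tracker:
    """Status lookup keyed by request number, as the applicant would query it."""
    def __init__(self):
        self.requests = {}

    def open(self, request_no: str) -> None:
        self.requests[request_no] = Status.SUBMITTED

    def advance(self, request_no: str) -> Status:
        self.requests[request_no] = NEXT[self.requests[request_no]]
        return self.requests[request_no]

    def status(self, request_no: str) -> Status:
        return self.requests[request_no]

t = Tracker()
t.open("REQ-2007-0042")          # hypothetical request number format
for _ in range(5):
    t.advance("REQ-2007-0042")
print(t.status("REQ-2007-0042").name)  # ISSUED
```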


Advantages of the Nemmadi Project

The following are the major advantages of the Nemmadi project:

•	Increase in citizen interface with the government: The telecentres have become virtual offices of the government. By delivering various services, these centres are bringing the government closer to the citizens. Along with knowledge, information and benefits, the various development schemes and programs of the government are disseminated to a large number of citizens, especially those far away from the centres of governance.

•	Saving the time, energy and money of citizens: The pahani, or Record of Rights, Tenancy and Crops (RTC), is a highly valuable document in the hands of a villager who owns fragmented land holdings. To get this document, landowners hitherto had to trudge miles to the taluka office, spending a considerable amount of time and money. That is now a thing of the past: thanks to the Nemmadi project, the time, energy and money of citizens living in all 750 hoblis is being saved.

•	Reduction in the interruption of officials' work: Since the village telecentres also act as information providers to citizens on matters relating to the delivery of government services, officials are relieved of the interruptions caused by having to attend to citizens' enquiries.

•	Providing employment to educated unemployed youth of the area: Setting up the kiosks has been a challenging yet satisfying task, as it provides jobs to rural youth.

•	Offering private services to citizens: Besides providing government services, the telecentres facilitate delivery of a wide range of private services to the rural population, including sale of mobile connections, insurance services, applications for bank loans, employment applications, digital photography, telemedicine, computer education, access to exam results, photocopying, document preparation, writing services, and so on.

•	Use of digital signatures to reduce citizens' waiting time: Every RTC certificate has to be signed by the Village Accountant (VA) for authenticity. At present, documents are taken to the VAs for signature, which delays issuing the documents to citizens. The Government of Karnataka is making efforts to obtain digital signatures for the over 10,000 VAs across the State.

•	Cost-effective e-governance: Under the PPP model, out of the total Rs. 15 collected as the fee from a villager for an RTC document, the government gets Rs. 11 and the private partner, Comat Technologies, earns Rs. 4.

Barriers Crossed It goes without saying that a project of such dimensions would have faced number of challenges. Following are some of the problems faced and solutions applied: l

Despite Comat’s involvement in a number of rural projects, there were situations that occasionally created trouble. Nemmadi was a completely different opportunity because it not only required Comat to use technology in the rural areas but also enable the locals to use it.

580

Key Drivers of Organizational Excellence

l

There were problems relating to shipping, equipment and people within planned schedules. Getting the technical installation crew to the locations was another issue, and they took some time to settle down. In order to test out each of these kiosks, they had to make sure that they identify the right location.

l

Not every building that the team liked could carry the dish. So, they had put together a checklist for our team, with which it could identify the right place and then negotiate the lease agreement. It was only after this process that the rest of the team and equipment could be shipped.

l

The large distances also made planning difficult in many cases. The team would plan for 3-4 kiosks to be operational in one day. But in reality, in most cases, they were able to do only one. In some cases, rework was necessary.

l

The issues that came up in the first 200 kiosks were the biggest part of the learning. In several cases, the distance from a rooftop to the kiosk was so much that wiring cable was not available. Or, roofs could not accommodate a dish.

l

Managing attrition at the operator — as well as project level — was a huge problem. There would be no prior notice, and the person would just not turn up for work. Motivating people to return to work every morning took a lot of the team’s energy. There were also problems due to the complexity of plans for setting up deployments.

l

More importantly, there were technological challenges to handle. Since this is a multitiered operation, multiple software was required. NIC had provided the software for the RTCs. But for the others, Comat team had to create the software. Comat had developed GSI that provided a common platform for delivery of all RTCs.

•	The Comat team needed a reporting system so that they could get constant feedback on the daily problems faced by the remote kiosks. They developed an asset tracking system by which they can track consumables, such as printing paper, as well as the cash coming in as payments. The process of cash deposit had to meet specific requirements.

•	Another huge problem Comat faced was breaks in connectivity. Any break in connectivity affects the RTC printout, and they have to keep track of all the papers. At the end of every month, they have to give the government a report listing the RTCs that have been issued, including those damaged due to communication failures.

•	To address the security and authenticity issues caused by interrupted prints, measures such as holograms that establish a document’s origin and authenticity have now been introduced. In addition, every page carries a numbered watermark. Each area is issued a fixed number of pages for printing RTCs; if a page is spoiled by a printer or connectivity problem, it has to be kept on record and returned to the state government.
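The page accounting described above can be sketched in code. This is a hypothetical illustration of such a reconciliation, not Comat’s actual system; the record fields, function name and numbering scheme are all assumptions.

```python
# Hypothetical sketch of the monthly page reconciliation described above:
# each area is issued a range of numbered, watermarked pages, and every
# page must end the month accounted for as issued, damaged, or unused.

def reconcile(allotted, issued, damaged):
    """allotted: set of watermark numbers given to an area;
    issued: numbers on successfully printed RTCs;
    damaged: numbers on pages spoiled by printer/connectivity failures."""
    unaccounted = allotted - issued - damaged            # blank stock on hand
    anomalies = (issued & damaged) | ((issued | damaged) - allotted)
    return {
        "issued": len(issued),
        "damaged_to_return": sorted(damaged),            # returned to the state
        "unused": len(unaccounted),
        "anomalies": sorted(anomalies),                  # must be investigated
    }

report = reconcile(set(range(1001, 1011)), {1001, 1002, 1003}, {1004})
```

Running the sketch on ten allotted pages with three issued and one damaged yields six unused pages and no anomalies, mirroring the requirement that every numbered page be accounted for.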

Awards Received

•	Nemmadi won the “Oscar of Asia’s Public Sector IT” by being awarded Asia’s premier Government Technology Award. Mr. M. N. Vidyashankar, Secretary, Department of IT, BT and Science and Technology, said that Nemmadi was adjudged the best among as many as 212 projects from 14 different countries at the Government Technology Award presentation ceremony held at Phuket, Thailand, in November 2007. The Government Technology Award was instituted by the Public Sector Technology and Management Magazine, based in Singapore, under the category of ‘Digital Inclusion’.

•	Nemmadi won the Microsoft e-Governance Award at the Microsoft Government Leadership Summit 2007, held in New Delhi, for being the best IT project for rural development. This annual award recognizes the most impactful e-governance applications in the country. The theme for the awards that year was ‘Reaching out through ICT’, and the nominated projects included those developed by state governments, the central government or private companies in partnership with Microsoft. The projects were judged on several parameters, including effectiveness in delivering project objectives, optimization in delivering government services, innovation, relevance and scalability.

•	Nemmadi also won the Silver National Award for e-Governance for ‘Exemplary horizontal transfer of ICT-based best practice’ at the 11th National Conference on e-Governance at Panchkula, Haryana, during February 7-8, 2008. This award is given by the Department of Administrative Reforms & Public Grievances, Government of India, jointly with the Department of Information Technology, Government of India.

What Next?

Expectations have risen multifold. For example, villagers resent having to spend two whole days to collect a caste certificate, even though the village accountant used to take a week for the same task. Villagers now want the certificate in one day! The success of the project lies in people accepting change in its real form. Nemmadi processes have been optimized for time: one SLA states that no paper can be kept pending for more than two days, and thereafter requests are handled on a first-come-first-served basis. But rural residents will take time to accept this change.

Caste and income certificates are yet to be incorporated into the database at the Nemmadi telecentres. Ten years ago, the village accountant had all this information at his fingertips; today, this static data has still not been digitized.

Another challenge the government faces is the quantum of adjustment required. Dealing with a PPP is not easy, because the corporate sector does not understand the depth of government processes in their entirety, just as government officials fail to understand the nuances of corporate procedures.

Nemmadi continues to develop plans for the near future. The number of services will be increased and more utility payment options will be offered to the rural populace. Further, 3i Infotech plans to add financial services to be sold from the Namma RBCs. The opportunities to make the project serve multiple purposes are immense.

From a private partner’s perspective, there could be adequate learning and hence improvement, but murmurs about the level of enthusiasm from the government also need to be addressed. In other states, better backup support has been provided to the project implementation and infrastructure teams sent by the private partners; in Karnataka, it can still be better. This, again, is a learning experience, and one from which the e-governance experience in India only stands to benefit.


CONCLUSION

Improving governance is a part of the development process. It is argued that corruption can be curbed by systemic changes in governance that introduce participation, transparency, accountability and probity in administration. The right to good governance is also considered an essential part of the rights a citizen can expect from the government. The implementation of the present project in Karnataka, and of similar projects such as the Kerala Akshaya project, indicates that IT can be used successfully to deliver services, even to the rural population, more efficiently than conventional methods.

The Nemmadi programme has succeeded in Karnataka because of the State Government’s innovative launch-learn-innovate methodology. Each of the components for delivery of Nemmadi services has been thoroughly tested through pilot deployments, and the learning from the pilots has been incorporated in the solution. With the commencement of Nemmadi services in the entire state, the state government is confident that its vision of empowering society by providing direct access to government services at the doorstep of the citizen will be realized.

Conceptualization of e-Governance programs sometimes restricts the discussion to technology, such as the products to be deployed, server specifications, and so on. Technology choices can influence the success of e-Governance programs, but they are only enabling factors. The most important part of e-Governance involves the transformation of governance and the softer issues of training, hand-holding and change management. It is the design of these softer issues that determines the success or failure of e-Governance programs, and it is these difficult issues that have been successfully managed in the Nemmadi programme in Karnataka.
The movement from manual to electronic processes, supported by broadband technology in the rural areas of different states, reflects the successful efforts being made towards bridging the digital divide. Thus, the development of infrastructure facilities at the village level would go a long way in truly bringing IT to the doorstep of the common man.

References

Challa, Radhakumari (2008), Nemmadi: An E-Governance Project of the State of Karnataka, E-Business, ICFAI University Press, February, 57-63.

Joshi, Manojkumar (2006), E-Governance Initiatives in India, E-Business, ICFAI University Press, December, 55-61.

Bose, Jayshree (2007), E-Governance in India: Issues & Cases, ICFAI University Press, Hyderabad, 42-60.

Sharma, Pankaj (2004), E-Governance, APH Publishing Corporation, 12-25.

Prabhu, C. S. R. (2004), E-Governance: Concepts & Case Studies, PHI, New Delhi, 34-46.


63

Impact of GATS (General Agreement on Trade in Services) on Higher Education System (with Specific Reference to Professional Disciplines)

Ashwini Renavikar
Abhijeet Tarwade
V. V. Jog

India opened its gates to foreign universities through GATS in the year 2000, and the vibrations of change started to be felt. All the metro cities in India, in an attempt to equip themselves to face the competition, are undergoing tremendous change. This has manifold effects on many service sectors, such as banking, insurance, hotels, hospitality and tourism; the education sector was the last to join the list. The study undertaken here is an attempt to analyze the impact of GATS on the educational system in Pune. The study touches on areas such as the cost of education, the quality of education, the mode of teaching-learning, and interaction with other universities and industry.

INTRODUCTION

The General Agreement on Trade in Services (GATS), which came into force in 1995, is a multilateral agreement based on the premise that progressive liberalization of trade in commercial services will promote economic growth in WTO member countries. It provides legal rights to trade in all services except those (like defense) provided entirely by the government.

GATS has three parts. The first part is the Framework Agreement, containing 29 Articles; the second consists of national schedules that list a country’s specific commitments on access to its domestic market; and the third consists of a number of Annexes, Ministerial Decisions, Schedules of Commitment, etc. Presently, GATS covers 161 activities falling within 12 service sectors, education being one of them (Rupa Chanda, 2004).

Under GATS, member nations have obligations of two types: general and conditional. General obligations apply automatically to all member countries, regardless of the existence of commitments for any sector. These relate to Most Favored Nation (MFN) treatment, transparency, and the establishment of administrative reviews, procedures and disciplines. Under conditional obligations, each country has to identify, if it so wishes, the sectors/sub-sectors and modes of supply under which it is willing to make commitments (with limitations if it so desires), and to make commitments relating to market access and national treatment.

THE BASIC PRINCIPLES OF GATS

1. There has to be progressive liberalization, with the process being irreversible because of binding commitments on negotiated levels of market access.

2. Countries are free to decide which service sectors they wish to subject to market access and national treatment disciplines. In theory, if a country is unwilling or not prepared to open up a particular service sector, it can say so.

3. Under ‘Most Favored Nation (MFN)’ treatment, no discrimination can be made amongst members in the treatment accorded to service suppliers. The guiding principle is ‘favor one, favor all’.

4. Under the principle of ‘National Treatment’, nationals and foreigners must be treated equally: there can be no discrimination between national/local and foreign service providers. However, under certain conditions there can be limitations on National Treatment.

5. There has to be transparency: all policies relating to barriers to market access, and any discriminatory restrictions, have to be notified by the members (Powar, 2005).

6. GATS recognizes four modes of trade in services, namely Cross Border Supply, Consumption Abroad, Commercial Presence and Presence of Natural Persons. In the education sector, the four modes can be specified as follows:

Mode I: Cross Border Supply

Here the service is provided from a distance, through telecommunications or mail; this mode of supply is often referred to as ‘movement of the service’. Its essential feature is that the service is delivered outside the territory of the member making the commitment: the supplier is not present within the territory of the member where the service is delivered. The service may also be embodied in exported goods (i.e., supplied in or by a physical medium, such as a computer diskette or drawings). Under Mode I there are two possibilities of trade. The first is distance education offered by universities or by national open universities, whether through print, telecommunication, computer diskette or other means. The second relates to all education services supplied by providers other than universities or national open universities.

Mode II: Movement of Students

In this mode, students desirous of obtaining higher education degrees from countries other than their own move, on student visas, to the country where the services are provided. Under this mode, students from other countries may be permitted to undertake higher education in the host country on more liberal terms. The standard of higher education also needs to be improved, to bring it on par with, or above, the most preferred destinations of today.

Mode III: Commercial Presence

This refers to the actual presence of foreign investors in a host country. The important ways in which this mode can be activated are opening an institution or branch campus abroad, franchising, or twinning arrangements. Indian higher educational institutions may offer programs or qualifications abroad, and may allow outside educational institutions to offer their educational programs and qualifications in India.

Mode IV: Presence of Natural Persons

This mode covers natural persons who are themselves service suppliers, as well as natural persons who are employees of service suppliers (Ranjan).

Education Services presently has five sub-sectors, namely Primary Education Services, Secondary Education Services, Higher Education Services, Adult and Continuing Education Services, and Other Education Services. There is a proposal from the United States to add two further services, Training Services and Educational Testing Services, under Other Services.

The ground truth to understand is that GATS came into existence in 1995 and is here to remain. The basic intention of GATS is to remove barriers and have open trade amongst all countries (Vijender Sharma, 2002). At present 145 countries are members of GATS, and 48 of them have tabled proposals to open the education sector (Julia Nielson, 2004). Except for the fourth mode, the other three modes are widely applicable in the education sector as well.

An expert committee was formed, under the chairmanship of the senior scientist C. N. R. Rao, to evaluate the implications of opening the doors of India to foreign universities. The committee has recommended very stringent rules for evaluating proposals and for regulating the entry of such universities. GATS is going to open up a wide range of options for students in India, but the issue needs to be handled carefully. Indian universities are going to be compared with foreign universities, and the Indian Education Ministry has a key role to play in this matter. Making Indian universities strong enough to survive and face the competition from foreign universities, though it seems difficult, is the only way out (Chatterjee, 2002). Regulatory bodies such as the UGC and AICTE will have to be more cautious while implementing accreditation and quality assurance policies (Powar, 2002).

Objectives of the Study

1. To study the impact on the cost of higher education in and around Pune, post-introduction of GATS.

2. To study the impact on teaching-learning methodology in higher education in and around Pune, post-introduction of GATS.

3. To study the impact on the quality of students taking higher education in and around Pune, post-introduction of GATS.

4. To study the impact on educational policies for higher education in and around Pune, post-introduction of GATS.


HYPOTHESIS

There has been a substantial impact of GATS on the cost of education, the method of teaching-learning, and educational policies in higher education (with specific reference to professional disciplines) in Pune.

Sample Design

Data for the study was collected from different stakeholders in the education system, such as directors of institutes, teachers, industry representatives and students. In addition, detailed interviews with directors and members of the governing bodies of institutes helped us understand their views about GATS and its impact on the educational system. The recent declaration by Oxford University about opening an extension in Pune further justifies the selected geographical scope.

Sample Selection

Pune city was chosen as the geographical scope of the study for the following reasons. Pune is likely to play the role of an educational hub, not only in India but in Asia as well. A highly reputed, five-star university, the University of Pune can take a steering position in suggesting and monitoring new educational policies in Pune. Established in 1948, the University of Pune has to its credit a record of attracting the maximum number of foreign students, and Pune is also termed the ‘Oxford of the East’. Though at present very few institutes in Pune have either tie-ups with foreign universities or distance learning facilities for foreign university degrees, and the number of full-fledged foreign universities operational in Pune is probably nil, in the near future there is likely to be a considerable increase in the number of such universities. The study undertaken is an attempt to find out the possible reactions to the opening of the educational sector to free trade.

Scope

The scope of the study covers professional disciplines such as management, engineering, medicine, biotechnology, information technology and law.

Methodology

Primary Source

Data required for the study was gathered through questionnaires and interviews of the stakeholders in the education system and industry representatives.

Secondary Source Previously published research work related to above mentioned area and articles published in different news-papers and magazines.

ANALYSIS AND MAJOR FINDINGS

The study revealed some surprising outcomes. It seems that Puneites are mentally prepared to withstand the competition that lies ahead. Educational institutes in Pune are grooming themselves to face the challenges likely to arise on account of the entry of foreign universities into India under GATS.

1. 62% of the respondents suggested that there is a rise in the number of short-term courses in Pune. This is on account of the need for more concentrated, job-oriented courses to withstand the competition. Though education, as a service industry, is very much a part of the process of globalization, it is very likely to throw up altogether different academic needs, and resources to fulfil them (Sagar, 2005). The need for constant updating in terms of new technologies has led to an increased number of existing employees joining evening and night schools, which in turn has increased the number of evening and night colleges.

Figure: Whether institutes should adopt a corporate look (Yes: 92%; No: 5%; Can’t say: 3%)

2. The next major outcome of the analysis is that institutes running professional courses are placing great emphasis on high-tech infrastructure, including Wi-Fi, large seminar halls and comfortable accommodation on campus, to attract good students. Further, 92% of the respondents recommended that institutes should adopt a corporate look.

3. A major chunk, i.e. 84%, of the respondents agree that regulatory bodies such as the UGC, NBA and NAAC may play a major role in devising policies and guidelines that will help Indian universities withstand the competition from foreign universities and maintain the quality of education.

4. The increasing cost of education has always been a sensitive issue, and during the last 3-4 years there has been a considerable rise in fees. Around 59% of the respondents say that there has been a 20-40% rise in fees, especially for professional disciplines; 27% say the rise is between 40-60%, while around 15% of respondents say the rise is above 60%.


Figure: Percentage rise in fees for professional courses (Below 20%: 0%; 20-40%: 59%; 40-60%: 27%; 60-80%: 11%; More than 80%: 3%)

We can relate this to the increasing amounts spent on building infrastructure, training and placement efforts, and highly qualified teaching staff, especially post-introduction of GATS. This also suggests that there is a need for frequent updating of curricula; many institutes are trying for autonomous or deemed status to have this freedom.

5. Around 7% of the students in India get an opportunity to go for higher education. Those who are deprived of this facility, for whatever reason, are interested in joining foreign university courses.

Figure: Tie-ups with foreign universities (Yes: 79%; No: 13%; Can’t say: 8%)

Institutes are also getting interested in having tie-ups or affiliations with foreign universities; 79% of the participants in the survey support this view. A step further, the study also suggests that most foreign universities are better than Indian ones, in that they deliver more relevant content, have better faculty, possess modern infrastructure and offer practice-oriented training. The participants in the survey, both teachers and students of management programs, insisted on better placements and interaction with industry.

6. With the whole world becoming more and more IT-savvy, educational institutes are no exception. IT has opened many alternative modes for the teaching-learning process: e-learning, video conferencing and CBT (Computer Based Training), to name a few. The contact-learning mode, with its chalk-and-talk method of delivering knowledge, is being accompanied by IT-enabled learning.

Figure: Impact on the teaching-learning mode (Yes: 74%; No: 8%; Can’t say: 18%)

74% of the participants agree that the upcoming trend is one of learning rather than teaching, through audio-video electronic modes, which is more appreciated by learners. This is not only more effective but also caters to the varying needs of learners in terms of content, pace of learning, evaluation, etc. Another area being exploited heavily is training in soft-skills development: students on the verge of placement, as well as employees, are taking special training in soft skills to make themselves more acceptable for changing job profiles. However, the thought of a total replacement of contact learning by e-learning was firmly rejected.

CONCLUSIONS

The hypothesis was tested basically on the basis of percentage values. It is supported by the major findings, which also hold some clues for institutions providing higher education in professional disciplines, specifically in Pune. To summarize briefly, it is not possible to avoid the entry of foreign universities, which is part of the effect of globalization, but there is a way to handle it more effectively. A better strategic approach can help educational institutes survive, grow and maintain the quality of education in spite of the presence of foreign universities. Pune, in particular, is undergoing a phase of metamorphosis, equipping itself more and more to handle the competition from foreign universities. When the world is looking for better options in all services, a monopoly in the educational field is a rare possibility, a fact that we should understand and accept.
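The percentage-based hypothesis testing mentioned above can be made concrete with a one-sample proportion z-test. The study does not report its sample size, so the n = 100 below is purely an assumed figure for illustration, not a number from the survey.

```python
# Hedged sketch: does an observed "Yes" share (e.g. the 92% favoring a
# corporate look) differ significantly from an even 50/50 split?
import math

def proportion_ztest(p_hat, n, p0=0.5):
    """z statistic for an observed proportion p_hat from n responses,
    tested against the null proportion p0."""
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
    return (p_hat - p0) / se

z = proportion_ztest(0.92, n=100)       # n is an assumption
significant = abs(z) > 1.96             # 5% two-sided level
```

With the assumed n = 100, z works out to 8.4, far beyond the 1.96 cutoff, so the 92% figure would be judged significantly different from chance; with a much smaller real sample the conclusion could weaken.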

References

Chanda, Rupa (2004), GATS: Concerns of Opening up Higher Education, The Financial Express, December 14, 2004.

Powar, K. B. (2005), Country Paper on India: Implications of WTO/GATS on Higher Education in India, UNESCO Forum Occasional Paper Series, Paper No. 9, Implications of WTO/GATS on Higher Education in Asia & the Pacific, 27-29 April 2005, Seoul, The Republic of Korea, Part II, pp. 130-150. (http://unesdoc.unesco.org/images/0014/001467/146742E.pdf)


Sharma, Vijender (2006), Higher Education in India and GATS: A Disastrous Proposal, Ganashakti Newsmagazine, November 2006.

Powar, K. B. and Bhalla, Veena (Spring 2001), International Providers of Higher Education in India, International Higher Education, Number 23. URL: http://www.bc.edu/bc_org/avp/soe/cihe/newsletter/News23/text006

Nielson, Julia (2004), Bridging the Divide: Building Capacity for Post-secondary Education through Cross-border Provision, UNESCO/OECD/Australia Forum on Trade in Educational Services, Sydney, 11-12 October 2004.

Sharma, Vijender (2002), WTO, GATS and Future of Higher Education in India - II, People’s Democracy, Vol. XXVI, No. 07.

Chatterjee, Rajat (2002), Overcoming Weaknesses, Analyzing Threats and Exploring Opportunities, University News.

Powar, K. B. (2002), WTO, GATS and Higher Education: An Indian Perspective, University News.

Ranjan, Nilay (2000-01), Selected Educational Statistics, MHRD.

(Nilay Ranjan is the Knowledge Coordinator - Education at OneWorld South Asia.)


64

Strategies for Training in Educational Institutions

Raja K. G. Paramashivaiah

One of the important sub-systems within comprehensive human resource management is training and development. Through this activity, employees who have entered organizational domains with diverse backgrounds and orientations are brought in line with the requirements of the organization, so that organizational tasks are accomplished, which in turn assists organizations in moving in the desired direction. Although educational institutions have an impressive number of training establishments, their training activities have not been able to cater effectively to the needs of the industry. Over the years, unfortunately, the gap between training needs and training efforts has widened, attracting major criticism of training activities. A closer examination of the state of affairs regarding training in institutions indicates that the real reasons for this lie elsewhere.

INTRODUCTION

One of the important sub-systems within comprehensive human resource management is training and development. Through this activity, employees who have entered organizational domains with diverse backgrounds and orientations are brought in line with the requirements of the organization, so that organizational tasks are accomplished, which in turn assists organizations in moving in the desired direction. Although educational institutions have an impressive number of training establishments, their training activities have not been able to cater effectively to the needs of the industry. Over the years, unfortunately, the gap between training needs and training efforts has widened, attracting major criticism of training activities.

A closer examination of the state of affairs regarding training in institutions indicates that the real reasons for this lie elsewhere. Some of them include: lack of conviction about the concept of training; lack of top management support; improper selection of trainers; ineffective use of training methodologies; and undesirable pressure for quantity rather than quality. Some of the problems of training arise because training does not receive due attention from top management, and identification of training needs is not done systematically; as a result, the training system and operational requirements tend to move in different directions. No systematic effort is made to select the training faculty, there is a lack of opportunity for career advancement for faculty members, and insistence on the number of programs and the number of employees to be trained works against the quality of training.

LITERATURE REVIEW

Warren (1979) suggested the following criteria for deciding on an appropriate method or approach: training criteria (suitability of the approach), trainee response and feedback, trainer skill, approximation of the job, adaptability to trainee differences, and cost.

Peter R. Sheal (1989) lists ten principles of adult learning that should be taken care of while designing training programs. Adults learn better: in an informal, non-threatening environment (create such an atmosphere); when there is a need to learn or they want to learn (motivate them, identify needs, develop need-based programs); when their individual learning needs and styles are catered to (use a variety of methods, techniques and tools); when their previous knowledge and experience are valued and used (use group techniques, case studies, etc.); when there is an opportunity for them to have some control over the learning content and activities (use flexible training approaches, CBT, etc.); through active mental and physical participation in the learning activities (use activities rather than lectures alone); when sufficient time is provided for the assimilation of new information, the practice of new skills or the development of new attitudes (do not overload content into a short period; give sufficient time for interaction and discussion); when they have opportunities to successfully practice or apply what they have learnt (provide hands-on training and application-level activities); when there is a focus on relevant and realistic problems and the practical application of learning (emphasize the ‘what’ and ‘why’ of training and explain how it relates to the job performed by the trainees); and when there is guidance and some measure of performance, so that learners have a sense of progress towards their goals (use assessment techniques to provide feedback).

Monappa and Saiyadain (1996) opined that training refers to the teaching/learning activities carried on for the primary purpose of helping members of an organization acquire and apply the knowledge, skills, abilities and attitudes needed by the organization; it is the act of increasing the knowledge and skill of an employee for doing a particular job. They also concluded that the most important aspect of the training process is the most neglected and least appreciated: the evaluation of training. All training plans are alluring and attractive, as they involve assessment of needs, interaction between participants and faculty, thrashing out of issues, finding solutions, and so on. However, whatever has gone into the design and implementation of a program can be justified against its objectives only by systematic evaluation of the training program, which establishes whether it has successfully delivered the inputs to the participants, who in turn have been able to absorb them and to develop their capabilities and skills for use in their day-to-day work situations.

Ford (1999) said that to develop a training program, we must know what we intend to change and what the trainees should be able to do after training that they cannot do at present. Objectives are statements of specific outcomes to be achieved by training (Ford, 1999); the term is often used synonymously with goals and aims.

Leslie Rae (2001) identified several skills that effective trainers should have: organizational knowledge; knowledge of management and operational roles and functions; training knowledge and skills; program preparation skills; sensitivity and resilience; people skills; commitment; mental agility and creativity; self-awareness and self-development; sharing; credibility; humor; and self-confidence.


STRATEGIES FOR TRAINING IN EDUCATIONAL INSTITUTIONS

Institutions should divide their training focus into two dimensions: one to run the day-to-day operations of the institution, and one to radically change how the institution operates. The focus of Run the Business (RTB) training programs should be to enhance or grow current services and strategies through effective benchmarking of training needs. Change the Business (CTB) objectives should promote innovative changes in institutional practices to achieve breakthrough improvements in operations.

Identify key factors

The institution can then identify the key factors responsible for performance excellence. Internal and external sources of data and information should be gathered and analyzed during initiation of the training process; they serve as the starting point. These sources include:

1. Student needs: Information related to student and market needs and opportunities should be obtained through student satisfaction surveys and feedback from student complaints captured in the Voice of the Student (VOS) system. A training schedule should then be chalked out by the training department.

2. Competitive and collaborative environment: The institution should gather comparative training requirements from competitors. Leading indicators should be reviewed weekly through service measures and analyzed monthly by the training department.

3. Technology and innovations: Technological advancements and innovative breakthroughs are critical inputs in the educational sector. The institution can gather information on technology and innovation and conduct training programs regularly.

TRAINING OBJECTIVES AND TIMETABLE

The institution can identify its top 5 training objectives and the challenges for each. These objectives may be to fill the gaps identified through the VOS, and the institution can make senior leaders accountable for leading the accomplishment of each objective. Integration and deployment of the training plan are hardwired through this accountability, which ensures that responsibility for accomplishing objectives is clearly identified.

Addressing training challenges

The institution can align organizational objectives with training challenges in the training planning process to ensure that all training challenges are addressed. While the RTB objectives focus on improving operations, the CTB objectives are more strategic in nature and have a greater impact on addressing the training challenges.

Key short- and longer-term action plans

The institution should document key organizational short- and long-term action plans to support training objectives. Action plans are aligned with training objectives through prioritized goals that cascade throughout the organization. The institution has to carefully plan new programs to ensure they are implemented in a timely manner.


Training may include the following issues:

- Focus on recasting training systems and processes.
- Leadership and executive development for top-level positions.
- Modern training and development strategies for faculty members.
- Convincing and exciting the Board regarding the value of human capital and strategies to focus on “people development”.
- Critical review of HRD functions at regular intervals.
- Creating a Knowledge Management infrastructure, to be used as a powerful tool for harnessing the latent talent in people.
- Creating a learning organization.
- Talent management: scouting, identification, nurturing and rewarding.
- Reviewing the reward mechanism and installing a suitable reward mechanism for performers.
- Modern HR methodologies for evaluation and assessment of performance and potential, such as ‘Assessment Centers’.

CONCLUSION

Training always helps with transitions, whether they happen at the entry level or the middle level of a career. The transition is very well noticed in present-day education, which demands business acumen coupled with updated knowledge of information technology. Most educational institutions are aiming to meet these unfolding challenges. So it is high time that managements took the necessary steps to fill the information gap that is likely to arise, or has already been noticed, so that students are able to meet the future confidently.

References

Warren, M. W. (1979), Training for Results, 2nd ed., Addison-Wesley: Reading, MA.
Sheal, Peter R. (1989), How to Develop and Present Staff Training Courses, Kogan Page: London.
Gardner, H. (1993), Frames of Mind: The Theory of Multiple Intelligences, Basic Books: New York.
Monappa and Saiyadain (1996), Personnel Management, Tata McGraw Hill: New Delhi.
Ford (1999), Bottom Line Training, Gulf Publishing: Houston.
Rae, Leslie (2001), Develop Your Training Skills, Kogan Page: London.


65

Impact of e-Commerce in India Mili Singh

The revolution in information technology and communication systems will completely change the way companies do business, but it could still take 5-10 years before e-commerce catches on in India. E-business benefits will clearly impact inter-business and intra-business communication, but large-scale buying and selling through the Internet in India is still a few years away. The biggest challenge that new technologies and the Internet pose will be the protection of confidential information on product formulations and business strategies, which may get leaked to competitors. Business-to-business relationships will change, organizations will undergo internal change, and media will play a greater role. This chapter reviews the marketplace and the organization to understand the implications of the technology transition on businesses in the coming years. Consumer relationships will change as consumers become more informed; they will expect greater transparency and better quality products and service. This is a technology for the consumer and will bring in 'change'. Whether the change will be for good or bad will be dictated by our ability to manage change.

INTRODUCTION

The past two years have seen a rise in the number of companies embracing e-commerce technologies and the Internet in India. Most e-commerce sites have been targeted towards NRIs, with gift delivery services, books, audio and video cassettes, etc. Major Indian portal sites have also shifted towards e-commerce instead of depending on advertising revenue. The web communities built around these content-driven portal sites have been effectively targeted to sell everything from event and movie tickets to groceries and computers. The major players in these services are Rediff on the Net (www.rediff.com) and India Plaza, which started shopping sections after their highly successful content sites generated web visitors. In spite of RBI regulation and low Internet usage, e-commerce sites have popped up everywhere hawking things like groceries, bakery items, gifts, books, audio and video cassettes, computers, etc. None of the major players have been deterred by the low PC penetration and credit card usage in India, and they have tried to clone the worldwide success of online commerce. BPB Publications went online selling its complete range of computer books about 5 years ago; it might not have had the success of either Amazon.com or Barnes and Noble, but they definitely


have promoted the cause of e-commerce in India, with anywhere from one to five web sites like India Bookshop coming online (Bailey & Bakos, 1997). This is not to say that the e-commerce scenario has been bad in India, as highly successful e-businesses like Baba Bazaar and India Mart have proved. Indian banks too have been very successful in adopting EC and EDI technologies to provide customers with real-time account status, transfer of funds between current and checking accounts, and stop-payment facilities. ICICI Bank, Global Trust Bank and UTI Bank have also put their electronic banking facilities over the Internet in place for the upcoming e-commerce market, and Speed Post also plans to clone the Federal Express story with online package status at any moment in time. The future does look very bright for e-commerce in India, with even the stock exchanges coming online, providing online stock portfolios and status with a fifteen-minute delay in prices. The day has come when, within RBI regulations, we are able to see stock transfer and sale over the Net through specialized services like Schwab and E-Trade. Though security and encryption are proven technologies for the transfer of funds over the Internet, the Indian Government still has problems with digital signatures and verification processes over the Internet. This, combined with RBI norms and regulations, has proved to be a major hurdle for e-commerce, even though VSNL, India's monopolistic ISP, does want to jump on to the electronic transaction bandwagon. With the advent of private ISPs and India's new and positive attitude towards IT, including the Prime Minister's IT policy, the future of commerce in India is very positive.

PRIOR RESEARCH

Experts have argued that the low cost of personal computers, a growing installed base for Internet use, and an increasingly competitive Internet Service Provider (ISP) market will help fuel e-commerce growth in Asia's second most populous nation. Dataquest, an Indian computer magazine, has found that the rise of Indian Internet subscribers will ultimately depend on the proliferation of network computers and Internet cable; cyber cafes will also continue to provide low-cost access. The Internet arm of the textile giant S. Kumars Group (SKG) is developing an extensive network of Internet kiosks to facilitate e-commerce (Malone, Yates, & Benjamin, 1987). Currently, the lion's share of e-commerce revenues is generated from an ever-expanding business-to-consumer (B2C) rather than business-to-business (B2B) market. In India, B2C transactions have taken the form of online purchases of music, books, discounted airline tickets, and educational resources. E-commerce in India over the next few years could be B2B if the correct environment were developed. The B2B market is expected to grow following greater investment in the Indian telecommunications infrastructure, and once intellectual property rights and legal protections for commerce over the Net are addressed. There are still enormous challenges facing e-commerce sites in India. The relatively small credit card population and lack of uniform credit agencies create a variety of payment challenges, and distribution of online purchases could be complicated by India's complex postal system and an uncertain regulatory environment. Nonetheless, everyone from Yahoo, Microsoft, and IBM to local carpet vendors, hotels, and some 300 Indian ISPs is trying to claim a slice of the rapidly emerging Indian e-commerce market.


E-MARKETPLACE

This study focuses on Internet-based 'e-marketplaces' and on the ways in which firms may use the opportunities opened up by the Internet to do business. Many analyses of e-commerce mainly examine technology impacts; in contrast, this study examines the features and services being provided at e-marketplaces on the World Wide Web. We assess how these trading forums were operating and how firms in developing countries might be able to conduct business using them. The examination of e-marketplaces is supported by field research: garments and horticulture firm managers were interviewed about their use of e-marketplaces and about their use of the Internet in their businesses. How were they making use of the Internet to buy or sell products? How were the new opportunities for communication being used to change the way they were doing business with buyers and suppliers in their global supply chains? In spite of the limited empirical reach of this study, the results throw considerable light on the prospects for e-commerce in India. Our results confirm the crucial importance of empirical investigation of how e-commerce is actually being developed and used. As of 2004, very little e-commerce using 'many-to-many' e-marketplaces was found in our sample of firms; the leading reason cited by businesses for not conducting transactions electronically was the view that electronic commerce was not suited to the nature of their business. However, this negative picture is not the only one. Some forms of e-commerce are opening up opportunities for some types of firms (Kalakota and Whinston, 1997). The Internet is having an impact on the ways that firms do business, particularly on the way firms handle relationships with their existing trading partners. The main effect of the use of the Internet is to make communication with existing trading partners cheaper and quicker; it was not being widely used to forge relationships with new trading partners.
These conclusions have substantial implications for policy makers who are seeking to maximize the benefits of e-commerce for firms in developing countries. The emphasis of most 'e-readiness' strategies is on sophisticated technology, legal infrastructures, and awareness and training. Most of these strategies presume that e-commerce occurs in 'many-to-many' e-marketplaces and that exporting firms are constantly searching for new international trading partners. Our results show that firms in India are using some types of e-commerce applications, but their primary uses are to strengthen existing business relationships and to deepen integration between suppliers and buyers. This has very important implications for the policy framework needed to realize some of the expected benefits of e-commerce for Indian firms (Turban et al., 2000). E-commerce marketplaces are online spaces where many buyers and sellers can come together in one trading community and obtain sufficient information to make decisions about whether to buy or sell. Many-to-many e-markets must be supported by complementary business functions: if buyers and sellers are to make decisions to transact online, then sufficient information must be provided online for the transaction to be completed, and systems must be in place to arrange binding contracts and payment. E-marketplaces and the implementation of their business models rely to a very large extent on technology infrastructure. The market maker must possess, or have access to, a technology that is capable


of handling the full range of commercial processes, from ordering to order fulfillment and settlement. The technology must support transactions involving large numbers of users over the Internet and be capable of handling complex business practices, user relationships and integration with third-party commercial applications. Further, effective online business also needs the complementary services required to complete transactions. The types of services that may be offered by marketplaces include: the ability to process payments; credit financing; credit validation; handling of tax laws and trade restrictions; integrated business management accounting; online exchange of information and transaction-supporting documents, such as invoices and shipping documents; import/export compliance; online linkage to transportation, logistics and other third-party services linked to purchases; support for multi-currency and multi-language transactions; tariff and tax data collection and management; automated landed-cost calculations; and customs compliance and documentation. The force of this analytical vision was reinforced by business trends. At the end of 2000, investments in the Internet and its underlying infrastructure were increasing rapidly, and considerable investments were also being made in e-marketplaces as a form of e-commerce. In a scramble for critical mass, first movers were soon followed by imitators. The competing firms invested heavily in pursuit of the goal of being the leading global or regional provider of e-marketplaces in particular lines of business. As part of the process of attracting a client base, these firms had a vested interest in exaggerating the potential size of the market, playing down the obstacles to trading online and over-estimating the growth of their businesses.
At many conferences about e-commerce during this period, multiple presentations by representatives of firms building e-commerce businesses would each claim that they were aiming to be the number one portal or e-marketplace in a particular business area. The firms building e-marketplaces were themselves supported by firms developing support services and by specialist financial investors seeking to build up e-commerce portfolios. The hype around e-commerce spread to developing countries. The message was that significant parts of global trade would switch to e-commerce, and those firms and countries that did not jump on the bandwagon would be marginalized. Taking part in the global market lies in developing 'e-markets', electronic meeting places for buyers and sellers with defined rules for e-purchasing, e-bidding and e-selling. A wider global reach opens new markets for products globally, while elimination of trading inefficiencies will result in better prices. To get a slice of this business, one has to commence online trading exchanges, which create tremendous efficiencies such as reducing processing costs by up to 90%, reducing cycle time by up to 80% and improving staff productivity by between 20% and 30% (Kalakota and Robinson, 1999).

Policy implications of the optimistic e-commerce model

1. E-commerce is essential for market access and export growth. Indian governments must give priority to ensuring that the conditions for the participation of their businesses are met.

2. E-commerce transactions are complex and information-intensive. The ICT infrastructure must be sophisticated enough to handle the data required; a quantum leap in telecommunications capabilities may be required.

3. Governments should ensure that telecommunication services are modern and efficient in order to lower the prices of network usage through effective competition and market liberalization. Governments should also reduce tariffs to support trade in ICT hardware and software.

4. A legal framework to support electronic transactions has to be in place in order for firms to buy and sell online. This framework must include effective authentication and certification mechanisms (i.e. digital signatures and secure settlement procedures) and a means of protecting against online fraud as well as achieving redress in cases where disputes arise.

5. Significant amounts of business will migrate to e-marketplaces with complex requirements. Governments should support investment in human resources.

6. Governments must ensure that national regimes for taxation, security and privacy protection are compatible with international governance regimes.

CONCLUSION

E-commerce in India is still 5-10 years away. The revolution in information technology and communication systems has completely changed the way companies do business, but it could still take 5-10 years before e-commerce catches on in India. E-business benefits clearly impact inter-business and intra-business communication, but large-scale buying and selling through the Internet in India is still a few years away. The biggest challenge that new technologies and the Internet pose is the protection of confidential information on product formulations and business strategies, which may get leaked to competitors. Business-to-business relationships will change, organizations will undergo internal change, and media will play a greater role in the next millennium. There will be a need to review the marketplace and the organization, and to understand the implications of the technology transition on businesses. Consumer relationships will change as consumers become more informed; they will expect greater transparency and better quality products and service. Interestingly, media will play a greater role: it will take on far greater responsibility and will influence consumers' lifestyles and business relations. The coming years will see greater and faster information availability, and media will be more interactive. But though ways of communicating with employees may change, companies should not forget that it is the human touch which creates teams. Technology should not end up being intrusive in the day-to-day functioning of managers as facilities such as video-conferencing and regular interfaces through the latest communication technologies increase. Technology has also changed education and training in corporates, but untrained users, information theft and incorrect usage of technology should be watched out for. This is a technology for the consumer and will bring in 'change'. Whether the change will be for good or bad will be dictated by our ability to manage change.

References

Bailey, J. P., & Bakos, Y. (1997), An Exploratory Study of the Emerging Role of Electronic Intermediaries, International Journal of Electronic Commerce, 1(3), 7-20.


Malone, T., Yates, J., & Benjamin, R. (1987), Electronic Markets and Electronic Hierarchies: Effects of Information Technology on Market Structure and Corporate Strategies, Communications of the ACM, 30(6), 484-497.
Kalakota, Ravi and Whinston, Andrew B. (1997), Electronic Commerce, Addison-Wesley: Reading, MA.
Kalakota, Ravi and Robinson, Marcia (1999), E-Business: Roadmap for Success, Addison-Wesley: Reading, MA.
Turban, Efraim; King, David; Chung, Michael; and Lee, Jae Kyu (2000), Electronic Commerce: A Managerial Perspective, Prentice Hall: Upper Saddle River, NJ.


66

Resurrecting the Morning Meeting Bharti Venkatesh Deepa Chaterjee

The current opinion on 'meetings' is quite discouraging: meetings that are endless discussions about pointless strategies, results and targets need to be done away with. What an organization can best do to improve communication is to involve the top management in its meetings. The purpose is to move away from 'territory protection' and mismanaged communication. The top management in general moves away from day-to-day challenges and focuses on the bigger picture. The present paper discusses the paradigms of The Morning Meeting (TMM), how TMM helps in resolving issues and conflicts arising in the organization, and how it bridges the gap between the top management and lower levels.

INTRODUCTION

Some organizations have instituted Fridays free of e-mail to encourage more face-to-face interactions. Is it time to rethink Monday morning meetings? By now, most organizations have re-set their clocks and confirmed that their computers also remembered to 'fall back' during the time change on Sunday. Even so, it is a safe bet that on Monday some early morning meetings got lost in the confusion as people continued adjusting to standard time, again. Can we blame it all on the time change (Linsky, 2006)? For most, morning meetings are problematic all year long. How many times have pre-lunch meetings with clients been rescheduled, or worse, forgotten (Bel Geddes, 1986)? Not only can it be difficult to jump back into work following the weekend, but clients are often getting hit with problems that have been building since the previous Friday. Also, it is almost impossible to confirm the meetings, or to get in touch with people in time to reschedule should something develop (Linsky, 2006). Now let us understand the paradigms of TMM. The Morning Meeting (TMM) is about communication, but embedded within it are norms and values that are critical for organizations that must deal with difficult issues and adapt nimbly to new situations: an openness to considering multiple perspectives, a willingness to share responsibility for finding creative solutions, and the discipline to move consistently from strategy to execution.


Morning meeting provides an arena where the distinctions that define social, emotional and academic skills fade, and learning becomes an integrated experience (Dexter, 1996). Morning meeting is a forum in which the entire range of skills, skills essential to academic achievement, must be modelled, experienced, practiced, extended and refined in the context of social interaction. It is not an add-on, something extra to make time for, but rather an integral part of the day's planning. The time one commits to morning meetings is an investment, which is repaid many times over. The sense of group belonging and the skills of attention, listening, expression and cooperative interaction developed in morning meetings are a foundation for every lesson, every transition time, every lining-up, every upset and conflict, all day and all year long. Morning meeting is a microcosm of the way we wish our organization to be (Forbes Magazine).

WHEN TMM IS NEEDED

When communication is stifled and turf protection is the order of the day, an organization's senior leadership team is less than the sum of its parts and cannot grapple with strategic and operational challenges most effectively (Dexter, 1996). Expertise and energy go untapped; less-than-frank communication sometimes means that team members do not know the full extent of one another's issues; and a lack of shared accountability leaves some unsure of what the problem is and how to resolve it (American Quarterly Review). In contrast, two qualities characterize high-functioning leadership teams:

1. Hard conversations happen: difficult issues move quickly from people's heads to the conference table.

2. Accountability is shared: individuals on the top team feel a responsibility to the organization as a whole, not just for their piece of the action.

To take senior teams to a new level of leadership, an organization needs to adopt a model of top-team communication called The Morning Meeting (TMM). TMM is a deceptively simple name for an intricately ritualised event that has delivered significant payoffs to the organizations that have put it into practice (Linsky, 2006):

1. Backbiting and turf protection are dramatically reduced.

2. Tough problems are addressed while they are still manageable.

3. Issues cannot be covered over.

4. People can no longer hide their issues.

5. Ownership increases.

WHAT TMM LOOKS LIKE

TMM in its purest form works every day, at the same time: the top team, between 6 and 20 people, both staff and line, assembles around a conference table, either in person or virtually. Also at the table are one or two others who either are responsible for an important current initiative or are valued for their area of expertise.


There is no preset agenda. The top man sits at the head of the table; he does not run the meeting, and everyone sits in the same place each day. Around the conference table on folding chairs, in a sort of gallery, are a handful of deputies and executive assistants (Bel Geddes, 1986). Sometimes the top man will have an issue or two to begin the meeting. More often, he defers to the person seated to his left, the No. 2 person, typically the chief of staff or chief operating officer, who starts things off and runs the meeting. When this officer's issues are fully discussed, the person seated to the left of No. 2 raises any issues of concern, and so on, moving clockwise around the table. Once everyone at the table has had an opportunity to speak, the gallery leaves and the top team members get a chance to go around the table again (Dexter, 1996). In this second phase of the meeting, executives discuss issues that demand a higher level of confidentiality. The entire meeting can take as little as 15 minutes or as long as two hours.

WHAT ARE THE GROUND RULES OF TMM

Following are the ground rules of TMM (Linsky, 2006):

1. Anyone can put anything on the table for discussion; it does not have to be related to one's own area of responsibility. All are expected to be willing to comment on every issue raised, even those that lie beyond their technical expertise or area of responsibility.

2. These are decision meetings, but issues are not just raised and resolved. Implementation plans are broadly outlined and agreed on, and internal and external communication strategies may be considered. Sometimes, with particularly sensitive issues, the exact language that everyone around the table is going to use is hammered out.

3. Once an issue is fully vetted, the top man determines the rule that will govern it. He decides whether he will be the one to make the final call, whether a particular individual or subgroup will make it, or whether it will be made by group consensus.

4. Changing one's mind, even in the middle of the conversation, is OK, even respected. Not having an opinion is not.

5. There are no arguments about factual questions. Participants are to get the facts and raise the subject at the next meeting.

HOW TO IMPLEMENT TMM IN AN ORGANIZATION

The TMM model is a flexible one. During crises, or when there is a transition in the organization, TMM is very effective. When things are running smoothly, meeting less frequently can deliver positive results. Division and unit heads can adapt the model to foster better decision-making and execution within their teams (Bel Geddes, 1986).

ADVANTAGES OF HOLDING TMM

Here are some of the most significant benefits seen in organizations that have adopted some version of TMM (Robert, 1997):

1. Backbiting, intra-team conflicts, turf protection and second-guessing are dramatically reduced. Everyone owns every decision made in the meeting.

2. Competition for face time with the top man goes away.

3. Crises can be addressed with detachment. When a vice president rushes into the CEO's office looking for help with a decision during a crisis, the CEO can ask that the issue be brought to the morning meeting, or get the group together to resolve it.

4. Team members feel a responsibility for the organization as a whole. Any problem that one team member has becomes a team problem, and thereby everyone benefits from the experience and insight of the entire team.

5. Difficult conversations are the norm, and tough issues do not fester until they explode but are addressed while they are still manageable.

6. Top team members do their homework better. Executives often feel their obligation ends with providing information, but when group accountability is the norm, executives are motivated to prepare more fully for discussions of business challenges.

CONCLUSION

TMM is a flexible model and need not be a daily affair. Depending upon the needs of the organization, members can schedule it for a specified day in a week or make it bi-monthly. While TMM involves the top management, executives and middle managers can adopt an informal model within their teams to enhance group decision-making. The aim of TMM is communicating hard issues, evolving new strategies, sharing opinions and, most of all, feeling responsible for the organization. Today, in organizations with more than a thousand people, TMM is held twice a week. Practically, a few things have changed, but the 'help sessions' initiated by a member are still valued. After every morning meeting, the members can have informal talks with each other for about ten minutes; these ten minutes of impromptu meetings set a great start for the day ahead. Most managers are scared to talk and express their views, and sadly, there are few who calculate the time and money lost. However, the value of getting everybody together in one room at one time, twice a week, to collaborate on how to better accomplish organizational goals is priceless.

References

Linsky, Marty (2006), The Morning Meetings: Best-Practice Communication for Executive Teams, Harvard Management Communication Letter, 3(2), Spring.
Bel Geddes, Norman (1986), Lecture Series on Pioneers of Industrial Design, Cooper-Hewitt Museum, Smithsonian Institution, New York, December 4, 1986.
Hamilton, Robert W. (2005), American Plastic: A Cultural History, Rutgers University Press: New Brunswick.
Trueman, Myfanwy (1998), Managing Innovation by Design: How a New Design Typology May Facilitate the Product Development Process in Industrial Companies and Provide a Competitive Advantage, European Journal of Innovation Management, 1(1), 44-66.
Paos (1989), Design in the Contemporary World, Pentagram Design: New York; Japanese translation: Tokyo.
Meikle, Jeffrey (2001), Twentieth Century Limited: Industrial Design in America, 1925-1939, 2nd ed., Temple University Press: Philadelphia.


67

Evaluation of Distance Education with Special Reference to Management
Vijay Kumar Pandey, Praveen Sahu, Krishan Kumar Pandey, Gaurav Jaiswal, Vikhyat Singh

India is a country of villages, and 72% of the population resides in villages. It is very difficult to provide adequate educational infrastructure in every village. As per the 2001 census, only 65% of the population is literate. To provide education in India, distance education has to play a vital role. The other side of the story is that India has 350-plus universities and deemed universities and over 14,000 medical, engineering, and arts and science colleges; in fact, India produces the second largest pool of trained manpower in the world. Despite this, the quality of higher education in India has left much to be desired (Suresh Kumar, 2006). This study is aimed at evaluating the quality of distance education with special reference to management education.

INTRODUCTION Distance education is a mode of teaching and learning that has grown significantly over the past 20 years, as indicated by the number of higher education institutions that offer courses and/or full degree programs via distance learning. The then Chief Minister of Bihar had launched an education campaign to educate village residents through the Charwaha Vidyalaya, an innovation in providing education to villagers at their workplace. Distance education can likewise provide education at the workplace, without interrupting working life. Distance learning is an effective and successful way to continue one's education while working, from the comfort of home; the pace and schedule of learning are entirely in the learner's hands. The Government is trying to achieve its educational goals by promoting the Indira Gandhi National Open University, which is playing a major role in educating people from rural areas and working people. But it is also important that in today's


times most jobs arise in the various disciplines of management, and distance education is an effective means of earning degrees while working.

REVIEW OF LITERATURE Bonk (2001) charted the factors found in thirteen selected studies on distance education and grouped them into personal, external, technical, pedagogical, and institutional categories; on further reflection, the technical and pedagogical categories fit best within the institutional category. McKenzie (2000) noted that the majority of the studies discussed both motivating and deterring factors, while four studies discussed only motivational factors. Much of the literature supports the view that intrinsic motivators are stronger than extrinsic motivators when it comes to faculty participation in online teaching. Chizmar and Williams (2001) divided the faculty in their studies into those who had taught an online course and non-participants who had never taught via distance education technologies. Bonk (2001) and Rockwell et al. (1999) did not distinguish between faculty who had or had not participated in distance education; four studies included administrators as well as faculty as participants, and some added support staff to the mix. Thus the final categories were intrinsic or personal, extrinsic, and institutional. Within the institutional category, two subcategories were recognized: (1) technology and teaching and (2) technical and administrative support. The factors within these categories are outlined in the next section of this review.
Schifter (2000) and Betts (1998) espoused that intrinsic motivating factors include a personal motivation to use technology or the perception of distance teaching as an intellectual challenge. Some faculty stated that teaching via distance learning added to their overall job satisfaction and provided optimal working conditions, as they were able to "teach" at any time and from any place; faculty also reported a feeling of self-gratification from teaching online. Not all motivators can be considered intrinsic. Extrinsic factors have been categorized as institutional motivators, since the institution or the administration is perceived to have the power to alter distance education policies or procedures to meet the needs of the faculty. Bonk (2001) and Rockwell et al. (1999) found that external incentives in the form of tenure and promotion would increase job satisfaction, as would the support and recognition faculty receive from peers, another factor that motivates faculty participation.


Olcott and Wright (1995) found that peer pressure exists at the academic department level and that departmental support is essential for increasing faculty participation in distance education; peer pressure also comes in the form of competitors, that is, other faculty and programs within higher education institutions and other markets. Student pressure is exhibited not only in the way students communicate with one another (i.e. instant messaging and chat rooms) but also with the professor (via email). Students increasingly choose to conduct research via the Internet, escalating pressure on universities to provide online library access and requiring faculty to be more knowledgeable about copyright and online plagiarism issues. In Betts' (1998) study, administrators noted that pressure on faculty to participate in distance education came from two sources: the administration and prospective students. Meeting the community's needs may entail offering courses and programs via distance education for rural areas, business and industry, and working adults, generally a new population of learners. Verduin and Clark (1991) suggested that the structure or categories of distance education research include (a) philosophy and theory of distance education; (b) distance students, their milieu, conditions and study motivations; (c) subject-matter presentation; (d) communication and interaction between students and their supporting organization (tutors, counsellors, administrators, other students); (e) administration and organization; (f) economics; (g) systems (comparative distance education, typologies, evaluation, etc.); and (h) history of distance education. With this as background, the research agenda on distance learning expanded greatly over the past decade, with research emerging in a number of areas.
Some researchers have been interested in the learner: their attributes and perceptions, their interaction patterns, and how these contribute to the overall learning environment. An example is research on a learner-centered approach (Hanson et al., 1996).

Objectives of the Study
- To develop and standardize a measure for the quality of distance management education.
- To evaluate the quality of distance management education from the students' perspective.
- To open new vistas for research.

RESEARCH METHODOLOGY
The Study: The study was exploratory in nature, with the survey method being used to complete it.
Sample Design: The population comprised students of professional education institutes in the Gwalior region. Individual respondents were the sampling element. A purposive sampling technique was used to select the sample, and responses were taken from 100 respondents.
Tools for Data Collection: A self-designed questionnaire for the evaluation of distance management education was used. Data were collected on a Likert-type scale, where 1 stands for minimum agreement and 7 for maximum agreement.
Tools for Data Analysis: Item-to-total correlation was applied to check the internal consistency of the questionnaire. The measure was standardized through computation of reliability and validity, and factor analysis was applied to identify the underlying factors.
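The item-to-total correlation used here correlates each item's scores with the total scale score; items that fall below a cut-off are dropped. A minimal sketch of the idea is shown below; the data are synthetic and the cut-off value is illustrative, not the one used in the study.

```python
import numpy as np

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score summed across all items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Hypothetical responses: 100 respondents, 5 items on a 1-7 Likert scale.
rng = np.random.default_rng(1)
items = rng.integers(1, 8, size=(100, 5)).astype(float)

cutoff = 0.3                       # illustrative cut-off, not from the study
r = item_total_correlations(items)
retained = [j + 1 for j in range(items.shape[1]) if r[j] >= cutoff]
print([round(v, 3) for v in r])    # one correlation per item
print(retained)                    # item numbers kept for further analysis
```

Iterating this (recomputing correlations after each drop) gives the "iterative item-to-total" procedure mentioned in the results.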


RESULTS & DISCUSSIONS
Internal Consistency Test: Iterative item-to-total correlation was applied to the responses received from the students. Items with correlation coefficients above the cut-off value were retained for further analysis; 14 items remained in the quality of distance management education measure (Table 1).
Reliability Test: Reliability tests were applied to the measure using SPSS. The Cronbach alpha reliability coefficient was 0.833 and the split-half reliability coefficient was 0.860; thus the reliability of the measure was high.
Factor Analysis: Principal component factor analysis with Varimax rotation and Kaiser normalization was applied. The analysis converged the 14 items into 4 factors, namely Student Support Services; Complaints Handling and Information; Motivation and Technical Assistance; and Feedback. Details of the factors are given in Table 2.
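The Cronbach alpha statistic reported above can be computed directly from a respondents-by-items matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below uses made-up data, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 100 respondents, 14 items on a 1-7 scale.
# Items share a common component, so they are correlated and alpha is high.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 14))
scores = np.clip(base + noise, 1, 7).astype(float)
print(round(cronbach_alpha(scores), 3))
```

A value of 0.833, as found for the study's measure, is conventionally regarded as good internal consistency.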

CONCLUSION The study attempted to cast light on the evaluation of quality in distance management education. It is a useful contribution for professional institutions, helping them understand quality management and the student mindset when choosing distance education courses and institutes. This research also provides educational institutions with information on how to improve the quality of their distance courses.

References
Berge and Betts (1998), Electronic Distance Learning: Positives Outweigh Negatives, T.H.E. Journal, 18, 67-70.
Bonk, C. J. (2001), Attitudes of Higher Education Faculty toward Distance Education: A National Survey, The American Journal of Distance Education, 7(2), 19-33.
Carl, D. L. (1991), Electronic Distance Learning: Positives Outweigh Negatives, T.H.E. Journal, 18, 67-70.
Chizmar and Williams (2001), The Development of Distance Education Research, The American Journal of Distance Education, 1(3), 16-23.
Clark, T. (1993), Attitudes of Higher Education Faculty toward Distance Education: A National Survey, American Journal of Distance Education, 7(2), 19-33.
Collis, B., Veen, W. and DeVries, P. (1993), Preparing for an Interconnected Future: Policy Options for Telecommunications in Education, Educational Technology, 33(1), 17-24.
Hanson, D., Maushak, N. J., Schlosser, C. A., Anderson, M. L., Sorensen, C. and Simonson, M. (1996), Distance Education: Review of the Literature (2nd ed.), Research Institute for Studies in Education, Ames: Iowa State University.
International Conference on Learning with Technology (2000), Educators Examine Impact of Technology on Learning Reported, Lincoln Journal Star, June 12, Lincoln, NE.
McKillip, J. (1987), Need Analysis: Tools for the Human Services and Education, Newbury Park, CA: Sage Publications.
Miller, G. (1993), American Independent Study, News from the NUCEA Division of Independent Study.


Moore, M. (1994), Administrative Barriers to Adoption of Distance Education, The American Journal of Distance Education, 8(3), 1-4.
Baldwin, P. (1998), Preparing for an Interconnected Future: Policy Options for Telecommunications in Education, Educational Technology, 33(1), 17-24.
Olcott, D. Jr. and Wright, S. J. (1995), An Institutional Support Framework for Increasing Faculty Participation in Postsecondary Distance Education, The American Journal of Distance Education, 9(3), 5-17.
Rockwell, S. K., Schauer, J., Fritz, S. M. and Marx, D. B. (1993), Faculty Education, Assistance and Support Needed to Deliver Education via Distance, Online Journal of Distance Learning Administration.
Schifter, C. C. (2000), Information and Training Needs of Agricultural Faculty Related to Distance Education, Journal of Applied Communications, 81(1), 1-9.
Verduin, J. R. and Clark, T. A. (1991), Distance Education: The Foundations of Effective Practice, San Francisco: Jossey-Bass.


Annexure
Table 1: Showing Results of Internal Consistency

S.No   Computed correlation value   Consistency   Accepted/Dropped
1      0.468579                     Consistent    Accepted
2      0.456569                     Consistent    Accepted
3      0.56322                      Consistent    Accepted
4      0.468579                     Consistent    Accepted
5      0.390086                     Consistent    Accepted
6      0.368456                     Consistent    Accepted
7      0.769601                     Consistent    Accepted
8      0.405828                     Consistent    Accepted
9      0.46513                      Consistent    Accepted
10     0.794967                     Consistent    Accepted
11     0.727833                     Consistent    Accepted
12     0.447393                     Consistent    Accepted
13     0.590869                     Consistent    Accepted
14     0.610192                     Consistent    Accepted

Table 2: Showing Results of Factor Analysis

Factor name (Eigen value, % of variance)        Variable convergence                     Loading
1. Student Support Services (4.639, 33.136)     3. Instruction by instructor             0.814
                                                4. Discussion                            0.695
                                                7. Course materials                      0.665
                                                11. Assistance                           0.631
                                                13. Online library                       0.611
                                                12. Student support services             0.422
2. Complaints Handling and Information          5. Complaints                            0.830
   (1.782, 12.782)                              6. Well advised                          0.805
                                                9. Admission process                     0.631
3. Motivation and Technical Assistance          14. Availability of network resources    0.824
   (1.464, 10.459)                              2. Self motivation and commitment        0.750
                                                10. Technical assistance/support         0.633
4. Feedback (1.268, 9.055)                      8. Earlier web practices                 0.864
                                                1. Feedback                              0.613


68

Role of Information System and Improved Business Decisions
K. K. Pandey, Satish Bansal, Manisha Pandey

Information has always played a vital role in decision making. Information has been widely valued both for what it intrinsically is and for what it does for an organization. The success of an organization largely depends on the quality of the decisions its managers make. Each and every area of managerial decision making, be it planning, organizing, coordinating, directing or control, calls for a substantial amount of information processing. Information is an entity, qualitative or quantitative, and is a value addition over data; on the whole, raw and unprocessed data have to be screened and filtered to count as meaningful information. When decision making involves large amounts of information and a lot of processing, computer-based systems can make the process efficient and effective. This chapter deals with how to provide meaningful information to decision makers to make their decisions effective, and with the role of computer-based information systems in processing raw data and converting it into meaningful information.

INTRODUCTION An information system is a necessity for every organization, and computers have fundamentally changed information systems from an abstract concept into concrete systems that provide insight into organizations, open avenues to leverage business opportunities, and even give a competitive edge. With good information system support, management decision making becomes more effective. Several types of information systems support decision making: decision-support systems, executive information systems and expert systems. According to Norbert Wiener (1948), in recent years applications have been developed that combine several of these features and methods. Also, many decision-support modules are integrated into larger enterprise applications; for example, ERP (enterprise resource planning) systems support decision making in such areas as inventory replenishment. A decision is easy to make when one option will clearly bring about a better outcome than any other. A decision becomes more difficult when more than one alternative seems reasonable


and when the number of alternatives is great. In business, there can be dozens, hundreds, or even millions of different courses of action available to achieve a desired result; the problem is deciding on the best course of action.

DECISION MAKING PROCESS Herbert Simon, a long-time researcher of management and decision making, described decision making as a process of several phases:
- Intelligence: In the intelligence phase we collect facts, beliefs, and ideas. In business, the facts may be millions of pieces of data.
- Design: In the design phase we design the method by which we will consider the data. The methods are sequences of steps, formulas, models, and other tools that enable us to systematically reduce the alternatives to a manageable number.
- Choice: When we are left with a reduced number of alternatives, we make a choice; that is, we select the one alternative we find most promising. But the decision making process does not stop when a manager finally selects the best course of action: the decision still needs to be put into effect. In this view a fourth step, implementation, is required beyond Herbert Simon's three-phase process.
- Implementation: Implementation is the process of putting the decision into effect. It is the fourth phase because the problem is not solved until the decision is put into operation.

According to Laudon, Kenneth C., and Jane Price Laudon (1991), every small, medium, or large organization deals daily with many types of transactions related to major business functions like marketing, finance, stores, production and quality. The number of transactions depends upon the size of the organization, and transactions are recorded for future use. Information is a processed form of data and a strong base for the manager's decision making. Organizations give much importance to information useful in decision making, to the process used in forming this information, and to the training required to take sound decisions, but they neglect the most important point: a correct process alone is not sufficient to generate meaningful information; the input must be correct too. This can be well understood with the help of the following diagrams:

Figure 1: Input (incorrect) -> Process (correct) -> Output: incorrect
Figure 2: Input (correct) -> Process (incorrect) -> Output: incorrect


Figure 3: Input (correct) -> Process (correct) -> Output: correct

It is very clear from the third figure that both input and process must be correct to generate correct output. As per Scott Morton, Michael S. (1971), the use of computers in data processing has increased accuracy levels, as computer programs can be designed for data processing. For example, finding the average height of 100 students in a class is very simple, because the formula to calculate the average height can be defined in the program, and this helps control the chance of errors in the process, which was difficult manually. When adding the data by hand, if the person pressed the subtraction key on the last value, the accuracy of the information would suffer; this is impossible with computers, since the addition program is already defined and the computer will simply add all the entered values. Hence the use of computers helps control processing errors, but there is another area where control is required to make the generated information more meaningful and valuable for decision making. Consider an example: suppose a purchase manager has to take a decision about inventory for the next year. The information available is that 50,000 units of a particular product were required last year, and the coming year's requirement may be 10% more or less than the previous year's. If he takes the decision on the basis of this information alone, what goes wrong with his decision is clear from the table below.

Month       Units required
January     10,000
February    5,000
March       3,000
April       20,000
May         1,000
June        4,000
July        500
August      500
September   1,000
October     700
November    300
December    4,000
Total       50,000

After going through the previous records carefully, the manager found that in the month of April the data entered by the data entry operator was 20,000 while the correct figure was 2,000. Going through the requirements of all the months, it is clear that the actual requirement was 32,000 units of that product. There is a very big difference between the generated information and the correct information, and the extent to which this can affect the manager's decision


is very clear. If the manager acts on the basis of this incorrect information, the other problems that may arise are:
1. Increased inventory cost
2. Problems in storing the extra inventory
3. Greater wastage
4. Blocked money
The above is one kind of business problem; it is now essential to know the other types of problems.
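The distortion in the inventory example is easy to demonstrate: a single mis-keyed digit in April (20,000 instead of 2,000) inflates the annual total from 32,000 to 50,000 units. The sketch below reproduces the arithmetic from the table and adds a simple plausibility check, a hypothetical safeguard not described in the text, that would have flagged the April entry.

```python
# Monthly requirements as recorded, with April mis-keyed as 20,000.
recorded = {
    "Jan": 10_000, "Feb": 5_000, "Mar": 3_000, "Apr": 20_000,
    "May": 1_000, "Jun": 4_000, "Jul": 500, "Aug": 500,
    "Sep": 1_000, "Oct": 700, "Nov": 300, "Dec": 4_000,
}
corrected = dict(recorded, Apr=2_000)        # the operator meant 2,000

print(sum(recorded.values()))                # 50000 -> basis of the bad decision
print(sum(corrected.values()))               # 32000 -> actual requirement

# Illustrative outlier check: April is far above the average of the
# other eleven months, so it is worth double-checking before use.
others = [v for m, v in recorded.items() if m != "Apr"]
mean_others = sum(others) / len(others)      # about 2,727 units
print(recorded["Apr"] > 3 * mean_others)     # True -> flag for review
```

Such automated cross-checks on input data are exactly the kind of control the chapter argues is needed alongside a correct process.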

TYPES OF PROBLEMS Suppose a person wants to invest a sum of money for a period of 5 years at a yield of 10 percent. He may consult an investment specialist who, considering the total amount of money, the length of time the person is willing to be separated from his money, and the desired yield, suggests investing the savings in stocks. At the end of the period, the person is left with less money. He complains to the consultant, but the consultant says: "There are no guarantees. Here we are dealing with an unstructured area." In another case a person says, "I am a mathematician. Give me a problem, give me the parameters, and I will give you the number you are looking for. Guaranteed. Why can't you guarantee that your decision will yield what you expect?" The answer is that the confidence that can be placed in the solution to a problem depends on the nature of the data and the data analysis used to solve it. Depending on the amount of data and the availability of data analysis methods, the problems we face daily can be classified as structured, semistructured, and unstructured.

Structured Problems A fully structured problem is one whose optimal solution can be reached through a single set of steps. Since the one set of steps is known, and since the steps must be followed in a known sequence, solving a structured problem with the same data will always yield the same solution. Structured problems are often referred to as programmable problems because it is feasible to write a program to solve them.

Unstructured Problems An unstructured problem is one for which there is no algorithm to follow to reach an optimal solution-either because there is not enough information about the factors that may affect the solution or because there are so many potential factors that no algorithm can be formulated to guarantee a unique optimal solution. Unstructured problems are considered unprogrammed because no specific program can solve them perfectly. Despite this view, computer programs have been written to solve unstructured problems. Although the programs may not yield perfect results, and two different programs addressing the same problem may even yield different results, they do significantly minimize the time that would otherwise be necessary to solve these problems.

Role of Information System and Improved Business Decisions

615

Unstructuredness is closely related to uncertainty. We cannot be sure what the weather will be tomorrow; nobody can guarantee what an investment in a certain portfolio of stocks will yield by year's end; and two physicians may diagnose the same symptoms differently. These are all areas where unstructured problems predominate.

Semistructured Problems A semistructured problem is one that is neither fully structured nor totally unstructured. The examples of unstructured problems cited previously would be considered semistructured by experts in their fields, because experts have sufficient knowledge to narrow down the number of possible solutions, but not enough to guarantee 100 percent certainty of producing an optimal solution. The problem "How much will I earn after two years if I invest Rs. 5,00,000 in a fixed deposit that pays 8 percent per annum?" is structured: to find the solution we follow a simple algorithm that takes as parameters the Rs. 5,00,000, the two years, and the 8 percent interest rate, and the calculated income is guaranteed. However, the problem "If someone invests Rs. 5,00,000 in the stock of ABC Ltd. and sells the stock after two years, how much money will he make?" is semistructured. It cannot be considered structured because too many factors must be taken into consideration: the demand for the company's products, the entrance of competitors into its market, the market for its products in the country and overseas, and so on. So many of the factors affecting the stock price may change over the next two years that the problem is semistructured at best and totally unstructured at worst.
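The fixed deposit problem really is programmable in the sense described above: a single fixed algorithm always yields the same answer for the same inputs. A minimal sketch follows, using the amounts from the text; annual compounding is assumed, since the text does not state the compounding frequency.

```python
def fd_maturity(principal: float, rate: float, years: int) -> float:
    """Maturity value of a fixed deposit, assuming annual compounding."""
    return principal * (1 + rate) ** years

# Rs. 5,00,000 at 8 percent per annum for two years.
amount = fd_maturity(500_000, 0.08, 2)
print(round(amount))             # 583200 -> maturity value
print(round(amount - 500_000))   # 83200  -> guaranteed income
```

The stock market version of the question admits no such function, which is exactly what makes it semistructured.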

SIMPLE PROCESS FOR PROBLEM-SOLVING AND DECISION-MAKING According to Sprague, R. H. and Hugh J. Watson (1996), problem-solving and decision-making are important skills for business and life. Problem-solving often involves decision-making, and decision-making is especially important for management and leadership. There are processes and techniques to improve decision-making and the quality of decisions. Problem-solving and decision-making are closely linked, and each requires creativity in identifying and developing options, for which the brainstorming technique is particularly useful. Good decision-making requires a mixture of skills: creative development and identification of options, clarity of judgment, firmness of decision, and effective implementation. Decision making, in simple terms, is regarded as an individual human activity focused on particular matters and largely independent of other kinds of choice. In more formal terms, decision making can be regarded as the outcome of mental processes leading to the selection of a course of action among several alternatives. Every decision making process produces a final choice; the output can be an action or an opinion. It is very difficult to arrive at a judgment on a particular decision when we lack knowledge of the subject matter.

USE OF MODELS Businesses collect data internally (from within the organization) and externally (from outside sources). They use models to analyze data. A model is a representation of reality. In architecture, a table-top representation of a building or a city block is a model of the full-sized structure; a map is a small-scale representation, a model, of a particular geographic area.

MODELS IN BUSINESS In business, mathematical equations representing the relationships among variables can serve as models of how the business will respond to changes, such as: what will happen to profits when sales and expenses both go up or down? Managers either use universal models, such as certain statistical models, or design their own models to analyze data. They then select what they perceive as the best course of action. Sometimes an individual manager makes the decision; at other times, the decision process is carried out by a group of managers. There are computer-based aids that can support almost any style of decision-making. Decision making skills are a must for any manager, and making the right decisions is essential. The process of decision making is of the utmost importance for effective management. As a manager, your decision making must be informed by expert knowledge and experience. The process of decision making is influenced by several factors and should be investigated from several perspectives:
1. Learning
2. Normative
3. Expected utility
4. Psychological
5. Moral
6. Behavioral
7. Leadership styles and organizational
8. Deterministic
9. Complexity
10. Neuroscience
11. Philosophical
12. Economic
13. Engineering

According to Robert G. Murdick, Joel E. Ross and James R. Claggett (1997), decision-making increasingly happens at all levels of a business. The board of directors may make the grand strategic decisions about investment and the direction of future growth, and managers may make the more tactical decisions about how their own departments may contribute most effectively to the overall business objectives. But quite ordinary employees are increasingly expected to make decisions about the conduct of their own tasks, responses to customers and improvements to business practice. This needs careful attention. Although strategic decisions are taken at the top levels, information comes from various sources; if the process selected at a lower level to convert data into information is even


slightly incorrect, the value of the information will be very low or sometimes nil. Likewise, if the data used for generating information are unreliable and unrealistic, the value of the information, which is the basis of decision making, will again be very low or nil.

TYPES OF BUSINESS DECISIONS
Programmed Decisions: These are standard decisions that can be made using the same standardized process every time a decision is required. As such, they can be written down as a series of fixed steps which anyone can follow; they could even be written as a computer program.
Non-Programmed Decisions: These are non-standard and non-routine; each decision is not quite the same as any previous decision.
Strategic Decisions: These affect the long-term direction of the business, e.g. whether to take over Company A or Company B.
Tactical Decisions: These are medium-term decisions about how to implement strategy, e.g. what kind of marketing to have, or how many extra staff to recruit.
Operational Decisions: These are short-term decisions (also called administrative decisions) about how to implement the tactics, e.g. which firm to use to make deliveries.
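A programmed decision of the kind described, a routine choice reducible to fixed steps, can literally be coded. The sketch below is a hypothetical reorder rule invented for illustration; the function name and the reorder point are not from the text.

```python
def should_reorder(stock_on_hand: int, reorder_point: int = 1_000) -> bool:
    """Programmed decision: reorder whenever stock falls to the reorder point."""
    return stock_on_hand <= reorder_point

print(should_reorder(800))    # True  -> place an order
print(should_reorder(5_000))  # False -> no action needed
```

Non-programmed, strategic decisions, by contrast, cannot be reduced to a fixed rule like this, which is why they remain with human managers.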

Figure 4: Decision Making Model
- Strategic Planning Level: Board of Directors, Managing Directors, CEO
- Management Control Level: Departmental and Divisional Heads, Managers
- Operational Control Level: Supervisors


CONSTRAINTS ON DECISION-MAKING
1. Internal Constraints: These are constraints that come from within the business itself.
   a) Availability of funds: certain decisions are rejected because they cost too much.
   b) Business policy: it is not always practical to re-write business policy to accommodate one decision.
   c) People's abilities and feelings: a decision cannot be taken if it assumes higher skills than employees actually have, or if the decision is so unpopular that no one will work properly on it.
2. External Constraints: These come from the business environment outside the business.
   a) Customers' choice
   b) Competitors' behavior, and their likely response to decisions the business makes
   c) Lack of technology
   d) Economic climate

QUALITY OF DECISION-MAKING Some managers and businesses make better decisions than others. Good decision-making comes from:
- Training of managers in decision-making skills
- Good information in the first place
- Management skills in analyzing information and handling its shortcomings
- Experience and natural ability in decision-making
- Risk and attitudes to risk
- Human factors

Before explaining the role of computer-based information systems in decision making, it is important to know the business areas where decision making is required. The major areas are:
• Manufacturing/operations
• Marketing
• Finance/administration
• Project planning/control
• Support systems
• Engineering/design

Role of Information System and Improved Business Decisions


Figure 5: A good decision-making process — objective identification; data collection from internal and external sources; conversion of data into meaningful information through a suitable process; selecting a course of action; implementing the selected action; evaluating results and studying the final results

These systems are not separate and distinct; they connect, interact, and otherwise tie the subsystems of the organization together through the medium of information.

Decisions Taken at Different Levels
Decisions are taken at all levels of the organization, but the nature of the decisions differs from level to level. Starting at the bottom:
• Operational decisions are day-to-day decisions needed in the operation of the organization. These decisions affect the organization for a short period of time, such as several days or weeks, and are made by lower-level managers. For example, in a departmental store, an operational decision is whether to order more of a branded perfume today; this decision affects the business for the next few weeks.
• The next level is Tactical decisions, which involve implementing the policies of the organization. They affect the organization for a longer period than operational decisions, usually several months or a few years, and are made by middle-level managers. For example, deciding whether to sell the branded perfume next summer is a tactical decision; it has an effect on the organization for a longer period of time.
• At the highest level are Strategic decisions, which are made by top-level managers. These decisions involve setting organizational policies, goals, and long-term plans, and they affect the organization for many years. For example, a strategic decision for a departmental store is whether the store should stop selling branded perfume and start selling some other product. This decision has a long-term effect on the business.

ROLE OF INFORMATION SYSTEM IN DECISION MAKING
An information system is "a set of procedures that collects (or retrieves), processes, stores, and disseminates information to support decision making and control." In most cases, information systems are formal, computer-based systems that play an integral role in organizations. Although information systems are computer based, it is important to note that not just any computer or software program is an information system. "Electronic computers and related software programs are the technical foundation, the tools and materials, of modern information systems. Understanding information systems, however, requires one to understand the problems they are designed to solve, the architectural and design solutions, and the organizational processes that lead to these solutions." Though it is sometimes applied to all types of information systems used in businesses, the term "management information systems," or MIS, actually describes specific systems that "provide managers with reports and, in some cases, on-line access to the organization's current performance and historical records."

MANAGEMENT INFORMATION SYSTEM
Effy Oz (2002) stated that "MIS primarily serve the functions of planning, controlling, and decision making at the management level." MIS are one of a number of different types of information systems that can serve the needs of different levels in an organization. For example, information systems might be developed to support upper management in planning the company's strategic direction, or to help manufacturing in controlling a plant's operations. Some of the other types of information systems include transaction processing systems, which simply record the routine transactions needed to conduct business, like payroll, shipping, or sales orders; and office automation systems, which are intended to increase the productivity of office workers and include such systems as word processing, electronic mail, and digital filing. Ideally, the various types of information systems in an organization are interconnected to allow for information sharing.

The development of effective information systems holds a number of challenges for small businesses. Despite, or perhaps because of, the rapid development of computer technology, there is nothing easy or mechanical about building workable information systems; building, operating, and maintaining them is challenging for a number of reasons. For example, some information cannot be captured and put into a system. Computers often cannot be programmed to take into account competitor responses to marketing tactics or changes in economic conditions, among other things. In addition, the


value of information erodes over time, and rapid changes in technology can make systems obsolete very quickly. Finally, many companies find systems development to be problematic because the services of skilled programmers are at a premium.

Despite the challenges inherent in systems development, however, MIS also offer businesses a number of advantages. "Today, leading companies and organizations are using information technology as a competitive tool to develop new products and services, forge new relationships with suppliers, edge out competitors, and radically change their internal operations and organizations." For example, using MIS strategically can help a company become a market innovator. By providing a unique product or service to meet the needs of customers, a company can raise the cost of market entry for potential competitors and thus gain a competitive advantage. Another strategic use of MIS involves forging electronic linkages to customers and suppliers; this can help companies lock in business and increase switching costs. Finally, it is possible to use MIS to change the overall basis of competition in an industry.

O'Brien (1999) stated that a variety of tools exist for analyzing a company's information needs and designing systems to support them. The basic process of systems development involves defining the project, creating a model of the current system, deriving a model for the new system, measuring the costs and benefits of all alternatives, selecting the best option, designing the new system, completing the specific programming functions, installing and testing the new system, and completing a post-implementation audit.

According to Banergee (2004), information systems designers, whether internal to the company or part of an outside firm, are generally responsible for assuring the technical quality of the new system and the ease of the user interface. They also oversee the process of system design and implementation, assess the impact of the new system on the organization, and develop ways to protect the system from abuse after it is installed. But it is the responsibility of small business owners and managers to plan what systems to implement and to ensure that the underlying data are accurate and useful. "The organization must develop a technique for ensuring that the most important systems are attended to first, that unnecessary systems are not built, and that end users have a full and meaningful role in determining which new systems will be built and how."

Consider an example: suppose a bank is in the planning phase for the next year, and top-level management has to take decisions about the requirement of cash for the coming year in a particular month, week, or day. The solution to this type of problem lies inside the organization, but only if the data required to produce meaningful information is available. If sufficient data is available, the second requirement is a correct process, and the idea of the process comes from the system; to build the system, we should know the boundaries of every sub-system inside it. Once the process, the data, and the systems are in place, it becomes easy to develop several alternatives, compare them, and select the best one. We can understand this with the help of the figure below.

Interdependence
Businesses are highly interdependent on each other, their suppliers, and their customers. Decisions are not taken in isolation: the effects of any decision will depend critically on the reactions of other groups in the market, and these have to be taken into account, as far as possible, before decisions are made.

Computers have become user-friendly. They can communicate over any distance and share data, information, and the physical resources of other computers; a computer can be used to know the current status of any aspect of the business because of its on-line, real-time processing capability. With the advancement of computer technology, it is now possible to recognize information as a valuable resource, like money and capacity.

Figure 6: Control System Model for Data Processing — input data is processed by a program into a computed result and a summary of results; the summary is compared with a desirable or known summary of results; decision control feedback then validates the data and changes the program if necessary.

The computer does not take decisions; managers do. But it helps managers to have quick and reliable quantitative information about the business as it is, and the business as it might be under different sets of circumstances. There are also aids to decision-making: various techniques which help to make information clearer and better analyzed, and to add numerical and objective precision to decision-making (where appropriate) in order to reduce the amount of subjectivity.

According to Sadagopan (2003), the quality of information can be measured on four dimensions, viz., utility, satisfaction, error, and bias. The utility of information is subjective to the individual manager, at least in terms of form, time, and access. Since there are many users of the same information in an organization, this subjectivity varies; therefore, one common key for measuring quality is the satisfaction of the decision maker. The degree of satisfaction determines the quality of the information: if the organization shows a high degree of satisfaction, one can safely say that the information systems are designed properly to meet the information needs of managers at all levels.
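Measuring quality by decision-maker satisfaction could be operationalized very simply, for instance by averaging satisfaction ratings collected from managers. The rating scale and the acceptance threshold below are assumptions for illustration, not part of Sadagopan's framework.

```python
# A minimal sketch of the satisfaction-based quality measure.
# The 1-5 scale and the 4.0 threshold are illustrative assumptions.

def information_quality(ratings, threshold=4.0):
    """ratings: manager satisfaction scores on an assumed 1-5 scale.
    Returns the average score and whether it clears the threshold."""
    average = sum(ratings) / len(ratings)
    well_designed = average >= threshold
    return average, well_designed

avg, ok = information_quality([5, 4, 4, 3, 5])
print(avg, ok)  # 4.2 True: the systems appear to meet managers' needs
```

A single average of course hides the other three dimensions (utility, error, and bias); it only captures the "common key" the text singles out.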


BENEFITS OF COMPUTER-BASED INFORMATION SYSTEMS
Robert C. Nickerson (2002) stated that computers are fast and accurate, and that they process large volumes of data. Although these characteristics make computer information systems very useful, the real benefits of these systems are much more involved.

Better Information: Computer-based information systems store and process data, but what they produce is information, which is the basis for good decision making. When a business person makes a decision, he or she selects one of several alternative courses of action, and is almost always uncertain about what exactly will happen with each alternative. Information helps reduce that uncertainty, so with better information a business person is more certain about the outcome of the decision.

Improved Service: Computer-based information systems operate at any time of the day or night and process data faster than humans can. Organizations and businesses therefore serve their customers and clients more conveniently and efficiently with computer information systems than without them.

Increased Productivity: Productivity has to do with how much people can accomplish in a given period. With computer-based information systems, people can do more work in a period of time than they could without such systems.

Competitive Advantage: Computer-based information systems can help a business gain a competitive advantage. For example, information systems can help reduce the cost of production so that a business can offer the least expensive product.

LIMITATIONS OF MIS
According to Muneesh Kumar (1998), the common weakness of MIS has been the non-availability of decision-oriented reports. For example, a manager would need a report that offers complete information regarding cost structures for a pricing decision, not just estimates of variable and fixed costs. Because of the predefined periodicity of MIS reports, it is possible that information reaches the manager quite late, and sometimes too late. Though MIS reports serve the purpose of keeping managers aware of happenings in the business, they are not tuned for analyzing a situation in terms of identifying reasons for undesired outcomes or working out and evaluating alternative courses of action. Another common complaint of managers is that MIS are generally quite slow in responding to the dynamics of the market situation, and thus help more in conducting a post-mortem than in warning at the time when things are going wrong.

CONCLUSIONS
We must accept the relationship between the computer and its use for decision-making applications. We have been concerned with how the computer "makes" programmed decisions, and how it can provide decision-assisting information for complex decisions that do not lend themselves to automation. Clearly, no computer automation can take care of unclear situations. It is for managers, using their decision imperatives, to apply their value judgments, intuitions, and even impulses to decision-making. Information, however, remains the "window" to the organization and the "door" to insights into its functioning, and managers do need information for their decision-making. Conceptually, a


management information system can exist without computers, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in management information systems, but the extent to which information use should be computerized. A computer-based management information system should not be attempted without firm data discipline, and it should be geared entirely to meeting the decision maker's needs. Despite the rapid development of computer technology, we still do not have an exact science of non-programmed decision-making.

Needs of Upgrading Clerical Systems to Improve Decision Making
Although in a strict sense the clerical and supervisory types of application are not oriented to decision making, the clerical area offers applications with immediate payoff in cost reduction as well as improved accuracy of information. The major users of computer-based information systems are clerical personnel, first-level managers, staff specialists, and management. Clerical personnel are responsible for handling transactions, processing input data, and data control. The job of a clerical person may be altered significantly when transaction processing is changed from manual to computer-based, especially if the system is on-line.

The problem with decisions taken by top-level management is that they are based on the information supplied by MIS, and that information is the processed form of data. Careful attention is therefore required at the first stage, that is, capturing data from physical documents and inputting the captured data into the system, which is the job of the clerical person. Because of discrepancies between what is needed and what the data provides, decision makers must expend time and effort to acquire the information and decision support they need but cannot get from their systems. They supplement the computer-based system with their own private information systems. The result is additional cost, in terms of time lost by decision makers in operating these private systems, together with reduced benefits from the computer-based system itself.

Good decision making is an essential skill for career success generally, and for effective leadership particularly. If we can learn to make timely and well-considered decisions, then we can often lead our team to spectacular and well-deserved success.
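The careful attention at the data-capture stage that this section calls for usually takes the form of validation checks applied as each record is keyed in. The following is a minimal sketch with an invented transaction format; the field names and rules are assumptions for illustration only.

```python
# Validation at the data-capture stage, where clerical errors enter the MIS.
# The record format and the rules are illustrative assumptions.

def validate_transaction(record):
    """Return a list of errors found in one captured transaction record."""
    errors = []
    if not record.get("account_id"):
        errors.append("missing account_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        errors.append("amount must be a positive number")
    return errors

good = {"account_id": "A-101", "amount": 250.0}
bad = {"account_id": "", "amount": -5}
print(validate_transaction(good))  # [] — record accepted
print(validate_transaction(bad))   # both rules fail, record rejected
```

Catching these errors at capture time is far cheaper than discovering them later in a management report, which is exactly why the clerical stage matters to decision quality.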

References
Laudon, Kenneth C. and Jane Price Laudon (1991), Management Information Systems: A Contemporary Perspective, 2nd ed., New York: Macmillan.
Scott Morton, Michael S. (1971), Management Decision Systems: Computer-based Support for Decision Making, Boston, MA: Division of Research, Graduate School of Business Administration, Harvard University.
Sprague, R. H. and Hugh J. Watson (eds.) (1996), Decision Support for Management, Englewood Cliffs, NJ: Prentice-Hall.
Simon, Herbert Alexander (1977), The New Science of Decision Making, Prentice-Hall: New Delhi.
Norbert Wiener (1948), Cybernetics, New York: John Wiley, p. 187.
Utpal K. Banergee (2004), Practical Management Information System: Indian Experiences and Case Studies, Macmillan: New Delhi.
Robert G. Murdick, Joel E. Ross and James R. Claggett (1997), Information Systems for Modern Management, 3rd ed., Prentice-Hall: New Delhi.
Jerome Kanter (1992), Managing with Information, 4th ed., Prentice-Hall: New Delhi.
Sadagopan (2003), Management Information Systems, 1st ed., Prentice-Hall: New Delhi.
Muneesh Kumar (1998), Business Information Systems, Vikas Publishing House: New Delhi.
Robert C. Nickerson (2002), Business and Information Systems, 2nd ed., Prentice-Hall: New Delhi.
Effy Oz (2002), Management Information Systems, 3rd ed., Thomson Course Technology: New York.
Steven Alter (1999), Information Systems: A Management Perspective, 3rd ed., Pearson Education Asia Pvt. Ltd.
James A. O'Brien (1999), Management Information Systems: Managing Information Technology in the Internetworked Enterprise, 4th ed., Galgotia Publications: New Delhi.