
African Journal of Information and Communication Technology, Vol. 2, No. 4, December 2006

HCI as an Engineering Discipline: to be or not to be!?

G. W. Matthias Rauterberg
Department of Industrial Design, Eindhoven University of Technology, The Netherlands
[email protected]

Abstract— One of the major challenges in the emerging interdisciplinary field of human-computer interaction (HCI) is the specification of a research line that can enable the development of validated design knowledge with a predictive power for the design of interactive systems. Based on the three different elements in the design of interactive systems: (1) human being(s), (2) technical artefact(s), and (3) context of use, different academic disciplines contribute with different research paradigms to this new field: social sciences with a strong empirical and experimental approach, industrial and interaction design with a strong emphasis on artistic design, and engineering disciplines with a strong technical and formal approach. This programmatic paper presents, discusses and recommends a possible way to integrate the strengths of different research and design paradigms based on triangulation, and we argue for HCI as an engineering discipline. Index Terms— human computer interaction, design paradigm, research agenda, triangulation.

I. INTRODUCTION

The Human-Computer Interaction (HCI) community is diverse. Academics and practitioners from science, engineering, design and art are contributing to its rapid development, but communication and cooperation between the different disciplines can be challenging at times. The Association for Computing Machinery (ACM) Computer-Human Interaction (CHI) conference, organized through the ACM Special Interest Group on Computer-Human Interaction (SIGCHI), is the largest and arguably one of the most important conferences in the field. At the 2005 SIGCHI membership meeting, the organization of the next conference was discussed, which ignited a shouting match between academics and practitioners [1]. This outbreak of emotions illustrates the tension between the different groups, and it can be explained by taking a

closer look at the paradigms under which they operate. Already about ten years ago Rogers, Bannon and Button [2] opened and discussed three major questions: (1) What is the problem in [or with] HCI? (2) What does a theoretical approach have to offer HCI? (3) How does a theory relate to practice? In this programmatic paper we will address, discuss and provide possible answers to these three questions: in the past, at present, and hopefully for the future as well (see also [3]). Before we can directly address possible answers (see chapter 5), we have to introduce definitions, terms and concepts about research paradigms (see also [4]). A. Some Definitions We will use the established term HCI in a wide scope. With upcoming new technology (e.g., ambient and aware systems, mobile and entertainment computing, etc.) the traditional term HCI seems to be quite limited. Furnas [5] addresses this issue by broadening the scope of HCI to ++HCI. We mainly agree with Furnas’ scope of ++HCI. Therefore we will use a very broad definition of HCI throughout this paper (including adaptive and non-adaptive systems, professional, home consumer and entertainment products, etc.). HCI investigates and develops interactive products, systems or services that contain at least some computational power. Science as an activity is the concerted effort to understand, or to better understand, the natural world and how the natural world operates, with observable evidence as the basis of that understanding. It is done by investigating observable phenomena, and/or through experimenting that tries to simulate observable processes under controlled conditions. The logic of modern science requires that observations or facts guarantee the validity of generalizations or


theories [6]. Science is not art, because art is largely an individual's effort to communicate his or her ideas or feelings in an implicit manner via artefacts. On the contrary, science is a group effort to explicitly describe and understand reality. Science is not technology either. Although science can lead to technology, and it uses technology, it is knowledge by nature [7]. Following van Aken [8] we shall use the "term 'scientific' like the German 'wissenschaftlich' or the Dutch 'wetenschappelijk', meaning 'according to sound academic standards.' Thus, its meaning is not confined to the natural sciences" (p. 242). We will use the term science to describe research in the positivistic paradigm only, and we will use the term academia to describe the whole. According to Merriam Webster's online dictionary1, 'engineering' is defined as: (1) the activities or function of an engineer; (2a) the application of science and mathematics by which the properties of matter and the sources of energy in nature are made useful to people; (2b) the design and manufacture of complex products (e.g., software engineering); and (3) calculated manipulation or direction (as of behaviour). We will mainly refer to definition parts (2a) and (2b) further on. Academic research consists of 'science', 'engineering', and other activities according to sound academic standards. B. How a Discipline Develops All over the world, research communities are contributing to the growing area of HCI, based on the context in which each community is established (e.g., art, industrial design, computing science, software engineering, electrical and mechanical engineering, psychology, sociology, ethnology, etc.). The survival of these communities depends on their abilities to adapt to their environment, and on the extent to which the whole interdisciplinary research arena can be established as such. Nowadays, several stakeholders are requiring more interdisciplinary research than in the past [9]. HCI is by nature interdisciplinary, and started some 30 to 50 years ago [10], [11]. What are the main theories, artefacts, and

1 See http://www.m-w.com/dictionary/engineering

methods developed until now? What are the remaining challenges for HCI? In this paper we try to offer a broad and ambitious view to continue a discussion about the possible academic future of HCI. We will begin by describing some aspects of how academic disciplines can evolve, which the relevant phases are, and what the possible requirements are that have to be fulfilled. In the next step we discuss the relevant paradigms and discuss how the different paradigms could be merged into a necessary new one. In the final part of this paper we present a general concept in which interdisciplinary research for HCI may benefit from a structured approach via triangulation. Böhme, Van den Daele, Hohlfeld, Krohn and Schäfer [12] differentiate three phases of development in academic disciplines: (1) Explorative phase: “Methods are predominantly inductive in character, and research is determined by strategies aimed at classification... The dynamics of the field are characterized more by discovery than explanation. The fine structure of the objects of study remains largely unknown, and is handled in a manner closely paralleling cybernetics’ famous ‘black box’. The scientist knows the relevant input and outputs − but what goes on between remains a mystery”. (2) Paradigmatic phase: “The onset of the paradigmatic phase is marked by the emergence of a theoretical approach which is able to organize the field. The introduction and elaboration of this approach represents a theoretical development with a definitive end. ... The theoretical dynamic of the paradigmatic phase is evidently one which can come to a conclusion − that is, can lead to mature theories which contain a fundamental, and in certain respects a conclusive, understanding of the discipline’s research object”. (3) Post-paradigmatic phase: “Where the organizing theories of scientific disciplines are clearly formulated and comprehensive, the possibilities of revolutionary changes or spectacular generalizations of their basic principles are commensurably reduced. Instead, the dynamics of theoretical development will be determined by the application of paradigmatic theories for the explanation of complex systems which can be subsumed within them” (pp. 6-9). Although Masterman


[13] could identify 21 different meanings of Kuhn’s term ‘paradigm’, we will still use this fuzzy term to refer to the epistemological basis of a research community. C. HCI: Looking Back It seems obvious that the present state of affairs for the interdisciplinary field of HCI is in the explorative phase ([14], p. 45), though it may be able to move on to the paradigmatic phase in the future. This statement does not necessarily exclude the possibility that different research communities contributing to HCI are already in a paradigmatic, or even in a postparadigmatic phase. According to Grudin [15], the origin of HCI can be located between human factors and ergonomics on the one hand [16], and software engineering on the other [17], without being merged with one of these two but cannibalising both. According to Hartson[18] HCI is “cross-disciplinary in its conduct and multidisciplinary in its roots, drawing on - synthesizing and adapting from - several other fields, including human factors (e.g., the roots for task analysis and designing for human error in HCI), ergonomics (e.g., the roots for design of devices, workstations, and work environments), cognitive psychology (e.g., the roots for user modelling), behavioural psychology and psychometrics (e.g., the roots of user performance metrics), systems engineering (e.g., the roots for much pre-design analysis), and computer science (e.g., the roots for graphical interfaces, software tools, and issues of software architecture)” (p. 103). What were the main research topics of HCI in the past? Hunt [19] analysed 1374 papers published between 1990 and 1999 in leading international journals (e.g., ACM Transactions on Computer-Human Interaction, Behaviour and Information Technology, Human Computer Interaction, Interacting with Computers, International Journal of Human Computer Interaction, International Journal of Human Computer Studies, etc.). He categorized all papers as follows: knowledge-based systems and theory (18.6%), design theory and software engineering (13.7%), language interfaces, i.e., text, speech, hypertext, hypermedia (11.3%), computer mediated communication (8.1%), social,

cultural and health implications of computers (7.3%), system testing and evaluation (6.8%), menu, icons, and graphics (5.7%); all other 16 categories accounted for less than 5%. One interesting result should be mentioned: only 0.6% of all papers fall in the category ‘HCI research issues’ [19]. Clemmensen [20] analysed 17 years of research published in the journal ‘Human Computer Interaction’. One of his major and important results is that the number of published papers based on ‘hard science’ (according to Newell and Card [21], [22]) has been decreasing since 1994. “This trend cannot be explained by a decreasing number of all empirical studies, …, where the number of theoretical studies for the first time after the first two years becomes higher than the number of empirical studies. These trends support Hartson’s assumptions that theoretical studies play a significant role in the literature [18]. However, the decreasing tendency in the use of laboratory experiment indicates decreasing support to Hartson’s claim that much of HCI theory comes from cognitive psychology” ([20], p. 272). Since about 1990, several stakeholders have started to become concerned about the ongoing research, and some important discussions have taken place: CHI panel 1991 on ‘HCI theory on trial’ ([24]); INTERCHI workshop 1993 on ‘rethinking theoretical frameworks for HCI’ [2]; CHI workshop 1996 on ‘educating HCI practitioners: evaluating what industry needs and what academia delivers’ [23]; CHI workshop 2000 on ‘national and international frameworks for collaboration between HCI research and practice’ [26]; CHI panel 2002 on ‘CHI@20: fighting our way from marginality to power’ [27]. And these discussions continue today [1]. From the CHI 2002 panel two statements are worth quoting: one from Don Norman (in [27]) “We do not contribute anything of substance: we are critics, able to say what is wrong, unable to move a product line forward. … The Design profession flourishes because they do things, they create” (p. 689); and the other from Stuart Card “The second limitation [of HCI in the past, added by author] was insufficient foundations … The Chapanis National Research Council report found most non-experimental human factors methods were not adequately


validated …" (p. 690). Several papers reflect on the past performance of HCI: 'a preliminary analysis of the products of HCI research' [28]; 'toward an HCI research and practice agenda based on human needs and social responsibility' [29], 'HCI-whence and whither?' [11], 'HCI in the next millennium: supporting the world mind' [30], 'a reference task agenda for HCI' [31], and more recently 'crossing the divide' [15]. Despite all the concerns discussed, Grudin concludes: "I am optimistic about the future of scholarship and scientific communication" ([15], p. 23). On what foundation is this optimism based? D. HCI: Looking Forward It is not possible to design a new paradigm; it has to emerge from the different activities aimed at setting up a research community. But it is still possible to look at the main boundaries that have to be taken into account to maximize the success of this kind of endeavour. The following main activities can be identified: (1) institutionalisation via research centres and groups in academia and industry; (2) networking via major international conferences (e.g., ACM CHI, IFIP INTERACT, HCI International, etc.); (3) establishing peer-reviewed journals; (4) proper education and teaching of enhanced models, solutions, and tools [28] [32] [33]; and, last but not least, (5) identifying a standard or process for determining the quality of research ([15], p. 5). Activities (1) to (3) have been done well so far, but (4) and especially (5) seem to be underdeveloped. Hence, we will focus on these two activities further on. We do not argue that in the past the HCI research community performed suboptimally; this is obviously not the case. In historical terms we simply have to move on and work hard to mature to the level our customers (i.e., the practitioners) expect of us, and thus to improve the quality of our main deliverable: HCI design knowledge. The main question still is: how is it possible to improve the maturity of our discipline? What are the minimal requirements for an HCI research agenda that have to be satisfied to enable successful research? To make an answer possible, we first have to address the following ques-

tions: what is a paradigm, what are the relevant paradigms for our scope of research, and if there are differences, how can we combine them fruitfully? II. DESIGN PARADIGMS All over the world, several research communities, i.e., human-computer interaction [34], human factors [35], software engineering [17], and management information systems [36], are struggling with their foundations, even if they are not fully aware of this. Following Kuhn's model [37] of scientific development, it can be proposed that the inter-, cross- and multidisciplinary research arena of HCI may be considered an arena of several distinct communities that coalesce around associated paradigms. Paradigm is defined in the Kuhnian sense of a disciplinary matrix that is composed of (a) shared beliefs, (b) values, (c) models, and (d) demonstrative examples that guide a 'community' of theorists and practitioners [37] [38]. Dorst [39] presents an empirical comparison of two approaches to design methodology: reflective practice and rational problem solving. For this purpose, he introduced and discussed the two most influential paradigms related to these two approaches: (a) phenomenology for 'design' and 'engineering' research (reflective practice) and (b) positivism for scientific normative research (rational problem solving). Phenomenology ascertains and studies the kinds of elements universally present in the phenomenon. The phenomenon is whatever is present at any time to the mind in any way. "The business of phenomenology is to draw up a catalogue of categories and prove its sufficiency and freedom from redundancies, to make out the characteristics of each category, and to show the relations of each to the others" (Harvard Lectures on Pragmatism, CP 5.432, 1903)2. Research according to the positivistic paradigm is based on concepts of reality that invoke scientists to look for what is, and not to speculate on what might be, while searching for true

2 CP x.xx (volume.paragraph) = (Collected Papers of Charles Sanders Peirce, 8 volumes, vols. 1-6, eds. Charles Hartshorne and Paul Weiss. Cambridge, Mass.: Harvard University Press, 1931-1935)


meaning, and that the generated concepts really exist backed by empirical data. What can we say about design and engineering activities? To which paradigm do these activities belong? One position is clearly expressed by Bayazit [40]: “an artist’s practicing activities when creating a work of art or craftwork cannot be considered research” (p. 16). Dorst [39] characterizes engineering as design activities as ‘thrown’ into a design ‘situation’ (‘thrown-ness’ in German ‘Geworfenheit’, see [41]). Winograd and Flores [42] illustrate this kind of ‘thrown-ness’ as follows: “When chairing a meeting, you are in a situation that (I) you cannot avoid acting (doing nothing is also an action); (II) you cannot step back and reflect on your actions; (III) the effects of actions cannot be predicted; (IV) you do not have a stable representation of the situation; (V) every representation you have of the ‘situation’ is an interpretation; (VI) you cannot handle facts neutrally; you are creating the situation you are in”. The following two main aspects characterize this kind of situation: (1) no opportunity for ‘reflection’ (see (I), (II), and (V)), and (2) no stable and [maybe] predictable reality (see (III), (IV), and (VI)). A design situation based on ‘thrownness’, is a typical context characterized by the latter two main aspects. The designer creates and synthesizes the situation while he/she is acting in it. To focus on the constructivistic and synthetic aspects of this paradigm, we will replace the term ‘phenomenology’ by the term ‘constructivistic paradigm’ from now on. According to the positivistic paradigm most of the dominant activities in natural and formal sciences can be characterized as a rational problem-solving approach. This main approach can be described as ... “the search for a solution through a vast maze of possibilities (within the problem space)... Successful problem solving involves searching the maze selectively and reducing it to manageable solutions” [43]. In this paradigm, all knowledge should be described, represented and processed in an objective manner: independent of an undisclosed individual and personal knowledge base. The personal knowledge base (e.g., ‘craft skill’) is exclusively accessible to the individual him/herself, even sometimes without the opportunity for

conscious reflection about the content (see e.g. the ‘knowledge engineering bottleneck’; [44]). In natural sciences most formal descriptions are validated − sooner or later − via empirical observations, experiments or simulation studies. Models and theories generated under the positivistic paradigm are strong in abstraction and therefore prediction. This predictive power is based on abstraction of all details that are perceived as not relevant in reality at time (t1). In the context of a particular model or theory all differences between reality at time (t1) and reality at time (t2) are classified as irrelevant and uncorrelated noise. These models and theories normally can not handle singularities and unique cases in time. Because there is no asymmetry in time (neither forward nor backward), we can also use these models and theories for explanations. However, if we want to change reality we have to add models and artefacts that fit into reality where details matter. Concrescence is the complementary activity to abstraction, done by adding all ‘necessary’ details that were abstracted from (see Fig. 1). Nowadays, the positivistic paradigm seems to be the dominant characterization for a scientific research line. But how can we incorporate design as an academic activity? According to the aspect ‘no reflection’ Winograd and Flores [42] propose to overcome problems by approaches like reflective practise as introduced by Schön [46]. Following Schön [46], a “practitioner approaches a practice problem as a unique case. He does not act as though he had no relevant prior experiences; on the contrary. But he attends to the peculiarities of the situation at hand” (p. 129). The practitioner confronted with a concrete design problem “seeks to discover the particular features of his problematic situation, and from their gradual discovery, designs an intervention” or action (p. 129). Schön’s concept of ‘reflection-in-action’ can be applied to a broad range of research activities, in which the scientist is looking for a particular solution for a given set of constraints (e.g., design of an experimental set-up, a formal proof, a research plan, a technical artefact, etc.). The implicit nature of all these activities is the synthetic approach, to come up with something concrete as part of reality (‘concrescence’, see



Fig. 1. A general schema for the process of academic knowledge development (adapted from [45], Fig. 2.8 and [39], Fig. 6.2).

Fig. 1). The two aspects of academic activity ‘abstraction’ and ‘concrescence’ are both necessary and complementary. If this is an appropriate description, why then does academia primarily focus on the positivistic paradigm and praise its ‘abstraction’? Given a reality at time (t1), science observes and analyses particular phenomena, makes proper abstractions, and tries to predict similar phenomena for reality at time (t2) (see Fig. 1). To preserve a stable view on reality [reality (t1) = reality (t2)], science has to operate under the following assumption, and this assumption seems to be essential: [{model, theory} ∉ reality]. Whatever a theory about the phenomenon gravity, for example, explains and predicts, this theory does not influence or change the phenomenon gravity at all! In this sense, models and theories of science (in the positivistic paradigm) are not part of the investigated and described reality; they are apart from this reality ([47], p. 12). We will use the term ‘reality’ (excluding models and theories) further on to make this distinction clear compared to the broader meaning of the term reality (including models and theories). The underlying mechanism to guarantee the fulfilment of the assumption is reductionism via abstraction. Any differences in empirical measurements between (t1) and (t2) are interpreted as just accidental factors (‘noise’),

which do not contradict the theory. With only knowledge, based on theories developed under the positivistic paradigm, the design of a concrete artefact is almost impossible, because the knowledge in these theories is purified from the changing contextual factors between reality at (t1) and at (t2). This lack of specific knowledge for any concrescence (e.g., craft skills) in science gives design and engineering disciplines their right to exist. Dreyfus [48] stimulated and continued a very important discussion about the importance of intuitive expertise, complementary to rational problem solving. On the other side, activities under the constructivistic paradigm claim to influence the reality and therefore to change this reality via the developed artefacts [reality (t1) ≠ reality (t2)], and in fact they do! The design and engineering disciplines develop knowledge to make the concretisation successfully possible. This knowledge realized in the form of models and artefacts can be interpreted as part of the reality, and not apart from it [{model, artefact} ∈ reality]. But how can design and engineering disciplines guarantee a stable reality, as desired by science? If models and artefacts are seen as part of the reality, i.e., as a subset of the reality under consideration, then any action that changes this subset changes the whole set (reality) as well. So, none of the constructiv-


istic disciplines can guarantee a stable reality, and they do not want to [49]. Up to now, the main conclusion is that knowledge developed in the positivistic paradigm and knowledge developed in the constructivistic paradigm are different. If the schema in Fig. 1 describes the whole process for developing knowledge, independent of a given paradigm, then positivistic and constructivistic knowledge can be seen as two subsets of a superset of knowledge: [{model, theory} ∪ {model, artefact} ≡ {model, theory, artefact}]. In this sense we can describe them as complementary [50]. Probably the most practical value of positivistic knowledge is the specification of the limits and boundaries under which constructivistic knowledge has to operate. For example, the state-of-the-art theory in thermodynamics explains and predicts that the design of a 'perpetuum mobile' is not feasible. Therefore, any attempt to design such a system is assumed to be unrealistic. The challenge in combining both kinds of knowledge is creating artefacts (attractors as singularities) that fall inside the constrained design space provided by positivistic knowledge. This kind of validated design is quite challenging, because the designer has to take almost all relevant constraints and limits into account. This consequence is probably one of the main reasons why designers oppose or even reject this position. But still, how can an academically sound research line be characterized that includes design-related activities? Let us have a closer look at existing design-related activities inside different academic disciplines. First, we will briefly describe and characterize the most well-established disciplines. Disciplines such as physics, chemistry, etc. present themselves as 'natural sciences'. Theory development takes place in a strictly formal manner with a rigorous experimental validation practice. Truth is based on the conformity of empirical data with the observed 'reality' of the phenomenon under investigation. The most important basis for conclusions is inductive logic.

Academic disciplines like mathematics present themselves as 'formal sciences'. Truth is based on logical consistency. One of the most important bases for conclusions is deductive logic. On the other hand, humane disciplines can be classified as 'ideal sciences'. Truth is based on belief: hermeneutic evidence grounded in intuition! The most important basis for conclusions is a value system contained in an individual knowledge base. How is it possible that sciences based on a positivistic paradigm claim to be, and present themselves as, the true academic disciplines (compared to the rest), even if they include (and need) constructivistic and synthetic components as well? One possible explanation is the important asymmetry between both kinds of knowledge: 'positivistic' knowledge claims a more fundamental status than 'constructivistic' knowledge. 'Positivistic' knowledge has a stable predictive and explanatory power over time (see Fig. 1; based on the underlying idea of absolute and timeless truth, see [6]), because it is particularly designed for this purpose. But this approach pays the price of not being able to reach reality: it can explain and predict reality, but not touch and change it (see also [51]). In the rest of this paper, we will develop an outline for research in the field of HCI which tries to take the considerations and conclusions of this section into account. III. WHAT IS HCI ABOUT? A. Overview HCI claims the broadest range of research activities, including all contributions from the above-mentioned communities. In the context of this paper we define the field of HCI as follows: HCI is a discipline concerned with the design, evaluation and implementation of interactive systems for human use and with the study of relevant phenomena surrounding them ('context of use'). Furthermore, we define an interactive system as a work system {WS} := [{U}, {S} with ICT3 component(s), other components].

3 ICT = Information and Communication Technology.


Following Dowell and Long [7], we distinguish between a work system {WS} and a [work] domain {WD}, and the relation between these two components (see Fig. 2). The major goal for HCI to become an engineering discipline is "the design of behaviours constituting a work system" {WS} "whose actual performance (PA) conforms with some desired performance (PD)" (p. 1522). The relationship between the work system and the [work] domain has to be investigated in the context of task and domain analysis related activities (e.g., [52] [53] [54]). One of the main issues in the relation {WS}↔{WD} is the man-machine function allocation problem [55]. A system {S} for a real-world application domain {WD} can only be developed taking the relation {WS}↔{WD} into account.


Fig. 2. The distinction between the interactive work system {WS} and the work domain {WD} (adapted from [7]).
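To make the notation above concrete, the following sketch models a work system and a work domain in the spirit of this definition. It is an illustration only: the class names, the toy performance measure, and the example values are hypothetical and are not taken from Dowell and Long [7].

```python
from dataclasses import dataclass, field

@dataclass
class User:
    """{U}: perceptual, cognitive, acting and emotional capabilities and limitations."""
    capabilities: dict

@dataclass
class System:
    """{S}: a technical artefact with at least some computational (ICT) power."""
    functionality: set

@dataclass
class WorkDomain:
    """{WD}: the context of use, conceptualised as a class of affordances."""
    affordances: set

@dataclass
class WorkSystem:
    """{WS} := [{U}, {S} with ICT component(s), other components]."""
    user: User
    system: System
    other_components: list = field(default_factory=list)

    def actual_performance(self, domain: WorkDomain) -> float:
        """P_A: toy measure of how much of the domain the work system covers."""
        covered = self.system.functionality & domain.affordances
        return len(covered) / max(len(domain.affordances), 1)

def conforms(work_system: WorkSystem, domain: WorkDomain, desired_performance: float) -> bool:
    """Engineering goal: actual performance P_A conforms with desired performance P_D."""
    return work_system.actual_performance(domain) >= desired_performance

# Example: a text-editing work domain served by a user plus an editing system.
domain = WorkDomain(affordances={"create text", "edit text", "print text"})
ws = WorkSystem(User({"typing": "expert"}), System({"create text", "edit text"}))
print(conforms(ws, domain, desired_performance=0.8))  # False: only 2 of 3 affordances covered
```

The only point of the sketch is that {WS} and {WD} are distinct entities whose relation can be made explicit and, in principle, testable.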

Developing a work system without taking a [work] domain into account is risky, because later there is no guarantee that the designed work system can contribute to achieving the desired performance of the whole system. On the other hand, without technology push, the technical option space for solving real-world interactive problems would be seriously constrained (e.g., the visionary device of Vannevar Bush called MEMEX, see [56]; [57], pp. 41-42). So far, the main conclusion is the need to investigate the relationship between technology developed through 'push' and the requirements coming from existing or planned

work domains. Which type of interaction technique is appropriate for which type of task and work domain? However, how can we develop interaction techniques without having a possible interactive task in mind? One possible answer is the development of 'generic' interaction techniques, which should be applicable to any task type ('generic' in the sense of 'work-domain independent'). Is, for example, 'mouse'-based interaction really a generic and optimal interaction technique (see the contradicting empirical results in [58])? On a more general level, what kind of research line has to be established to gain valid answers to these kinds of research questions? B. Work System The two major sub-elements of the work system are of a completely different nature: (1) humans can be described in terms of perceptual, cognitive, acting, and emotional capabilities and limitations (user {U}); (2) the system (a technical artefact) can be described in terms of processing power, system architecture, input/output relations, functionality, material properties, etc. (system {S}). Green, Davies and Gilmore [60] differentiate between three different views: (1) the psychological view, (2) the systems view (focus on the artefact), and (3) the interactive view. We follow this classification to describe required design knowledge: (1) design knowledge related to the user {U} (e.g., training, tutorials, help systems, etc.), (2) design knowledge related to the system {S}, and (3) design knowledge related to the interaction space {IS} [59]. The 'context of use' will be discussed in the next section about the work domain. Attempts to integrate the two worlds of the user and of the system have a long tradition, and this integration is still the most important challenge (see [60] [61]). If we conceptualise the field of HCI primarily as an engineering discipline [7], we have to translate the research results from the social and cognitive sciences into technical dimensions that can be directly applied to solve design issues for interactive systems (e.g., [62]). This kind of translation


is a valuable and challenging goal for the whole HCI research community. Even today, one of the dominant HCI research lines has focussed on the design of the interface of the interactive system, but as a matter of fact, interface designers design the interaction space {IS} between the user and the system. This view will have a strong impact on the theoretical foundations of HCI: describing user-system interaction as a dynamic relation {IS} := f[{U}↔{S}]t, taking the relation {WS}↔{WD} into account. The interaction framework of Barnard and Harrison [59] is a first and valuable attempt in this direction. Any kind of interaction has at least one essential component: the synthetic part, to end in something concrete. Two different approaches to investigate the user can be distinguished: (approach-1) to treat a user as a human being with a physical body (e.g., the view of biology, psychophysics, physiology, etc.), and (approach-2) to treat a user in a particular context of use (e.g., design, marketing, manufacturer, psychology, etc.). Approach-1 looks at a user without taking the relation {WS}↔{WD} into account; but this description is not completely correct. The human being is investigated in his/her natural environment, which can be described in physical terms. So, one could interpret {WD} as the whole world, specified and described beyond any cultural, political, economic, and social constraints (the purely physical view of nature). Approach-2 tries to incorporate the relationship {WS}↔{WD} in a more specific manner: {WD} as a concrete (inter)action space with all related semantics regarding cultural, political, economic, and/or social dimensions (see [63] [64] [7]). To connect approach-2 to approach-1, we must look for a theoretical foundation of human activities that can explicitly take contextual boundaries into account (e.g., activity theory, see [65] [66]). Green et al. [60] discuss in detail the pros and cons of trying to connect these two approaches. They describe three lines of possible development: "Two of these lines are ventures in developing representations of

interactive situations which apply equally to both partners, the person and the system. The third line is even less theoretically ambitious, seeking only to crystallize and expose concepts which many users (even if not HCI workers) already recognize, but which have not yet been presented in an organized way” (p. 109). C. Work Domain In all design projects, which have to develop a fully-fledged product for a particular market segment, we are obliged to investigate the domain in which the product has to survive. A domain can cover a broad range of social activities. Following Dowell and Long [7] “a domain of application can be conceptualised as: ‘a class of affordance of a class of objects’. Accordingly, an object may be associated with a number of domains of application (‘domains’)” (p. 1524). Following Nardi [67] three main approaches to investigate a user in a context of use or task context can be distinguished: (1) situated actions [64], (2) distributed cognition [68], and (3) activity theory [69]. Nardi [67] concludes: “Activity theory seems to be the richest framework for studies of context in its comprehensiveness and engagement with difficult issues of consciousness, intentionality, and history” (p. 96). With a concerted effort by HCI academics to develop a systematic conceptual framework (the work domain as a well specified context for human activities), much progress could be made. We could then answer questions like, for what kind of task is a particular type of interface and interaction technique most appropriate. Last, but not least, a very practical argument has to be discussed: the argument that a task context cannot be excluded from a research line to acquire validated design knowledge utilizing on empirical investigations. Any kind of a human action can be described and interpreted as a task and/or problem-solving activity [66]. For example, if we investigate human (re)actions to a controlled environment in a laboratory setting (a very artificial context), then − whatever this


test subject is doing − his or her (re)actions cannot be interpreted without taking the concrete task context into account (often the intended part of the design of the experimental setting: the set of independent variables/ factors). The main critique against experimental settings is the often unclear relation between an artificial experimental setting and natural task contexts (the ecological validity discussion; see [63]). Nevertheless, even if we would follow a research line in which we try to exclude a work domain (in the sense of [7]), we will still not be able to exclude a context of use at all (in this very general sense). For example, in the case of research on pattern recognition of facial expressions, we have to assume a possible context of use although it is implicitly given. In one of the following sections about the user’s validation cycle we will return to this issue. IV. HOW TO GET A SCIENTIFIC LANGUAGE? Kuhn [70] differentiated between two phases in the development of an academic discipline: before and after reaching consensus. Reaching a consensus phase can take a long time (i.e., between decades up to centuries). The consensus phase is, for example, characterized by a common content in different textbooks and handbooks providing successful examples from which students can learn. Green, Davies and Gilmore [60] stated very clearly that the aim of establishing a common research line for HCI “is only feasible if a common language can be developed, in which relevant aspects of both the person and the system can be expressed” (p. 99). Rauterberg and Szabo [71] made a first constructive attempt to conceptualise and compare different perceptual effects (e.g., visual and auditory modality on the human side) with different technical options to produce particular perceptual impressions. Since 2003 at least one HCI glossary4 has been available online, in which all given definitions of HCI relevant terms and concepts are

4 http://id00156.id.tue.nl/hci/

provided that could be extracted from existing ISO standards [72]. A. What is a Scientific Language? A coherent and powerful technical language based on consensus is a necessary precondition for any progress in an academic discipline [60]. Up until now, the HCI community has had no well-established corpus of descriptors. For example, the important concept of interaction style, introduced by Shneiderman [73], is translated into an interactive style [57], into a dialogue style [74] and into a dialogue technique [16] (referring to ISO 9241). Neither the terms 'interaction style', 'interactive style' nor 'dialogue style' can be found in the keyword index of [75] or [76]. In Jacko and Sears [77] at least the term 'dialogue style' made it into the subject index, but not into the text. Only Baecker and Buxton [57] distinguish between nine major categories of interaction styles: (1) command line, (2) programming language, (3) natural language, (4) menu, (5) form filling, (6) iconic, (7) window, (8) direct manipulation, and (9) graphical interaction (p. 427). They conclude that "more effort needs to be expended on developing a taxonomy of the content of human-computer interaction" (p. 434). Vet and Ruyter [78] developed a concept of interaction styles that decomposes an interaction style into three components: (1) conceptual operations, (2) interaction structure, and (3) interaction techniques. "An interaction style is thus defined as the execution of a conceptual operation within an interaction structure using an interaction technique" (p. 8). To be able to compare the published empirical results with the strengths and weaknesses of different technologies, a special notation language was proposed [71]. Only with such a notation language can the results of published experiments be compared and discussed to reach valuable conclusions for further developments. Very successful, and therefore prominent, examples of such a scientific language are the formal notations in the different


engineering communities. These formal notations based on mathematics can be seen as probably the only truly international agreed upon academic language. These types of languages are highly attractive, and sometimes are claimed to be a necessary requirement for an academic quality standard. Several researchers in HCI and related fields already moved in this direction [79], [80], [81], [82], [83], [84], and [54]. But there is a major pitfall or even shortcoming with formal notations: “by their very nature, mathematical metaphors can only be applied to a narrow range of problems” ([85], p. 589). Gupta [85] discusses several reasons why an academic language based on mathematics may not always, or even cannot always provide appropriate insight into a complex reality (“the widespread misappropriation of the language of mathematics in the social … sciences has to be one of the great tragedies of our time”, p. 589). One of the main challenges for a research line in HCI is to figure out where (and why) formal notations are appropriate and where (and why) they are not [86]. For example, far more than any formal approach, the book of Suchman [64] has been influential on the HCI community ([87], p. 615). We do not wish to express or transform everything into a formal language, but we must do as much as possible, taking into account that many important ideas and relevant design knowledge can only be expressed in a non-formal language. B. How to Obtain Consensus Habermas differentiates four types of speech acts [88]: (1) communicativa imply the freedom for an expressed opinion itself and the freedom to express one’s own opinion (everyone is allowed to take part in a communication); (2) representativa imply the semantics of the expressed statement and the possible subjective bias in it; (3) constitutiva cover the objective truth in the statement; and (4) regulativa enable the expression of normative aspects. Agreement, according to Habermas, can be reached via truthful expressions in a power-free communication (in German ‘machtfreier Diskurs’).

A truthful expression in a speech act is characterized by all involved parties being able to reach a potential agreement, based on their rational reason. Rational reason is defined as knowledge about a possible way to justify the truth in an objective manner, during the speech act itself (‘veracity’) and beyond in daily practice (‘credibility’). To achieve consensus, it is important that all involved parties share and accept a similar way to describe and to justify the ‘truth’. This means having at least consensus about a validation methodology. How to establish validation into a HCI research line (on different levels) is described and discussed in the following section. V. COHERENT RESEARCH LINE A. Does HCI Need a Coherent Framework? We argue in this paper for a research line, not for a particular framework. What is a framework? Most introduced and described frameworks try to conceptualise one or more domains with a set of relevant dimensions. Green [80] considered a space in which to locate many different kinds of notations. Floyd [89] discussed an evolutionary approach for software development as a reality construction. Kuutti and Bannon [90] promoted activity theory as a unifying concept to integrate the different perspectives. Cugola et al. [83] discussed a framework for formalizing inconsistencies and deviations in human-centered design. Furnas [5] introduced MoRAS as a framework similar to the approach of Rasmussen, Pejtersen and Goodstein [91]. Olson and Olson [92] described a framework for collaboration technology with four key concepts: (1) common ground, (2) coupling of work, (3) collaboration readiness, and (4) collaboration technology readiness. De Souza et al. [93] promoted a semiotic approach to HCI research. For the field of information system research Bacon and Fitzgerald [94] identified a need for a systematic framework and proposed one. Ng [87] introduced a theoretical framework for understanding the relationship be-


tween situated action and planned action models in the information retrieval contexts. Many HCI scholars still believe in frameworks. But to which extent are these frameworks really helpful in unifying the HCI community? B. Is a Coherent Framework Achievable? Harris and Henderson [95] expressed their concerns regarding generic frameworks: “We begin by recognizing that no attempt to fit the world into a neat set of categories can succeed for long. We will always encounter inconsistent, ambiguous, messy bits that don’t fit. Standard system design which depends on making the world fit a neat set of categories will naturally have trouble” (p. 94). Given this quote, what can be done about this, and what are the main reasons for questioning the feasibility of generic frameworks as candidates for improving the maturity of the research community? To be clear, we do not argue against frameworks as such (whether they are specific or generic), but is focussing on one or at least only a few frameworks a good way to go? Rozanski and Haake [96] conclude that there may be too many facets of HCI that make a unified framework hard or even impossible to achieve. Floyd [89] and Santos, Kiris and Coyle [97] stress the fact that system design takes place in a changing environment; it changes often rapidly because of a very fast technology push. Given all these constraints we have to conclude that a unifying framework is very difficult to establish, if at all. But, we can strive for a unifying research line. C. Triangulation This section discusses the most relevant aspects of a possible research line for HCI on a high conceptual level. Inspired by the maturity model of Humphrey [98], and to start with we try to introduce a similar view. The major levels a new design discipline might go through to become mature are the (1) initial phase, (2) repeatable design processes, (3) defined research line, (4) managed research activities, and (5) optimized theory

development. At the top level (5), a cyclic structure for self-optimizing knowledge development should be established.


Fig. 3. The academic validation cycle; triangulation for an academic research approach with a rigorous validation component (adapted from Greenberg [33]; see also Wickens et al. [100], p. 387f).
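Read together with the explanation in the following paragraph, the cycle of Fig. 3 can be sketched as a simple loop. This is a minimal illustration under stated assumptions: the function names and the trivial stand-in implementations are hypothetical placeholders, not an established method from the cited literature.

```python
def synthesize(design_knowledge: dict) -> dict:
    """Synthesis: instantiate the current design knowledge in a concrete artefact."""
    return {"artefact": dict(design_knowledge)}

def validate_empirically(artefact: dict) -> dict:
    """[Empirical] validation: stand-in for observation, experiment or inquiry."""
    return {"observations": artefact["artefact"], "shortcomings": []}

def analyse(evidence: dict, design_knowledge: dict) -> dict:
    """Analysis: feed the validated findings back into the body of design knowledge."""
    updated = dict(design_knowledge)
    updated["validated"] = (evidence["shortcomings"] == [])
    return updated

def triangulation_cycle(design_knowledge: dict, iterations: int = 3) -> dict:
    """Design knowledge -> synthesis -> interactive system -> [empirical] validation
    -> analysis -> (updated) design knowledge, repeated as a self-optimizing cycle."""
    for _ in range(iterations):
        artefact = synthesize(design_knowledge)
        evidence = validate_empirically(artefact)
        design_knowledge = analyse(evidence, design_knowledge)
    return design_knowledge

print(triangulation_cycle({"principle": "consistency"}))
```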

To combine the analytical strength of empirical validation methods (e.g., observation, experiment, inquiry, etc.) with the synthetic strength of system design, Mackay and Fayard [99] introduced the triangle structure presented in Fig. 3 (see also Greenberg [33] and Wickens et al. [100]). This triangle structure conceptualises the three most important components of HCI research: (1) the collection of design knowledge, (2) the interactive system in different possible representation forms, and (3) the various options for usability testing and [empirical] validation. This triangle structure is similar to the circular model of Henderson [101] in which the following steps are differentiated: (1) design ("creating improvements in the activity"); (2) implementation ("bringing the designs to life"); (3) use ("people's work that is to be improved"); (4) observation ("encountering and capturing the activity"); and (5) analysis ("understanding the regularities in the activity") (p. 262). 1. Design Knowledge The development and collection of design knowledge is the primary goal of the whole research line. This validated knowledge with high predictive power should be formulated in design theories based on high-level design


principles (e.g., Gram and Cockton [102]), medium-level guidelines (e.g., Mayhew [103]), and low-level implementation techniques (e.g., metrics according to Rauterberg [84]). Sutcliffe [25] proposes “that HCI knowledge should be theory-grounded, and development of reusable ‘designerdigestible’ packets will be an important contribution in the future” (p. 197). Finkelstein and Kramer [17] very clearly summarize “that we cannot expect industry to make very large big-bang changes to processes, methods and tools, at any rate without substantial evidence of the value derivable from those changes. This, accompanied again by the increased disciplinary maturity, has led to a higher ‘validity’ barrier which research contributions must cross. It is readily observable, that research that proposes new frameworks, methods and processes are not accepted without positive evidence that they are of use rather than simply airy and unfounded speculation” (p. 4). We fully agree with this important conclusion. 2. Interactive System Design knowledge with sufficient predictive power enables the design expert to apply this knowledge to a concrete system design with a guaranteed outcome. In the most powerful form this design knowledge enables the designer to calculate the intended system characteristics in advance [49], [62], [84]. All design knowledge can be given away in the form of design theories, written down in books and articles, shown in videos, taught in education and training, etc., and sometimes demonstrated in the form of concrete artefacts as well [104]. The crucial part is taking the step from requirements to specifications before implementing. One of the promising methods for this step is MUSE [53]. So far only HCI academics with an engineering mind set are willing to invest in such kind of structured approaches [105], [106]. It seems to be the case that the introduction of a structured method like MUSE is too early, based on the fact that not enough design knowledge about the user is available in a particular context of use. Sutcliffe and

McDermott [107] and Sutcliffe and Wang [108] introduced other approaches to incorporating HCI with software engineering methodologies. However, HCI as an engineering discipline will need such structured design methods sooner or later. 3. Empirical Validation To validate a proposed design knowledge (e.g., design principles), empirical research methods are necessary. But first an abstract design principle for a particular type of design class has to be instantiated via concrete artefacts {S1 … S n} before it can be tested. To make proper use of the full range of empirical research methods, proper training in these methods is required. Unfortunately, based on a lack of this kind of profound expertise, design oriented HCI researchers are starting to complain about the “tyranny of evaluation” [109]. So far, only educational programs in social sciences have provided this kind of training in empirical research methods. For non-social scientists several good textbooks are already available [110], [111], [112], [113]. Throughout a thorough empirical validation activity, the shortcomings of a particular design instantiation can be discovered, and can often be directly turned into a solution ([114], 105ff), [115]. D. Industrial Relationship The research field of HCI has raised much attention and interest from industry. “The human-computer interface is critical to the success of products in the market place...” ([116], p. 794). ICT companies have a growing need for ICT professionals with an increasing expertise in HCI [23], [117]. Industry is looking for highly skilled interaction designers who can contribute to commercial success based on their profound design expertise (Norman in [27]). Moreover, industry is mainly interested in utilizing design knowledge that can directly lead to successful product design (see Fig. 4, commercial optimization cycle). It is a plausible, but still is an insufficiently proven assumption, that usability immediately contributes to commercial success [118]. Bias


and Mayhew [119] present and discuss several projects in which a cost-justifying usability approach was successfully applied. They collected and discussed a number of cost-justifying arguments to convince project managers in industry to invest in usability engineering activities. The commercial optimization cycle (see Fig. 4) is primarily money-driven, while the academic validation cycle should deliver design knowledge of high quality.


Fig. 4. Linking the academic validation cycle (below) to the commercial optimization cycle (above).
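As a worked illustration of what a cost-justifying argument can look like, the sketch below compares the cost of usability engineering work with the yearly value of the working time it saves. All numbers and names are hypothetical; they do not reproduce a calculation from Bias and Mayhew [119].

```python
def usability_payoff(engineering_cost: float, users: int,
                     minutes_saved_per_user_per_day: float,
                     hourly_rate: float, working_days_per_year: int = 220) -> float:
    """Yearly value of saved user time minus the cost of the usability work."""
    hours_saved = users * (minutes_saved_per_user_per_day / 60.0) * working_days_per_year
    return hours_saved * hourly_rate - engineering_cost

# Hypothetical project: 500 users, 2 minutes saved per user per day.
print(usability_payoff(engineering_cost=60_000, users=500,
                       minutes_saved_per_user_per_day=2.0, hourly_rate=40.0))
```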

Given the high pressure and urgent demands from industry, the whole HCI field has primarily offered and delivered 'usability testing methods' instead of 'validated design knowledge' including a structured design methodology. 'Discount usability' and 'usability testing' seem to be an outsourcing strategy for selling scientific validation process methodology, instead of developing and delivering the desired design knowledge. This statement is maybe over-critical, but it points to the core of the problem. Up to now, books have contained many relevant, design-related ideas, hints and tips, sometimes called guidelines (e.g., [74]) or even design principles (e.g., [102]). However, what is still missing is a basis for a corpus of design knowledge that contains thoroughly, empirically validated results. As Gaines [30] points out, "I will conclude that we are still at a very early stage in the development of HCI that the major impact of the technology on society is yet to come and that to understand the design issues involved we will need much greater overt understanding..." (p. 19). Industry would perceive the outcome of the academic HCI research cycle as sufficiently mature if HCI research were primarily to operate under an engineering paradigm. E. 'System' Validation Cycle One of the open questions is the appropriate substitution of a real system by something else that is much faster to create, but still retains the most relevant features for further validation. Hix and Hartson [120] emphasize a two-step approach: (1) conceptual design and (2) [initial] scenario design. "Conceptual design is higher level and has to do with synthesizing objects and operations. Detailed design has to do with activities such as determining the wording of messages, labels, and menu choices, as well as the appearance of objects on the screen, navigation among screens, and much more" (p. 132). A scenario design can be worked out in the form of a set of screens, storyboards, or even video clips [121].

much greater overt understanding...” (p. 19). Industry would perceive the outcome of the academic HCI research cycle as sufficiently mature if HCI research would primarily operate on an engineering paradigm. E. ‘System’ Validation Cycle One of the open questions is the appropriate substitute of a real system with something else that is much faster to create, but still retains the most relevant features for further validation. Hix and Hartson [120] emphasize a two-step approach: (1) conceptual design and (2) [initial] scenario design. “Conceptual design is higher level and has to do with synthesizing objects and operations. Detailed design has to do with activities such as determining the wording of messages, labels, and menu choices, as well as the appearance of objects on the screen, navigation among screens, and much more” (p. 132). A scenario design can be worked out in the form of a set of screens, story boards, or even video clips [121]. Particular research questions can only be investigated if a complete interactive system is available. Fortunately, a lot of relevant questions can be already answered with lighter substitutes than the real system (e.g., prototype, ‘Wizard of Oz’ simulation, formal specification, concept; see Fig. 5). However, these substitutes can only replace the real system if they are − in general − fully validated beforehand, and all their methodological constraints are well investigated and known. If we have to rely on cheap replacements or light substitutes, then we have to make sure that the results gathered with these substitutes are not biased, at least not uncontrollably biased with the chance for proper corrections afterwards. Very little has been done so far to validate these substitutes compared to real systems (a positive exception is [122]). This issue is a very important, but highly underestimated, research contribution [123], [124]. This research contribution will lead to a properly validated design methodology beyond Lim and Long [53].


[Figure 5 not reproduced; its labels are: design knowledge, analysis, synthesis, validation, interactive systems; substitutes: concept, formal specification, 'Wizard of Oz', prototype; real system, [empirical] validation.]
Fig. 5. System validation cycle with different ways to substitute a real system with a lighter replacement.
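To make the idea of validating a light substitute more concrete, the sketch below (illustrative only; all names and numbers are hypothetical and not taken from this paper) compares a usability measure, here task-completion time, obtained with a substitute such as a 'Wizard of Oz' prototype against the same measure obtained with the real system, and estimates whether the substitute introduces a systematic bias that would have to be corrected before it can stand in for the real system.

```python
# Illustrative sketch (hypothetical data): checking whether a light substitute
# (e.g., a 'Wizard of Oz' prototype) yields the same task-completion times as
# the real system, so that results from the substitute can be trusted or
# corrected for a known, systematic bias.
import math
from statistics import mean, stdev

# Task-completion times (seconds) for the same tasks, once measured with the
# substitute and once with the real system; purely invented numbers.
substitute_times = [41.2, 38.5, 52.0, 47.3, 39.8, 44.1, 50.6, 42.9]
real_system_times = [38.0, 36.1, 48.2, 45.0, 37.5, 41.8, 47.9, 40.2]

differences = [s - r for s, r in zip(substitute_times, real_system_times)]
bias = mean(differences)                      # average over- or under-estimation
spread = stdev(differences)                   # variability of that bias
n = len(differences)
t_statistic = bias / (spread / math.sqrt(n))  # paired t-test statistic (df = n - 1)

print(f"estimated bias of the substitute: {bias:.2f} s")
print(f"paired t-statistic (df={n - 1}): {t_statistic:.2f}")

# If the bias is stable and well understood, measurements taken with the
# substitute can be corrected afterwards; if it is uncontrolled, the substitute
# is not yet a valid replacement for the real system.
if abs(bias) < 0.05 * mean(real_system_times):
    print("bias below 5% of the mean real-system time: substitute looks usable")
else:
    print("bias is substantial: correct for it, or validate the substitute further")
```

Such a check is of course only one small piece of the validation effort argued for above; the point is that the substitute itself becomes an object of empirical validation before it is used to generate design knowledge.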

F. 'User' Validation Cycle

Up until now, the HCI research arena has demonstrated a strong eclecticism in its approach to methodology. Methods from psychology, the social sciences, ethnography, etc. have been adopted to solve some of the immediate problems without taking the ontological consequences into account. Books and articles about particular HCI methodologies give an introduction to and overview of possible methodological adaptations [125], [126], [127], [128]. The preference for empirical validation methods (compared to formal validation methods) is based on the fact that users' behaviour, when confronted with a new system, is very difficult to predict. If a user substitute (e.g., a user model or user simulation; see Fig. 6) is used for validation instead of a representative sample of real end users, these substitutes have to be validated beforehand as well. Such validated user dummies are already partially delivered by the following research contributions: cognitive and mental modelling [129], human factors [130], artificial intelligence [131] and humanoid robotics [132]. Research in this direction has led to tools like AMME [133], HOMER [134], or IMPRINT [135].

We entirely agree with Landauer [128] that the professional use "of good research methods is a pressing and immediate practical concern, not just a step toward a firmer scientific base" (p. 204). Monk [126] differentiates between the following four polarities: (1) 'naturalistic observation' versus 'rigorous experiments', (2) 'field' versus 'laboratory research', (3) 'scientist as participant' versus 'scientist as observer', and (4) 'few' versus 'many test subjects'. One important point he makes is "the importance of developing predictive models and theories which can suggest and explain empirical results" (p. 136). As Long [136] put it: "Greater effectiveness of interactions, practices and research will derive from the validation of new HCI knowledge, and the specification of relations between HCI research and the design of human-computer interactions, such that research is 'fit-for-design-purpose'" (p. 241). Only with a rigorous validation methodology will the development of design knowledge lead to stable theories with sufficient predictive power for the design of new interactive systems. Such high-quality design theories will enable interaction designers to specify the intended system characteristics beforehand [137].


[Figure 6 not reproduced; its labels are: design knowledge, analysis, synthesis, validation, interactive systems; user substitutes: expert, user model, user simulation; real user, empirical validation.]
Fig. 6. Empirical validation cycle for different ways to substitute real users with 'user' dummies.
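As a concrete illustration of such a 'user dummy', the sketch below (an assumption-laden example, not a method proposed in this paper; the task encoding and observed times are invented) uses the classic keystroke-level model operator times of Card, Moran and Newell [79] to predict expert task-execution time, and then compares that prediction against observed times. Exactly this kind of comparison is what "validating the substitute beforehand" amounts to in practice.

```python
# Illustrative sketch (hypothetical task): predicting expert task time with a
# keystroke-level model (KLM), i.e. a very simple 'user dummy'. Operator times
# follow the classic values reported by Card, Moran and Newell; the task
# encoding and the observed times below are invented for the example.
KLM_OPERATORS = {
    "K": 0.20,  # press a key or button (skilled typist)
    "P": 1.10,  # point with a mouse to a target
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation for a unit of action
}

def predicted_time(sequence: str) -> float:
    """Sum the operator times for a task encoded as a string such as 'MPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical encoding of 'open a file via the menu': think, point to the
# menu, click, think, point to the entry, click.
task = "MPKMPK"
model_estimate = predicted_time(task)

# Validating the user dummy: compare the model's estimate with (invented)
# times observed from a small sample of real expert users.
observed_times = [5.8, 5.1, 6.4, 5.5]
mean_observed = sum(observed_times) / len(observed_times)
relative_error = abs(model_estimate - mean_observed) / mean_observed

print(f"KLM estimate: {model_estimate:.2f} s, observed mean: {mean_observed:.2f} s")
print(f"relative prediction error: {relative_error:.0%}")
# Only if such errors stay small across many tasks can the model be trusted
# as a substitute for real users in early design validation.
```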

VI. CONCLUSIONS

Evaluating the role of theory in HCI (and taking design seriously) means evaluating the usefulness and usability of applying a theory to the design of interactive artefacts. "A deeper understanding of how representations are created and how they contribute to the solution of problems will become an essential component in the future theory of design" ([43], p. 78). Hence, what does HCI really need to have a serious chance of becoming a mature engineering discipline? In addition to Long [138], here are some very basic answers:

• A new theoretical focus to investigate the interaction space based on specified problems: The interaction space between a human and a technical artefact is difficult to conceptualise, and it is difficult to find the proper set of parameters. On the one hand, we have to deal with the human being, primarily described and specified in qualitative dimensions; on the other, we have to design a technical artefact, described and specified with quantitative parameters.

• A coherent taxonomy with a powerful corpus of descriptors and terminology: Developing a coherent taxonomy includes developing a coherent theory as well, and vice versa. A scientific terminology without a theoretical context is neither possible nor desirable.

• A rigorous validation method to prove design knowledge and thereby achieve progress (see Fig. 3): The academic community of HCI would benefit from agreeing upon an objective and rigorous manner of validation; the 'wild' growth of unvalidated statements could then converge to a couple of stable theoretical nuclei. One necessary pre-condition seems to be the specification of 'relevant' problems in relation to the state of the art (documented via publications, and mainly via patents for technical artefacts).

The needs of HCI are growing as the power and complexity of interactive systems continue to grow, and "we will be unwise to neglect any approach to meeting them" ([139], p. 160). If the HCI research area wants to survive as a scientific discipline, at least the following three conditions have to be fulfilled (to move from the explorative to the paradigmatic phase):




• The specification of the most relevant elements ('research objects', including 'problem definitions').

• The development of a coherent scientific language (for achieving consensus).

• The establishment of a research line to develop design knowledge in a validated manner with predictive power (based on triangulation).

"A number of dramatic human-computer interaction design successes, ..., have already occurred as a direct result of systematic research − as contrasted with intelligent creativity alone" ([128], p. 224). On the one hand 'systematic research' and on the other 'intelligent creativity alone' seem to be contradictory, but we have argued that they are complementary. Historically speaking, HCI research and development has been 'spectacularly' successful, and has indeed fundamentally changed interactive computing [116]. It is important to appreciate that decades of research are involved in creating interactive technologies and making them ready for widespread use. Using methods researched and validated in other scientific fields allows HCI to move quickly to robust, valid results that are applicable to the more applied area of design. But to improve the maturity of HCI research and to guarantee the long-term survival of this research field, we have to do more. Let us summarize the two major messages of this paper: (1) HCI research should move from 'art' to 'science', and (2) from 'evaluation' to 'calculation'. Based on recent results of Bartneck and Rauterberg [4], the engineering discipline should take the lead in bridging the gap between artistic design and science. If HCI research can operate on a sound scientific engineering paradigm, we can successfully move in this direction.

VII. ACKNOWLEDGEMENTS

I am much indebted to John Long for the intensive and fruitful discussions that made this paper possible. In addition, I would like to thank Christoph Bartneck, Loe Feijs, Panos Markopoulos, Kees Overbeeke, Ben Salem, Pieter-Jan Stappers, and several anonymous reviewers for their valuable feedback and remarks on earlier versions of this paper.

REFERENCES

[1] Arnowitz, J. and Dykstra-Erickson, E., "CHI and the practitioner dilemma," interactions, vol. 12, no. 4, July 2005, pp. 5-9.
[2] Rogers, Y., Bannon, L. and Button, G., "Rethinking theoretical frameworks for HCI: report on the INTERACT'93 workshop," ACM SIGCHI Bulletin, vol. 26, no. 1, 1994, pp. 28-30.
[3] Rauterberg, M., "How to characterize a research line for user-system interaction," IPO Annual Progress Report, no. 35, 2000, pp. 66-86.
[4] Bartneck, C. and Rauterberg, M., "HCI reality - an unreal tournament?" (submitted to International Journal of Human Computer Studies, 2006, under review).
[5] Furnas, G.W., "Future design mindful of the MoRAS," Human Computer Interaction, vol. 15, 2000, pp. 207-263.
[6] Siepmann, J.P., "What is Science?" Journal of Theoretics, vol. 1, no. 3, 1999, pp. 1-4.
[7] Dowell, J. and Long, J.B., "Towards a conception for an engineering discipline of human factors," Ergonomics, vol. 32, no. 11, 1989, pp. 1513-1535.
[8] Aken, van J.E., "Management research based on the paradigm of the design sciences: the quest for field-tested and grounded technological rules," Journal of Management Studies, vol. 41, no. 2, 2004, pp. 219-246.
[9] Butler, D., "Interdisciplinary research 'being stifled'," Nature, vol. 396, 1998, p. 202.
[10] Shackel, B., "Ergonomics for a computer," Design, vol. 120, 1959, pp. 36-39.
[11] Shackel, B., "Human-computer interaction: whence and whither?" Journal of the American Society for Information Science, vol. 48, no. 11, 1997, pp. 970-986.
[12] Böhme, G., Daele, W. van den, Hohlfeld, R., Krohn, W. and Schäfer, W., "Introduction," in: W. Schäfer (Ed.), Finalization in Science: The Social Orientation of Scientific Progress. Reidel, Dordrecht, 1983, pp. 3-11.
[13] Masterman, M., "The nature of paradigms," in: Lakatos, I. and Musgrave, A. (Eds.), Criticism and the growth of knowledge. University Press, Cambridge, 1970, pp. 59-89.
[14] Myers, B.A., "A brief history of human-computer interaction technology," ACM interactions, vol. 5, no. 2, 1998, pp. 44-54.
[15] Grudin, J., "Crossing the divide," ACM Transactions on Computer-Human Interaction, vol. 11, no. 1, 2004, pp. 1-25.


[16] Cakir, A. and Dzida, W., "International ergonomic HCI standards," in: M. Helander, T.K. Landauer and P. Prabhu (Eds.), Handbook of Human-Computer Interaction. North-Holland, Amsterdam, 1997, pp. 407-420.
[17] Finkelstein, A. and Kramer, J., "Software engineering: a roadmap," in: A. Finkelstein (Ed.), The Future of software engineering. ACM, New York, 2000, pp. 3-24.
[18] Hartson, H.R., "Human-computer interaction: interdisciplinary roots and trends," The Journal of Systems and Software, vol. 43, no. 2, 1998, pp. 103-118.
[19] Hunt, A.D., Radical user interfaces for real-time musical control. Ph.D. Thesis, University of York, UK, 2000.
[20] Clemmensen, T., "What happened to the psychology of human-computer interaction? A review of 17 years of research presented in one journal," in: Khalid, H.M., Lim, T.Y. and Lee, N.K. (Eds.), Proceedings of SEAMEC 2003, Kuching, 2003, pp. 266-273.
[21] Newell, A. and Card, S.K., "The prospects for psychological science in Human-Computer Interaction," Human-Computer Interaction, vol. 1, no. 3, 1985, pp. 209-242.
[22] Newell, A. and Card, S.K., "Straightening out softening up: response to Carroll and Campbell," Human-Computer Interaction, vol. 2, no. 3, 1986, pp. 251-267.
[23] Czerwinski, M., Sears, A., Dringus, L. and Thomas, B.B., "Educating HCI practitioners: evaluating what industry needs and what academia delivers," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 96. ACM Press, 1996, p. 425.
[24] Sutcliffe, A., "HCI theory on trial," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 91. ACM Press, 1991, pp. 399-401.
[25] Sutcliffe, A., "On the effective use and reuse of HCI knowledge," ACM Transactions on Computer-Human Interaction, vol. 7, no. 2, 2000, pp. 197-221.
[26] Ubink, J., Boegels, P., Henderson, A., van der Heiden, G., Minstrell, J., Noldus, L., Rauterberg, M., Thomas, A., van der Veer, G., Vredenburg, K. and Wong, W., "National and international frameworks for collaboration between HCI research and practice," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 00. ACM Press, 2000, p. 367.
[27] Shneiderman, B., Card, S., Norman, D.A., Tremaine, M. and Waldrop, M.M., "CHI@20: fighting our way from marginality to power," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 02. ACM Press, 2002, pp. 688-691.

[28] Newman, W., "A preliminary analysis of the products of HCI research, using pro forma abstracts," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 94. ACM Press, 1994, pp. 278-284.
[29] Muller, M.J., Wharton, C., McIver, W.J. and Laux, L., "Towards an HCI research and practice agenda based on human needs and social responsibility," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 97. ACM Press, 1997, pp. 155-161.
[30] Gaines, B., "HCI in the next millennium: supporting the world mind," in: M.A. Sasse and C. Johnson (Eds.), Proceedings of Human-Computer Interaction, INTERACT 99. IOS Press, 1999, pp. 18-30.
[31] Whittaker, S., Terveen, L. and Nardi, B.A., "Let's stop pushing the envelope and start addressing it: a reference task agenda for HCI," Human Computer Interaction, vol. 15, 2000, pp. 75-106.
[32] Howard, S., "User interface design and HCI," ACM SIGCHI Bulletin, vol. 27, no. 3, 1995, pp. 17-22.
[33] Greenberg, S., "Teaching human computer interaction to programmers," ACM interactions, vol. 3, no. 4, 1996, pp. 62-76.
[34] Kellogg, W.A., Lewis, C. and Polson, P., "Introduction to this special issue on new agendas for human-computer interaction," Human-Computer Interaction, vol. 15, nos. 2-3, 2000, pp. 1-5.
[35] Badham, R. and Ehn, P., "Tinkering with technology: human factors, work redesign, and professionals in workplace innovation," Human Factors and Ergonomics in Manufacturing, vol. 10, no. 1, 2000, pp. 61-82.
[36] Zhang, P. and Dillon, A., "HCI and MIS: shared concerns," International Journal of Human-Computer Studies, vol. 59, 2003, pp. 397-402.
[37] Kuhn, T.S., The Structure of Scientific Revolutions. University of Chicago Press, Chicago, 1962.
[38] Kuhn, T.S., "Second thoughts on paradigms," in: F. Suppe (Ed.), The Structure of Scientific Theories. University of Illinois Press, Champaign, 1974, pp. 459-482.
[39] Dorst, K., Describing design: a comparison of paradigms. PhD Thesis, Delft University of Technology, The Netherlands, 1997.
[40] Bayazit, N., "Investigating design: a review of forty years of design research," Design Issues, vol. 20, no. 1, 2004, pp. 16-29.
[41] Heidegger, M., "Sein und Zeit," in: E. Husserl (Ed.), Jahrbuch der Philosophie und phänomenologischen Forschung, Volume VIII. Niemeyer, Halle, 1927.
[42] Winograd, T. and Flores, F., "Understanding and being," in: T. Winograd and F. Flores (Eds.), Understanding Computers and Cognition: A new Foundation for Design. Ablex, Norwood, 1990, pp. 27-37.


[43] Simon, H.A., The sciences of the artificial. MIT Press, Cambridge, 1969.
[44] Buchanan, B., "Expert systems: working systems and the research literature," Expert Systems, vol. 3, no. 1, 1986, pp. 32-51.
[45] Sarris, V., Methodologische Grundlagen der Experimentalpsychologie I: Erkenntnisgewinnung und Methodik. Uni-TB GmbH, Stuttgart, 1990.
[46] Schön, D.A., The reflective practitioner: how professionals think in action. Basic Books, New York, 1983.
[47] Sheldrake, R., The presence of the past. Times Books, 1988.
[48] Dreyfus, H.L., What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, Cambridge, 1992 (3rd ed.).
[49] Klemm, F., A history of Western technology. MIT Press, Cambridge, 1970.
[50] Pylyshyn, Z., "Some remarks on the theory-practice gap," in: J.M. Carroll (Ed.), Designing Interaction. University Press, Cambridge, 1991, pp. 39-49.
[51] Carroll, J.M. and Campbell, R.L., "Softening up hard science: reply to Newell and Card," Human-Computer Interaction, vol. 2, 1986, pp. 227-249.
[52] Ehn, P., Work-oriented design of computer artifacts. Arbetslivscentrum, Stockholm, 1988.
[53] Lim, K.Y. and Long, J.B., The MUSE method for usability engineering. University Press, Cambridge, 1994.
[54] Paternò, F., Model-Based Design and Evaluation of Interactive Applications. Springer-Verlag, Berlin, 1999.
[55] Kantowitz, B.H. and Sorkin, R.D., "Allocation of functions," in: G. Salvendy (Ed.), Handbook of Human Factors. John Wiley & Sons, Chichester, 1987, pp. 355-369.
[56] Bush, V., "As we may think," The Atlantic Monthly, no. 176, 1945, pp. 101-108.
[57] Baecker, R.M. and Buxton, W.A.S. (Eds.), Readings in Human-Computer Interaction: a multidisciplinary approach. Morgan Kaufmann, San Mateo, 1987.
[58] Rauterberg, M., Mauch, T. and Stebler, R., "The digital playing desk: a case study for augmented reality," in: K. Tanie (Ed.), Proceedings of the 5th IEEE International Workshop on Robot and Human Communication, ROMAN 96. IEEE, Piscataway, 1996, pp. 410-415.
[59] Barnard, P. and Harrison, M., "Towards a framework for modelling human-computer interactions," in: J. Gornostaev (Ed.), Proceedings of the East-West International Conference on Human-Computer Interaction, EWHCI 92. Moscow, 1992, pp. 189-196.

[60] Green, T.R.G., Davies, S.P. and Gilmore, D.J., "Delivering cognitive psychology to HCI: The problem of common language and of knowledge transfer," Interacting with Computers, vol. 8, no. 1, 1996, pp. 89-111.
[61] Nickerson, R.S. and Landauer, T.K., "Human-computer interaction: background and issues," in: M. Helander, T.K. Landauer and P. Prabhu (Eds.), Handbook of Human-Computer Interaction. North-Holland, Amsterdam, 1997, pp. 3-31.
[62] Rauterberg, M., "Usability evaluation: An empirical validation of different measures to quantify interface attributes," in: T. Sheridan (Ed.), Analysis, Design and Evaluation of Man-Machine Systems, Volume 2. Pergamon, Oxford, 1995, pp. 467-472.
[63] Neisser, U., Cognition and reality: principles and implications of cognitive psychology. W.H. Freeman, San Francisco, 1976.
[64] Suchman, L.A., Plans and situated actions: the problem of human-machine communication. University Press, Cambridge, 1987.
[65] Hacker, W., "Activity: a fruitful concept in industrial psychology," in: M. Frese and J. Sabini (Eds.), Goal Directed Behavior: The Concept of Action in Psychology. Lawrence Erlbaum, Hillsdale, 1985, pp. 262-284.
[66] Kuutti, K., "Activity theory as a potential framework for human-computer interaction research," in: B.A. Nardi (Ed.), Context and Consciousness. MIT Press, London, 1996, pp. 17-44.
[67] Nardi, B.A., "Studying context: A comparison of activity theory, situated action model, and distributed cognition," in: B.A. Nardi (Ed.), Context and Consciousness: Activity Theory and Human-Computer Interaction. MIT Press, Cambridge, 1996, pp. 67-102.
[68] Flor, N.V. and Hutchins, E.L., "Analyzing distributed cognition in software teams: A case study of team programming during perfective software maintenance," in: J. Koenemann-Belliveau, T.G. Moher and S.P. Robertson (Eds.), Proceedings of the Fourth Annual Workshop on Empirical Studies of Programmers, ESP 91. Ablex Publishing, Westport, 1991, pp. 36-59.
[69] Leont'ev, A.N., "The problem of activity in psychology," Soviet Psychology, vol. 13, no. 2, 1974, pp. 4-33.
[70] Kuhn, T.S., "The essential tension: tradition and innovation in scientific research," in: C.W. Taylor (Ed.), The Third University of Utah Research Conference on the Identification of Scientific Talent. University of Utah Press, Salt Lake City, 1959, pp. 162-174.


[71] Rauterberg, M. and Szabo, K., "A design concept for N-dimensional user interfaces," in: Proceedings of the 4th International Conference on Interface to Real & Virtual Worlds. EC 2 & Cie, Paris, 1995, pp. 467-477.
[72] Sikorski, M., A framework for developing the on-line HCI dictionary. Technical Report, Technical University of Gdansk, Poland, 2002.
[73] Shneiderman, B., Designing the user interface: strategies for effective human-computer interaction. Addison-Wesley, Reading, 1987.
[74] Mayhew, D.J., Principles and guidelines in software user interface design. Prentice Hall, Englewood Cliffs, 1992.
[75] Fox, E.A. (Ed.), Resources in Human-Computer Interaction. ACM, New York, 1990.
[76] Helander, M., Landauer, T.K. and Prabhu, P. (Eds.), Handbook of human-computer interaction. North-Holland, Amsterdam, 1997.
[77] Jacko, J.A. and Sears, A. (Eds.), The human-computer interaction handbook. Lawrence Erlbaum, 2003.
[78] Vet, J.H.M. de and Ruyter, B.E.R. de, "An inventory of interaction styles," IPO Report No. 1113, Eindhoven University of Technology, 1996.
[79] Card, S.K., Moran, T.P. and Newell, A., The Psychology of Human-Computer Interaction. Lawrence Erlbaum, London, 1983.
[80] Green, T.R.G., "Cognitive dimensions of notations," in: A. Sutcliffe and L. Macaulay (Eds.), People and Computers V. University Press, Cambridge, 1989, pp. 443-460.
[81] Buckingham Shum, S. and Hammond, N., "Transferring HCI modelling and design techniques to practitioners: a framework and empirical work," in: G. Cockton, S. Draper and G. Weir (Eds.), People and Computers IX: Proceedings of BCS HCI'94. University Press, Cambridge, 1994, pp. 21-36.
[82] Dix, A., "Formal methods: an introduction to and overview of the use of formal methods in HCI," in: A. Monk and N. Gilbert (Eds.), Perspectives on HCI: diverse approaches. Academic Press, London, 1995, pp. 9-43.
[83] Cugola, G., Di Nitto, E., Fuggetta, A. and Ghezzi, C., "A framework for formalizing inconsistencies and deviations in human-centered systems," ACM Transactions on Software Engineering and Methodology, vol. 5, no. 3, 1996, pp. 191-230.
[84] Rauterberg, M., "An empirical validation of four different measures to quantify user interface characteristics based on a general descriptive concept for interaction points," in: Proceedings of the IEEE Symposium and Workshop on Engineering of Computer-Based Systems. IEEE Computer Society Press, Los Alamitos, 1996, pp. 435-441.

[85] Gupta, S., "Avoiding ambiguity," Nature, vol. 412, 2001, p. 589.
[86] Khazaei, B. and Roast, C., "The usability of formal specification representations," in: G. Kadoda (Ed.), Proceedings of the 13th Annual Workshop of the Psychology of Programming Interest Group, PPIG 2001. Bournemouth University, 2001, pp. 305-310.
[87] Ng, K.B., "Toward a theoretical framework for understanding the relationship between situated action and planned action models of behavior in information retrieval contexts: contributions from phenomenology," Information Processing and Management, vol. 38, 2002, pp. 613-626.
[88] Habermas, J., Theorie des kommunikativen Handelns, Volumes 1 and 2. Suhrkamp Verlag, Frankfurt am Main, 1981.
[89] Floyd, C., "Human questions in computing science," in: C. Floyd, H. Zullighoven, R. Budde and R. Keil-Slawik (Eds.), Software development and reality construction. Springer, Berlin, 1992, pp. 15-27.
[90] Kuutti, K. and Bannon, L., "Searching for unity among diversity: exploring the 'interface' concept," in: Proceedings of the Conference on Human Factors in Computing Systems, INTERCHI 93. IFIP & ACM, New York, 1993, pp. 263-268.
[91] Rasmussen, J., Pejtersen, A. and Goodstein, L.P., Cognitive systems engineering. Wiley, New York, 1994.
[92] Olson, G.M. and Olson, J.S., "Distance matters," Human-Computer Interaction, vol. 15, 2000, pp. 139-178.
[93] De Souza, C.S., Barbosa, S.D. and Prates, R.O., "A semiotic engineering approach to HCI," in: Extended Abstracts on Human Factors in Computing Systems, CHI 01. ACM Press, 2001, pp. 55-56.
[94] Bacon, C.J. and Fitzgerald, B., "A systematic framework for the field of information systems," The DATA BASE for Advances in Information Systems, vol. 32, no. 2, 2001, pp. 46-67.
[95] Harris, J. and Henderson, A., "A better mythology for system design," in: Proceedings of the Conference on Human Factors in Computing Systems, CHI 99. ACM Press, 1999, pp. 88-95.
[96] Rozanski, E.P. and Haake, A.R., "The many facets of HCI," in: Proceedings of the 4th Conference on Information Technology Education. ACM Press, 2003, pp. 180-185.
[97] Santos, P.J., Kiris, E.O. and Coyle, C.L., "Designing as the world turns," in: Proceedings of the Conference on Designing Interactive Systems, DIS 97. ACM Press, 1997, pp. 315-321.
[98] Humphrey, W.S., Managing the Software Process. Addison-Wesley, London, 1990.


[99] Mackay, W.E. and Fayard, A.L., "HCI, natural science and design: a framework for triangulation across disciplines," in: Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. ACM Press, 1997, pp. 223-234.
[100] Wickens, C.D., Lee, J., Liu, Y. and Becker, S.G., An introduction to human factors engineering. Pearson Education International, New Jersey, 2004.
[101] Henderson, A., "A development perspective on interface, design, and theory," in: J.M. Carroll (Ed.), Designing Interaction. University Press, Cambridge, 1991, pp. 254-285.
[102] Gram, C. and Cockton, G. (Eds.), Design principles for interactive software. Chapman & Hall, London, 1996.
[103] Mayhew, D.J., The usability engineering lifecycle. Morgan Kaufmann, San Francisco, 1999.
[104] Carroll, J.M. and Kellogg, W.A., "Artifacts as theory nexus: hermeneutics meets theory-based design," in: K. Bice and C. Lewis (Eds.), Proceedings of the Conference on Human Factors in Computing Systems, CHI 89. ACM Press, 1989, pp. 7-14.
[105] Wesson, J., Kock, G. de and Warren, P., "Designing for usability: a case study," in: S. Howard, J. Hammond and G. Lindgaard (Eds.), Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction, INTERACT 97. IFIP Conference Proceedings, vol. 96. Chapman & Hall, London, 1997, pp. 31-38.
[106] Pauws, S., Music and choice: adaptive systems and multimodal interaction. PhD Thesis, Eindhoven University of Technology, The Netherlands, 2000.
[107] Sutcliffe, A. and McDermott, M., "Integrating methods of Human-Computer Interface design with structured systems development," International Journal of Man-Machine Studies, vol. 34, 1991, pp. 631-655.
[108] Sutcliffe, A. and Wang, I., "Integrating Human-Computer Interaction with Jackson System Development," The Computer Journal, vol. 34, no. 2, 1991, pp. 132-142.
[109] Lieberman, H., The tyranny of evaluation, 2003. Retrieved on September 28, 2006 from http://web.media.mit.edu/~lieber/Misc/Tyranny-Evaluation.html
[110] Kumar, R., Research methodology: a step-by-step guide for beginners. Sage, London, 1996.
[111] Lyman, H.B., Test scores and what they mean. Allyn and Bacon, Boston, 1998 (6th edition).
[112] Corbetta, P., Social research: theory, methods and techniques. Sage, London, 2003.
[113] Gray, D.E., Doing research in the real world. Sage, London, 2004.

[114] Nielsen, J., Usability engineering. Academic Press, London, 1993.
[115] Trætteberg, H., Model-based User Interface Design. PhD Thesis, Norwegian University of Science and Technology, Norway, 2002.
[116] Myers, B.A., Hollan, J., Cruz, I. et al., "Strategic directions in human-computer interaction," ACM Computing Surveys, vol. 28, no. 4, 1996, pp. 794-809.
[117] Faiola, A., "The Copernican shift: HCI education and the design enterprise," in: Jacko, J. and Stephanidis, C. (Eds.), Human-Computer Interaction - Ergonomics and User Interfaces, vol. 1. Lawrence Erlbaum, 2003, pp. 326-330.
[118] Lindgaard, G., "Making the business our business: one path to value-added HCI," ACM interactions, vol. 11, no. 3, 2004, pp. 11-17.
[119] Bias, R.G. and Mayhew, D.J. (Eds.), Cost-Justifying Usability. Academic Press, London, 1994.
[120] Hix, D. and Hartson, R., Developing User Interfaces. John Wiley & Sons, Chichester, 1993.
[121] Carroll, J.M., Kellogg, W.A. and Rosson, M.B., "The task-artifact cycle," in: J.M. Carroll (Ed.), Designing Interaction. Cambridge University Press, 1991, pp. 74-102.
[122] Boehm, B.W., Gray, T. and Seewaldt, T., "Prototyping versus specifying: A multi project experiment," IEEE Transactions on Software Engineering, vol. SE-10, no. 3, 1981, pp. 224-236.
[123] Long, J.B., "Specifying relations between research and the design of human-computer interactions," in: S. Howard, J. Hammond and G. Lindgaard (Eds.), Proceedings of the IFIP TC13 International Conference on Human-Computer Interaction, INTERACT 97. Chapman & Hall, 1997, pp. 188-195.
[124] Middlemass, J., Stork, A. and Long, J., "Successful case study and partial validation of MUSE, a structured method for usability engineering," in: M.A. Sasse and C. Johnson (Eds.), Proceedings of the IFIP International Conference on Human-Computer Interaction, INTERACT 99. IOS Press, Amsterdam, 1999, pp. 399-407.
[125] Sackman, H., Man-computer problem solving: experimental evaluation of time-sharing and batch processing. Auerbach, Princeton, 1970.
[126] Monk, A., "Empirical evaluation of user interfaces," in: R.M. Baecker and W.A.S. Buxton (Eds.), Readings in Human-Computer Interaction: A Multidisciplinary Approach. Morgan Kaufmann, 1987, pp. 135-137.
[127] Kirakowski, J. and Corbett, M., Effective methodology for the study of HCI. North-Holland, Amsterdam, 1990.
[128] Landauer, T.K., "Behavioral research methods in human-computer interaction," in: M. Helander, T.K. Landauer and P. Prabhu (Eds.), Handbook of Human-Computer Interaction. North-Holland, Amsterdam, 1997, pp. 203-227.


[129] Ivory, M.Y. and Hearst, M.A., "The state of the art in automating usability evaluation of user interfaces," ACM Computing Surveys, vol. 33, no. 4, 2001, pp. 470-516.
[130] Badler, N., "Virtual humans for animation, ergonomics, and simulation," in: Proceedings of the IEEE Workshop on Motion of Non-Rigid and Articulated Objects, NAM '97. IEEE Computer Society, 1997, pp. 28-36.
[131] Carberry, S., "Techniques for plan recognition," User Modeling and User-Adapted Interaction, vol. 11, nos. 1-2, 2001, pp. 31-48.
[132] Capps, R., "The humanoid race," Wired Magazine, 12.07.2004, online.
[133] Rauterberg, M., "AMME: an automatic mental model evaluation to analyze user behaviour traced in a finite, discrete state space," Ergonomics, vol. 36, no. 11, 1993, pp. 1369-1380.
[134] Davies, J.M., "A designer's guide to human performance models," in: Proceedings of the Workshop on Emerging Technologies in Human Engineering Testing and Evaluation. NATO Technical Proceedings AC/243 (Panel 8) TP/17, Brussels, Belgium (June 24-26, 1997), 1998, pp. 36-43.
[135] Allender, L., Salvi, L. and Promisel, D., "Evaluation of human performance under diverse conditions via modeling technology," in: Proceedings of the Workshop on Emerging Technologies in Human Engineering Testing and Evaluation. NATO Technical Proceedings AC/243 (Panel 8) TP/17, Brussels, 1998, pp. 44-51.
[136] Long, J.B., "Research and the design of Human-Computer Interactions or 'What happened to validation?'," in: H. Thimbleby, B. O'Conaill and P. Thomas (Eds.), People and Computers XII. Springer, London, 1997, pp. 223-243.
[137] Byrne, M.D. and Gray, W.D., "Returning human factors to an engineering discipline: expanding the science base through a new generation of quantitative methods - preface to the special section," Human Factors, vol. 45, no. 1, 2003, pp. 1-4.

[138] Long, J.B., "Cognitive ergonomics lessons: Possible challenges for USI?," IPO Annual Progress Report, vol. 34, 1999, pp. 24-36.
[139] Lewis, C., "Inner and outer theory in human-computer interaction," in: J.M. Carroll (Ed.), Designing Interaction. Cambridge University Press, 1991, pp. 154-161.

G.W. Matthias Rauterberg received a B.S. in Psychology (1978) at the University of Marburg (Germany), a B.A. in Philosophy (1981), a B.S. in Computer Science (1983), an M.S. in Psychology (1981) and an M.S. in Computer Science (1986) at the University of Hamburg (Germany), and a Ph.D. in Computer Science/Mathematics (1995) at the University of Zurich (Switzerland). He was a senior lecturer for 'usability engineering' in computer science and industrial engineering at the Swiss Federal Institute of Technology (ETH) in Zurich, where he later headed the Man-Machine Interaction (MMI) research group. Since 1998 he has been a full-time professor for 'Human Communication Technology', first at IPO, Center for User System Interaction Research, and later at the Department of Industrial Design of the Eindhoven University of Technology (TU/e, The Netherlands). From 1999 till 2002 he was director of IPO. He now heads the Designed Intelligence research group at the Department of Industrial Design of the TU/e. He was the Swiss representative in the IFIP TC13 on 'Human Computer Interaction' (1994-2002) and the chairman of the IFIP WG13.1 on 'HCI and Education' (1998-2004). He is now the Dutch representative in the IFIP TC14 on 'Entertainment Computing' and the founding vice-chair of this TC14 (since 2006). He is also the chair of the IFIP WG14.3 on 'Entertainment Theory' (since 2004), and is appointed as visiting professor at Kwansei Gakuin University (Japan) (2004-2007). He received the German GI-HCI Award for the best Ph.D. in 1997 and the Swiss Technology Award for the BUILD-IT system in 1998. Since 2004 he has been a nominated member of the 'Cream of Science' in the Netherlands (the 200 top-level Dutch researchers) and is amongst the 10 top-level TU/e scientists. He has over 250 publications in international journals, conference proceedings, books, etc. He also acts as editor and member of the editorial board of several leading international journals.
