Monitoring Online Training Behaviors: Awareness of Electronic Surveillance Hinders E-Learners1

Lori Foster Thompson2
North Carolina State University

Jeffrey D. Sebastianelli
CASTLE Worldwide, Inc.

Nicholas P. Murray
East Carolina University

Web-based training programs commonly capture data reflecting e-learners' activities, yet little is known about the effects of this practice. Social facilitation theory suggests that it may adversely affect people by heightening distraction and arousal. This experiment examined the issue by asking volunteers to complete a Web-based training program designed to teach online search skills. Half of the participants were told their training activities would be tracked; the others received no information about monitoring. Results supported the hypothesized effects on satisfaction, performance, and mental workload (measured via heart rate variability). Explicit awareness of monitoring appeared to tax e-learners mentally during training, thereby hindering performance on a later skills test. Additionally, e-learners reported less satisfaction with the training when monitoring was made salient.


Technology has altered many aspects of the modern-day workplace, and training is one area that has changed dramatically in recent years. Today, a great number of e-learning opportunities reside on the Internet and organizational intranets. Meanwhile, the contemporary work world has witnessed an increasing reliance on electronic performance monitoring, which is defined as the use of computer and communication technologies to gather information about work performance (Aiello & Douthitt, 2001). The fusion of these two trends has led to the design of Web-based training systems that capture data reflecting employees’ e-learning activities. To the best of our knowledge, no past research has examined the effects of this practice on people who are trying to acquire new knowledge and skills online.

1 The authors thank John G. Cope and Karl L. Wuensch for their insightful comments and suggestions concerning this research. An earlier version of this paper was presented at the 20th annual conference of the Society for Industrial and Organizational Psychology, Los Angeles, CA, April 2005. Portions of this study were carried out while the first two authors were affiliated with East Carolina University.

2 Correspondence concerning this article should be addressed to Lori Foster Thompson, Department of Psychology, North Carolina State University, Campus Box 7650, Raleigh, NC 27695-7650. E-mail: [email protected]

Journal of Applied Social Psychology, 2009, 39, 9, pp. 2191–2212. © 2009 Copyright the Authors. Journal compilation © 2009 Wiley Periodicals, Inc.

The present study, therefore, is designed to investigate whether the awareness that training activities are electronically monitored will affect e-learners, both physiologically and psychologically.

Electronic Monitoring of E-Learners

To some degree, organizations have always monitored trainees. Sign-in rosters, knowledge tests, and other measures help employers track workers' developmental efforts. Accordingly, one might argue that the electronic monitoring of e-learners is simply a straightforward adaptation of practices that have been around for quite some time. We disagree with this assertion for three reasons.

First, classroom trainees typically know what is being measured and when. Online surveillance is less obtrusive, and e-learners are not always privy to the status of monitoring activities (Alge, Ballinger, & Green, 2004; Stanton & Barnes-Farrell, 1996; Wells, Moorman, & Werner, 2007).

Second, the amount of information collected in the classroom pales in comparison to the range and volume of data that can be gathered online. According to Alge, Ballinger, Tangirala, and Oakley (2006), today's workers "face increasingly invasive information collection and dissemination demands from their organizations" (p. 221). The accelerated development of a variety of inexpensive electronic devices (e.g., computer networks, wireless technologies) enables the collection of data (e.g., keystroke recording, Internet monitoring) that would be difficult, if not impossible, to obtain through physical monitoring (McNall & Roch, 2007; Stanton & Barnes-Farrell, 1996). Thus, employers who choose to monitor training activities potentially have more information at their disposal when the monitoring occurs online, rather than in person.

Third, whereas classroom data-collection activities are usually discrete events, online monitoring can be constant and ongoing (Wells et al., 2007). For instance, some electronic monitoring systems are designed to provide continuous and real-time views of employees' onscreen activities (Alge et al., 2004).

Some specific examples of the types of data that can be captured by an e-learning package include login times and dates; number and types of pre- and post-tests attempted, along with the number of items answered correctly during each attempt; which training modules the learners have tried; dates and times at which the modules were attempted; how long the trainees worked on each module; number of practice exercises attempted/completed; duration of each practice exercise; and whether or not the employees followed through and finished various training units.
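To make this range of trackable data concrete, the sketch below models the records just described as a minimal data structure. It is purely illustrative: the class and field names are our own invention, not those of any particular e-learning package.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class ModuleAttempt:
    """One trainee's pass through a single training module."""
    module_id: str
    started_at: datetime
    ended_at: Optional[datetime] = None   # None if abandoned mid-module
    practice_exercises_attempted: int = 0
    practice_exercises_completed: int = 0
    finished: bool = False                # did the trainee complete the unit?

@dataclass
class TestAttempt:
    """A pre- or post-test attempt with its item-level outcome."""
    test_type: str                        # e.g., "pre" or "post"
    attempted_at: datetime
    items_correct: int = 0
    items_total: int = 0

@dataclass
class TrainingLog:
    """Everything an e-learning package might capture for one trainee."""
    learner_id: str
    login_times: List[datetime] = field(default_factory=list)
    module_attempts: List[ModuleAttempt] = field(default_factory=list)
    test_attempts: List[TestAttempt] = field(default_factory=list)
```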

Clearly, there are many practical uses for these types of data. Henderson, Mahar, Saliba, Deane, and Napier (1998) maintained that monitoring is essential for performance and productivity in the modern office. It can be used to provide timely work performance feedback in training situations. Moreover, trainees' online data can facilitate employee development and needs analysis by helping organizations understand who tends to need and use which types of learning opportunities. Management often argues that the monitoring of employees also encourages proper time delegation, courteous demeanor in interpersonal environments, and increased productivity (Oz, Glass, & Behling, 1999). During training, it may promote accountability, ensuring that employees feel responsible for completing necessary learning activities, even in the absence of an in-person instructor and peers. Research by Brown (2001) has suggested that employees may move through computer-based training rather quickly, skipping critical practice exercises and reducing their knowledge gain as a result. Monitoring e-learners and holding them responsible for their training activities may help curtail this problem.

In short, the reasons for tracking Web-based training activities are compelling; therefore, e-learning software continues to incorporate tracking features. The degree to which employers actually use these data is presently unknown and may matter less than the degree to which employees suspect that their training data are being tracked. Research in other domains (e.g., employee surveys) has suggested that employees hold concerns about data-tracking technologies and are skeptical about the privacy afforded to them online, even when this skepticism is unjustified (Thompson, Surface, Martin, & Sanders, 2003). Thus, even trainees whose employers do not put their software's tracking capabilities to use may suspect that they are being monitored.

Social Facilitation Theory and the Effects of Electronic Monitoring

To date, there is an unfortunate dearth of research empirically examining how learners react to the perception that their training behaviors are being tracked and accounted for. Moving beyond the training literature, social facilitation theory offers insights into the potential effects of perceived surveillance on e-learners.

Social facilitation theory has provided a framework for much of the research in the area of performance monitoring (Aiello & Douthitt, 2001). This theory suggests that the presence of others enhances the performance of simple tasks and worsens the performance of complex tasks (Geen, 1989). For example, a 1973 study by Hunt and Hillery showed that the presence of others reduced the errors produced by individuals learning an easy maze and increased the errors produced by those learning a difficult maze.

Zajonc (1980) has linked this effect to increases in alertness and arousal that result from the uncertainty of another's behavior. Sanders, Baron, and Moore (1978) indicated that the presence of others serves to distract a performer (or, in the present case, a trainee). Arousal occurs as a result of an overloaded cognitive system, which stems from an inherent conflict between paying attention to others and paying attention to the task at hand (Baron, 1986; Myers, 2005). Consistent with this view, classic work by Pessin and Husband (1933) showed that the presence of a passive other decreases the efficiency with which individuals can memorize nonsense syllables. More recent work by Huguet, Galvaing, Monteil, and Dumas (1999) demonstrated that the presence of others inhibits automatic verbal processing.

Self-presentation has also been implicated in the social facilitation effect. In the presence of others, people may be especially self-attentive in an attempt to conform to norms of behavior (Carver & Scheier, 1981) and present themselves favorably (Bond, 1982). As a result, extra mental resources are required to complete a task when others are aware of one's performance. Evaluation apprehension is also believed to partially explain the social facilitation effect, though Zajonc (1980) argued that this is not the only reason why the presence of others produces performance effects.

The social facilitation effect occurs even when others are unfamiliar and difficult to see (Guerin & Innes, 1982; Myers, 2005). In fact, others who cannot be seen are thought to produce even greater physiological effects on a performer than those who are visible (Bond & Titus, 1983). The social facilitation framework, therefore, extends beyond in-person monitoring to the domain of computer performance monitoring, wherein electronic surveillance serves the role of "invisible others."

Research supports this line of reasoning, demonstrating that electronic monitoring diminishes performance unless the task at hand is a simple one. For example, a study by Douthitt and Aiello (2001) showed that monitoring impaired the performance of people working on a complex task involving timed screen images requiring arithmetic calculations. Other research has demonstrated that the presence of computer monitoring decreases the performance of people working on difficult anagrams and has the reverse effect on those tasked with easy anagrams (Davidson & Henderson, 2000). Meanwhile, explicit knowledge that performance is not being monitored has been shown to enhance the performance of people asked to solve 60 five-letter anagrams in 10 min (Aiello & Svec, 1993).

In sum, the literature has theoretically linked electronic monitoring to arousal, affect, and performance. The implications for e-learners are clear. Our first hypothesis is based on the contention that monitoring taxes e-learners' cognitive systems by dividing their attention between the training material and the awareness that their data will be examined by outside others.


Testing this initial prediction requires a meaningful assessment of cognitive load. Mental workload has been defined as the mental or perceptual cost incurred by a person to achieve a given performance level (Hart & Staveland, 1988), or simply the effort invested in task performance (Braarud, 2001). In a training context, Paas (1992) conceptualized mental effort as the capacity learners allocate to instructional demands. Time on task is a commonly used, yet contaminated, measure of learner effort (Fisher & Ford, 1998). Unfortunately, the self-report alternative to time on task also has problems. Self-report measures of mental workload can be tainted by purposeful response distortion. Moreover, learners who are not particularly self-reflective may have trouble accurately reporting on the application of their own mental resources (Fisher & Ford, 1998).

Rowe, Sibert, and Irwin (1998) highlighted a number of advantages of employing physiological measures to determine mental effort. They suggested that heart rate variability may be used to indicate the point at which a person's mental capacity to process stimuli is being exceeded. Heart rate variability indexes circumvent many of the concerns associated with self-report measures of mental effort because they do not ask learners to self-reflect. Physiologically, heart rate variability is a measure of cardiac autonomic function that reflects both sympathetic and parasympathetic nervous system activity, including balances and imbalances (De Vito, Galloway, Nimmo, Maas, & McMurray, 2002; Mussalo et al., 2001). It is determined by beat-to-beat variability in the cardiac cycle, reflecting the continuous oscillation of heart rate around its mean value, and thus provides noninvasive data about the control of heart rate in real-life conditions (Routledge, Chowdhary, & Townend, 2002).

Most heart rate variability measures produce several different indexes of physiological functioning. The index of interest in this study is the very low frequency (VLF) score. VLF, which is found to increase as mental workload increases, reflects sympathetic activity (Metelka, Weinbergova, Opavsky, Salinger, & Ostransky, 1999).
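The sensor used in this study reports spectral indexes directly, and its internal algorithm is proprietary to the vendor. For readers unfamiliar with frequency-domain heart rate variability indexes, the following is a generic sketch of how a VLF power estimate is commonly derived from a series of interbeat (RR) intervals: resample the irregular RR series onto an even time grid, estimate its power spectrum, and integrate over the conventional VLF band (roughly 0.0033–0.04 Hz). Function and parameter names are our own.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def vlf_power(rr_ms, fs=4.0, band=(0.0033, 0.04)):
    """Estimate VLF spectral power (ms^2) from RR intervals in ms."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    # Beat times in seconds: cumulative sum of the interbeat intervals
    t = np.cumsum(rr_ms) / 1000.0
    # Resample the irregularly spaced RR series onto a uniform grid
    # so that standard spectral estimation applies
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)
    # Welch periodogram of the mean-centered, evenly sampled series;
    # VLF requires long segments, hence the large window
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(4096, len(rr_even)))
    # Integrate power over the VLF band
    mask = (f >= band[0]) & (f < band[1])
    return np.trapz(psd[mask], f[mask])
```

In the present study, the resulting VLF scores were log-transformed before analysis (see the Results).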

Although heart rate variability has not yet found its way into mainstream training research, numerous studies outside of the industrial–organizational psychology domain have used it to assess mental workload (e.g., De Vito et al., 2002; Kallio et al., 2000; McMillan, 2002; Mussalo et al., 2001; Routledge et al., 2002). Heart rate variability has been utilized in both laboratory and field settings and has been found to be sensitive to manipulations in task complexity (e.g., Rowe et al., 1998). For example, Aasman, Mulder, and Mulder (1987) examined participants who were asked to press one of two buttons to indicate the presence or absence of a stimulus. Heart rate variability levels were significantly altered when participants were asked to think about other things (e.g., keep a running mental count of memory set items) while performing this button-pressing task. Meanwhile, Croizet et al. (2004) used a measure of heart rate variability to assess the disruptive, heightened mental load that hinders performance in the presence of stereotype threat: a phenomenon said to involve apprehension about being evaluated based on a negative stereotype (Myers, 2005).

In short, the literature supports the use of heart rate variability as an index of mental workload in general, and of the disruptive mental workload stemming from evaluation apprehension in particular. The present study operationalizes cognitive load accordingly.

Hypothesis 1. Heightened perceptions of electronic monitoring will increase e-learners' mental workload (i.e., VLF scores).

Notably, monitoring Web-based trainees is expected to produce effects that extend beyond physiological arousal. Social facilitation theory maintains that the presence of others hinders the performance of difficult tasks (Geen, 1989). Much training is presumed to be a challenging activity because, more often than not, the skill or body of knowledge being taught has not yet been mastered (hence, the need for training). Therefore, we predict the following:

Hypothesis 2. Heightened perceptions of electronic monitoring will reduce performance on a post-training skills test.

The social facilitation literature and the preceding hypotheses imply a mediated model, wherein the performance-reducing effects of monitoring occur as a result of learning decrements stemming from a cognitive system that is overloaded during training. This overload should be reflected in heightened VLF scores. It arises from the conflict between attending to others and attending to the training material. Our third prediction tests this model:

Hypothesis 3. Mental workload (VLF scores) will mediate the relationship between heightened perceptions of monitoring and performance on a post-training examination.
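The excerpt reproduced here does not show which mediation test the authors ran, so the following is only one conventional regression-based sketch of how Hypothesis 3 could be evaluated: estimate the a path (condition to log VLF), the b path (log VLF to performance, controlling for condition), and test the a*b indirect effect with a Sobel z. Variable names are our own.

```python
import numpy as np
import statsmodels.api as sm

def sobel_mediation(condition, vlf_log, performance):
    """Indirect effect of condition on performance through log VLF."""
    condition = np.asarray(condition, dtype=float)
    # Path a: manipulation -> mediator (mental workload)
    a_fit = sm.OLS(vlf_log, sm.add_constant(condition)).fit()
    # Paths b and c': mediator and manipulation -> performance
    X = sm.add_constant(np.column_stack([condition, vlf_log]))
    b_fit = sm.OLS(performance, X).fit()
    a, se_a = a_fit.params[1], a_fit.bse[1]
    b, se_b = b_fit.params[2], b_fit.bse[2]
    indirect = a * b
    sobel_z = indirect / np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    return indirect, sobel_z
```

A bootstrap of the indirect effect would be the more modern choice; the Sobel version is shown only because it is compact.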


The preceding predictions suggest that e-learners obtain less return on their mental investments when they are aware that their training data are being captured online. This may adversely affect their feelings about the Web-based training program. Computer monitoring has been shown to have negative affective consequences (Davidson & Henderson, 2000). E-learners who are monitored, therefore, are expected to react less favorably when asked to express their attitudes toward an online training program.

Hypothesis 4. Heightened perceptions of electronic monitoring will reduce satisfaction with an online training program.

Method

Participants

The participants in the present study were 58 volunteers (27 female, 31 male) from a range of courses at a large southeastern university. The sample was 88% Caucasian, 7% African American, and 5% members of other ethnic groups. Participants' mean age was 21.7 years (SD = 4.8). Internet exposure varied considerably; participants reported spending an average of 5.28 hr per week searching for information online (SD = 5.19).

Design and Procedure

The independent variable, perceived computer monitoring, had two levels: heightened perceptions (participants were told that their training activities would be tracked) and control (participants were given no information about monitoring). The 58 trainees were randomly assigned to one of the two conditions. The dependent variables were mental workload, performance on a post-training skills test, and satisfaction with the online training program.

Data collection occurred in a laboratory equipped with a desk, a bookshelf, the experimenter's laptop computer, and a university-registered Intel™ Pentium®-class computer on which Microsoft Internet Explorer was installed. Volunteers participated one at a time. After arriving at the laboratory, each participant was given an informed consent form. A lightweight heart rate variability monitor, the Biocom pulse wave sensor M-2001, was then comfortably affixed to the trainee's left ear. This device was connected to the experimenter's laptop computer. The experimenter asked participants to report their age, and age data were entered into the computer to ensure an accurate heart rate variability reading.

Next, participants in both conditions were introduced to a pre-task questionnaire that gathered demographic and other data. Trainees were then asked to use their university-issued usernames and personal passwords to log on to "Blackboard," the university's Web-based instructional medium. Afterward, they were familiarized with the Web-based training program, which was identical for both the monitored and control groups.

The Web-based training program was designed to teach participants how to locate information (e.g., newspaper articles) on the university library's Website. It gave trainees the opportunity to read instructions on how to conduct searches and to view screenshots exemplifying proper searches. Practice exercises were included to facilitate learning. The training program was flexible in that it allowed participants to determine the ordering of the modules, how long they wished to spend on each module, and the number of practice exercises they wished to attempt.

Prior to training, participants in the present study were asked to rate their familiarity with the library's Website on a 5-point scale ranging from 1 (no experience) to 5 (a lot of experience). The average rating was 2.25 (SD = 0.97), and only 6 individuals (10% of the sample) rated their experience levels as above average. Indeed, a needs analysis conducted prior to this study highlighted the necessity of this type of program for the population from which our sample was drawn. Data from a previous study (King, 2003) confirmed that the content was indeed challenging and demonstrated that the program significantly increased students' online library search skills.

After receiving an introduction to the Web-based training program, those who were assigned to the heightened perceptions condition were informed that their performance in the training program would be tracked. The experimenter stated:

    Be aware that your performance on the training program as well as the practice exercises is being closely monitored, tracked, and recorded. Both [name of faculty member] and I will carefully review this information from the other room while you are working.

They were then shown a fictitious report, which illustrated the type of generic data the monitoring program supposedly captured. The experimenter then stated, "This is a generic example of a printout that [name of faculty member] and I receive regarding the type of data the monitoring program captures." Participants were shown an additional on-screen fictitious report, which illustrated data supposedly captured from a different participant. The experimenter then stated:

    Hang on just a second, let me close the report from the person who went before you, then I'll explain the type of data that are being monitored. Okay. As you can probably see, we are able to track not only how much time you're spending on the training modules, but also how many practice exercises you get through, as well as your overall success in the practice exercises. These graphs outline your overall success on the training modules. This graph illustrates the amount of time elapsed in each training module. This graph illustrates the practice exercises that have been completed.


It should be noted that these training data were not actually tracked because of limitations of the software being used. However, these statements were expected to convince those in the experimental condition that their training data were indeed being collected. The experimenter then initiated the heart rate variability reading, left the laboratory, and returned 25 min later. Upon returning to the laboratory, the experimenter removed the heart rate variability monitor from the participant's ear and administered the post-task questionnaire and skills test. Afterward, participants were thanked, debriefed, and dismissed. Each session lasted approximately 75 min.

Measured Variables

Manipulation check. We included four items as a manipulation check (see the Appendix). Participants rated their agreement with the items on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). A sample item is "The experimenter is able to check the computer to verify how many practice exercises I completed." Responses to these items (α = .60) were averaged. High scores represented assured beliefs that training activities were being monitored.

Mental workload. The physiological measure of mental workload was taken via the heart rate variability monitor attached to each participant's ear. VLF scores were examined. These scores ranged from 9 (implying high heart rate variability and low mental workload) to 3026 (implying low heart rate variability and high mental workload).

Satisfaction. The post-task measure included three items (α = .60) that asked e-learners to rate their satisfaction with the training program (i.e., whether they were satisfied with it, enjoyed it, and would take part in another training program like it; see the Appendix) on a 5-point scale ranging from 1 (strongly disagree) to 5 (strongly agree). Responses to these items were averaged, and high scores represented favorable reactions. Alliger, Tannenbaum, Bennett, Traver, and Shotland (1997) proposed a multi-level framework for researchers and practitioners concerned with assessing training outcomes. The satisfaction items in this study can best be characterized as reactions as affect. As this is a component of the training criterion framework proposed by Alliger et al., the inclusion of this outcome variable helps to facilitate an examination of the practical implications of the experimental manipulation.

Post-training skills test performance. Our next measure was designed to assess behavior/skill demonstration, which is another important element of Alliger et al.'s (1997) training criterion framework. The skills test included four questions directly related to the training content (see the Appendix). Two questions assessed e-learners' ability to determine which journals the library carried in hard copy, and two additional questions measured their ability to locate newspaper articles from the library's Website. For example, one item asked participants to find a newspaper article online and write down a specific fact contained in the first paragraph of the article. Each question had a verifiable answer. Answers were assigned a score of either 0 (incorrect) or 1 (correct). These scores were then summed.

Self-reported tension. For exploratory purposes, the Profile of Mood States (POMS) scale (Shacham, 1983) was included on the post-task questionnaire as well. Of particular interest was the six-item tension subscale (α = .70), which asked participants to indicate their present mood states on a 5-point scale ranging from 0 (not at all) to 4 (extremely). Sample items are "tense," "uneasy," and "anxious." Responses to the six items were summed.
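Each multi-item scale above was scored by averaging or summing its items after an internal consistency check. For reference, a minimal sketch of the standard Cronbach's alpha computation used for such checks follows; the array layout and names are assumptions, not taken from the article.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Scale scores as described above: item mean for the manipulation
# check and satisfaction scales, item sum for POMS tension, e.g.:
# manip_score = manip_items.mean(axis=1)
# tension_score = poms_items.sum(axis=1)
```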

Results

Preliminary analyses were conducted to determine if there were any preexisting differences between the two conditions in terms of participant age and sex. The results indicated that age did not vary significantly between the two conditions, t(56) = 0.84, p = .41. Similarly, sex was not confounded with the experimental manipulation, χ²(1, N = 58) = 0.07, p = .79. Next, the measured variables were checked for normality. With one exception, the skewness and kurtosis values were close enough to 0 to justify use in analyses assuming normally distributed data. The positively skewed VLF scores were the exception. This was addressed via a logarithmic transformation, which reduced the skewness appropriately. The transformed VLF data were used in all subsequent analyses.

Table 1 shows the correlations among the study variables of interest. Table 2 shows the results of the one-way MANCOVA and follow-up univariate ANCOVAs that were computed to examine the effects of our manipulation on perceptions of monitoring (i.e., manipulation check), mental workload, performance, and satisfaction. The dependent measures were adjusted for three covariates: Internet exposure, library experience, and experience with the instructional medium (i.e., Blackboard). The three covariates were measured via the questionnaire administered prior to training. In that questionnaire, trainees were asked to report the amount of time they spent searching for information on the Internet each week (i.e., Internet exposure). They were also asked to rate their familiarity with the training platform (i.e., Blackboard experience), and the library Website on which the training content was based (i.e., library experience).
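As a sketch of the preliminary checks and a follow-up univariate ANCOVA of the kind described here, the code below assumes a per-trainee data frame with hypothetical column names (condition coded 1 = control, 2 = monitored). It is not the authors' script; the article's omnibus test was a one-way MANCOVA, only one univariate follow-up is shown for brevity, and the log base for the VLF transformation is our assumption.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def preliminary_checks_and_ancova(df: pd.DataFrame):
    # Pre-existing group differences: age (t test) and sex (chi-square)
    ctrl = df[df["condition"] == 1]
    mon = df[df["condition"] == 2]
    t, p_age = stats.ttest_ind(ctrl["age"], mon["age"])
    chi2, p_sex, dof, expected = stats.chi2_contingency(
        pd.crosstab(df["sex"], df["condition"]))

    # Log-transform the positively skewed VLF scores
    # (base 10 assumed; the article says only "logarithmic")
    df = df.assign(vlf_log=np.log10(df["vlf"]))

    # Follow-up univariate ANCOVA: condition effect on log VLF,
    # adjusting for the three measured covariates
    model = smf.ols(
        "vlf_log ~ C(condition) + internet_hours"
        " + library_exp + blackboard_exp",
        data=df,
    ).fit()
    return t, p_age, chi2, p_sex, model
```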


Table 1

Correlations Among Study Variables

Variable                                                  1        2       3      4
1. Monitoring condition (control vs.
   heightened perceptions)                                 —
2. Mental workload (heart rate
   variability, VLF log)                                  .50**     —
3. Post-training skills test performance                 -.29*    -.33*     —
4. Satisfaction                                          -.27*    -.04     .24     —
5. Self-reported tension                                 -.15      .06     .17    .11

Note. VLF = very low frequency. The correlations shown here were computed after controlling for pre-training Internet exposure, library experience, and experience with the training platform (i.e., Blackboard). Monitoring condition was dummy coded as 1 for the control condition (in which participants were given no information about monitoring) and 2 for the heightened awareness condition (in which participants were told that their training activities would be tracked).
*p < .05 (two-tailed). **p < .01 (two-tailed).
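Table 1's note states that the correlations were computed after controlling for the three covariates. A minimal sketch of one standard way to obtain such partial correlations is shown below (residualize both variables on the covariates, then correlate the residuals); the function and names are ours, not the authors'.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation of x and y after partialing out the covariates."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    Z = np.column_stack([np.ones(len(x)), covariates])
    # Residualize each variable on the covariates via least squares
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```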

Newer Internet users are less comfortable and are more likely to encounter stress-inducing problems online (Eastin & LaRose, 2000). It was expected that people who use the Internet a great deal would naturally react to the training program more favorably than would those with less exposure to the Internet. Likewise, experience with the training platform and the library's Website was expected to influence our dependent measures. Thus, Internet exposure, library experience, and Blackboard experience were included as covariates because of their logical linkages with our dependent measures. Adjusting for these scores enabled a more powerful look at the effects of our manipulation by minimizing error variance (Tabachnick & Fidell, 2001).

As shown in Table 2, the manipulation-check data included in the MANCOVA indicated that trainees in the heightened awareness (i.e., treatment) condition were more convinced of the presence of electronic monitoring than were those in the control condition. This difference was statistically significant, thereby confirming that the manipulation operated as intended.

Hypothesis 1 predicted that heightened awareness of electronic monitoring would increase e-learners' mental workload (i.e., VLF scores). As Table 2 indicates, the average VLF scores produced by trainees in the heightened awareness condition significantly exceeded those produced by trainees in the control condition. Therefore, the results support Hypothesis 1.

Table 2

Differences Between E-Learners in the Monitored and Control Conditions

                                           Control (N = 28a)    Monitored (N = 28a)
Measure                                      M        SD           M        SD       F(1, 51)b      p       ηp²
Manipulation check: Perceived
  presence of monitoring                    3.21     0.71         3.68     0.66         6.63       .013    .115
Mental workload (heart rate
  variability, VLF log)                     2.31     0.39         2.71     0.29        16.99
Post-training skills test performance       0.63     0.29
Satisfaction                                4.01     0.68