Educ Psychol Rev (2006) 18:211–228
DOI 10.1007/s10648-006-9014-3

ORIGINAL ARTICLE

Learning from Learning Kits: gStudy Traces of Students' Self-Regulated Engagements with Computerized Content

Nancy E. Perry & Philip H. Winne

Published online: 20 September 2006
© Springer Science + Business Media, Inc. 2006

Abstract  Researching self-regulated learning (SRL) as a process that evolves across multiple episodes of studying poses large methodological challenges. While self-report data provide useful information about learners' perceptions of learning, these data are not reliable indicators of the studying tactics learners actually use while studying, especially when learners are young children. We argue that self-reports about SRL need to be augmented by fine-grained traces that are records of learners' actual activities as they study. We describe how gStudy software unobtrusively collects detailed trace data about learners' use of study tactics as they engage with content presented in learning kits—collections of documents (e.g., texts, graphics, video clips) and tasks (e.g., notes, concept maps) on which learners operate to study. We suggest that trace data can advance research about how learners select, monitor, assemble, rehearse, and translate information to learn it, and provide raw materials for mapping SRL and its effects. Examples are given from the Life Cycles Learning Kit, which supports grade 1 students learning about the life cycles of humans and frogs.

Keywords  Self-regulated learning · Metacognition

Support for this research was provided by grants to Nancy E. Perry and Philip H. Winne from the Social Sciences and Humanities Research Council of Canada (410-2002-1024, 410-2002-1787, and 512-2003-1012), the Canada Research Chair program, and Simon Fraser University.

N. Perry (*)
Department of Educational & Counselling Psychology, and Special Education, University of British Columbia, Vancouver, British Columbia, Canada V6T 1Z4
e-mail: [email protected]

P. Winne
Faculty of Education, Simon Fraser University, Burnaby, British Columbia, Canada
e-mail: [email protected]

Large methodological challenges arise when researching self-regulated learning (SRL) because it is a process that evolves across multiple episodes of studying. Self-report data are the mainstay of prior research. These data are essential because they reveal what learners perceive about learning and because these perceptions are the bases learners have for choosing among study tactics. Notwithstanding, self-report data may not be reliable indicators of the tactics learners actually use while studying (Winne & Jamieson-Noel, 2002; see also Winne, 2004). This poor calibration of perceptions relative to actions may be especially evident among young children (Perry, 1998; Turner, 1995).

We maintain that measurements of SRL reflect models of SRL (Winne & Perry, 2000). Therefore, we launch this article on assessing SRL by describing a model of SRL and its measurement targets. Next, we briefly review approaches to measuring SRL. In particular, we examine limitations of survey self-report measures for studying SRL in real time and how traces overcome those limitations. Then, we overview the Learning Kit Project, a collaborative research program in which we and colleagues are developing software called gStudy (Winne, Hadwin, Nesbit, Kumar, & Beaudoin, 2005) to serve simultaneously as a venue for instruction and a laboratory for researching SRL. gStudy collects detailed, time-stamped trace data about how students study as they interact with multimedia content (e.g., texts, graphics, video clips) presented in learning kits. Finally, we describe how gStudy traces a student's interactions with one learning kit, the Frog Life Cycles Learning Kit (Perry & Colleagues at UBC), and what those traces can reveal about young children's SRL.

What is Self-Regulated Learning?

Self-regulated learners are characterized as metacognitive, motivated for learning, and strategic (Zimmerman, 1990; Winne & Perry, 2000). Metacognition is based in knowledge these learners have about their academic strengths and weaknesses, resources they can apply to address task demands, and how to regulate engagement with tasks to optimize cognitive operations and products. Motivation for learning is evidenced by self-regulated learners' beliefs that skill and ability are incremental, their valuing of personal progress and deep understanding, their high efficacy for learning, and the attributions they make that link outcomes to factors they control, such as the effective use of strategies. Also, self-regulated learners have a repertoire of tactics and strategies to choose from as they engage with challenging tasks and problems. "Strategic" describes how self-regulating learners evaluate alternatives in their repertoire, and choose and coordinate tactics to create a strategy they believe is best suited to the task. We claim all learners are self-regulating, but not all forms or profiles of SRL are equally effective (Winne, 1995), and not all learners are effectively self-regulating. Thus, we argue there is warrant for efforts to understand and promote students' development of and engagement in effective forms of SRL.

The Winne-Hadwin Model of SRL

Winne and colleagues proposed a model of SRL (e.g., Winne, 2001; Winne & Hadwin, 1998; Winne & Perry, 2000; see Fig. 1) according to which learners engage in four phases when they participate in learning tasks:

Phase 1. Learners develop a model of the task by interpreting task conditions (contextual constraints and affordances involving, e.g., time limits, material resources, and available help) and cognitive conditions (e.g., knowledge of the task domain, memories of challenges experienced with similar tasks and strategies that proved effective).

[Fig. 1 The Winne-Hadwin model of self-regulated learning. Task conditions (resources, instructional cues, social context, time, external evaluations) and cognitive conditions (beliefs, dispositions, and styles; motivational factors and orientations; domain knowledge; knowledge of task; knowledge of study tactics and strategies) feed a cognitive system in which monitoring and control compare standards against the products of operations, yielding evaluations that mark each feature as on target, too low, too high, or missing. The cycle spans Phase 1 (definition of task), Phase 2 (goals and plans), Phase 3 (studying tactics), and Phase 4 (adaptations), with recursive updates to conditions.]

Phase 2. Learners create goals relative to their model of the task (e.g., to increase knowledge about a particular topic). Then, they select cognitive operations, operationalized as study tactics and learning strategies, that they predict can contribute to achieving these goals.

Phase 3. Learners engage in learning by applying their chosen tactics and strategies. As they do, chosen tactics and strategies create provisional updates to the initial knowledge and beliefs (e.g., Am I learning more about this topic? Is this strategy as helpful as I thought it would be?), "steps" toward the ultimate goal of the task.

Phase 4. Each cognitive operation a learner applies constructs products (knowledge, research reports, models, or diagrams). When evaluations of products are available, either from the environment (e.g., a peer's comment or a computer's beep) or in the learner's working memory, learners may choose to stay with or revise those products. As well, they may adjust their model of the task and adapt goals and strategies accordingly.

Perry's research focuses on conditions of tasks and task environments that afford SRL (Perry, Phillips, & Dowler, 2004). Her observations in elementary school classrooms indicate opportunities for SRL are embedded in tasks that are complex by design. These tasks address multiple goals and large chunks of meaning (e.g., studying animals' growth and change, researching and writing about a topic related to WWII), extend across more than one class period, and call for applying a variety of processes that create a wide range of products. Typically, complex tasks increase learners' opportunities to think metacognitively and behave strategically because these tasks present learners with opportunities to control challenge, evaluate multiple processes and products, and, often, collaborate with peers.

Each phase of SRL pivots on metacognitive monitoring and metacognitive control (Butler & Winne, 1995; Winne, 1995, 2001). Monitoring is a generic cognitive operation that compares features of a current product—the target of monitoring—to a list of standards that describe the qualities or properties of an ideal target, that is, a goal. Monitoring creates new information, an evaluation: regarding standard X, is the target's feature X present? If X is present, does it have the appropriate qualities or is it in the proper amount? An evaluation involving multiple standards might be qualitatively modeled as a list of hits and misses: Do features of the target line up with standards? A list of evaluations might be quantitatively modeled by identifying the magnitude of deviation of each target feature from its corresponding standard: feature A is too low, feature B is just right, feature C is too high. Deviations may be qualified by adding information about the nature of the discrepancy, e.g., "Is it within tolerance?"

Metacognitive monitoring is an instance of generic monitoring that is distinguished by the topic that is monitored. Metacognitive monitoring concerns topics about qualities or properties of the subject matter or about learning events. This stands in contrast to monitoring topics that are the subject matter. For example, monitoring differences between an atom and a molecule is a topic of the subject matter. Monitoring whether a statement about an atom qualifies as a definition is about qualities or properties of the subject matter. Metacognitive monitoring of a learning event is illustrated by judging whether one understands differences between atoms and molecules after memorizing definitions of each. Learners also can metacognitively monitor properties of cognitive operations they use with respect to standards; effort and response latency are examples.

Metacognitive control is deciding what to do based on an evaluation that metacognitive monitoring creates. Exercising metacognitive control has three basic forms: continue as before, adapt cognition to the task, or abandon the task.

In terms of Winne and Hadwin's model of SRL, learning is progressive when four broad conditions are satisfied. First, learners need an accurate model of the task and access to information they are supposed to learn. Second, learners need expertise in a repertoire of effective study tactics and learning strategies to cope with challenges tasks present. Third, learners need to know or have access to standards for monitoring changes in subject matter knowledge, the fit of study tactics and learning strategies to tasks they are assigned, and properties of the cognitive operations that comprise study tactics and learning strategies. Fourth, learners need to be metacognitively active in monitoring and controlling (regulating) how they learn, that is, which study tactics they choose and the patterns of tactics that comprise learning strategies. In the Learning Kit Project, we are developing the gStudy software to help learners meet each of these four conditions.
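To make the comparison mechanics concrete, the sketch below models monitoring in Python. It is our illustration only, not part of gStudy; it shows how comparing a product's features against standards yields the evaluations named in Fig. 1: on target, too low, too high, or missing.

```python
# A minimal, hypothetical sketch of monitoring as described above: compare
# features of a current product against standards (goals) and produce
# the evaluations named in Fig. 1. All names here are our own invention.

def monitor(product: dict, standards: dict, tolerance: float = 0.1) -> dict:
    """Compare each standard with the corresponding feature of the product."""
    evaluations = {}
    for feature, target in standards.items():
        if feature not in product:
            evaluations[feature] = "missing"           # feature absent: a miss
            continue
        deviation = product[feature] - target
        if abs(deviation) <= tolerance * abs(target):  # within tolerance: a hit
            evaluations[feature] = "on target"
        elif deviation < 0:
            evaluations[feature] = "too low"
        else:
            evaluations[feature] = "too high"
    return evaluations

# Example: a learner monitors a study session against self-set goals.
goals = {"notes_made": 5, "glossary_terms_reviewed": 10, "pages_read": 8}
session = {"notes_made": 5, "glossary_terms_reviewed": 4}
print(monitor(session, goals))
# -> {'notes_made': 'on target', 'glossary_terms_reviewed': 'too low',
#     'pages_read': 'missing'}
```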

How Has Self-Regulated Learning Been Measured?

Self-report surveys are the most common method of measuring SRL. They are efficient and, as indicated above, provide valuable information about learners' perceptions of how they regulate learning. Self-report data have been used to identify many facets of SRL, their relations to one another, and their relations to other important constructs, such as academic achievement. However, researchers are becoming increasingly aware of limitations these measures have for studying SRL "on the fly" (see Winne, Jamieson-Noel, & Muis, 2002). Here we recapitulate two important threats to validity: context and calibration.

In accord with contemporary views about learning as a situated activity, we assume learners examine and judge features of their cognitive engagement in relation to a context. If a context (e.g., a set of task conditions) is not established by a self-report instrument's protocol, it is difficult to know what conditions respondents have in mind when they report their behaviors. Conditions are likely to vary among respondents, creating problems for interpreting and generalizing their data. Many self-report inventories establish a context, such as thinking about a particular course, for answering questions. Although considering such contexts helps to focus respondents' recall of conditions and kinds of behavior, these parameters are very broad. Courses, for example, are multidimensional contexts that change over a semester. Respondents may search memories of different experiences within a course (e.g., different tasks and work products) to derive a generalization about what they recall doing in a particular instance. Or, they may respond in ways that reflect their behavior on some salient occasion, but this may not represent behavior across similar instances. In fact, Hadwin, Winne, Stockley, Nesbit, and Woszczyna (2001) found that learners varied responses to a single set of self-report items when they were asked to respond in relation to different tasks within a course. Winne et al. (2002) therefore conclude that, unless researchers design measures capable of revealing the contexts respondents have in mind when reporting about SRL, there will be ambiguity about what self-reports represent.

The second category of limitations to self-report measures concerns the accuracy of learners' perceptions about how they study. Learners often are not very accurate at calibrating thought and action (Winne et al., 2002). Most self-report measures ask respondents to report qualities of their actions, such as frequency, difficulty, typicality, and usefulness, with respect to an unidentified aggregate of particular instances. Because it is unlikely learners store in memory a tally of the frequency with which they engage in particular activities, they rely on heuristics to estimate such properties of engagement. Research indicates people's estimations are flawed. For example, they are likely to underestimate the likelihood of rare events and overestimate the occurrence of common events (Tourangeau, Rips, & Rasinski, 2000). The accuracy of learners' perceptions about how they study also may be undermined if information they need in judging cannot be retrieved from memory. For example, tactics and strategies that are automated operate beneath the threshold of attention. Therefore, information about how frequently an automatic tactic is used may not be accessible in memory for self-reporting. Finally, it is well known that memory search is often a constructive process rather than a pure retrieval process (Bartlett, 1932). The implication for self-reports is that learners may distort qualities of their SRL or inaccurately report the frequency with which components of SRL are used (Winne et al., 2002).

These limitations are salient in studies that involve young children. Young children have difficulty generalizing across tasks and time to evaluate their "typical" approach to given situations (Turner, 1995). Often they conflate intentions with actions; that is, if their intention was to try hard, follow directions, and do good work, ipso facto, they believe they have done good work (Paris & Newman, 1990). In particular, young children tend toward optimism and display positive response bias (Turner, 1995), and they struggle with the language and response formats (rating scales) used on many self-report measures (Cain & Dweck, 1995). In the past decade, research has demonstrated how measures that target topics children value, and that use language and response formats they understand, can advance understandings of their motivation and self-regulation (Cain & Dweck, 1995; Perry, 1998; Turner, 1995). Whereas studies involving survey self-report measures often led to interpretations that young children do not regulate their learning in any formal way (Zimmerman, 1990), studies involving observations and semi-structured and stimulated recall interviews provide evidence that preschool and early elementary students can and do regulate their engagement in tasks (Neuman & Roskos, 1997; Perry, 1998; Stipek, Feiler, Daniels, & Milburn, 1995; Turner, 1995). Observations, in particular, offer evidence (a trace) of what children actually do, versus what they say they do, and tie behavior directly to tasks and instruction (Perry, 1998; Turner, 1995).

"Tracing" refers to relatively unobtrusive methods for gathering data by involving learners in behaviors that create a record indicative of cognitive activities. For example, a student who circles a feature in a diagram has left a trace that the feature played some important role in what the student perceived about that diagram. An audio or video recording of students working together on a project can provide verbatim traces (from conversations) of students planning, evaluating, and problem solving. Traces can address many of the context and calibration problems identified above. For example, they can indicate cognitive operations that are automated and thus would not likely be described in a think-aloud protocol. Traces can result in accurate, time-referenced descriptions of observable interactions between learners and content. Traces gathered over time provide information for marking significant features of learners' cognitive engagement with tasks and examining patterns among tactics. For example, in a unit of study, a student might view many diagrams and consistently circle or highlight features that are key to understanding some principle. By inspecting the diagrams in the context of the traces, researchers can analyze this student's actions in and across instances to interpret aspects of how principles are learned, without having to rely on potentially poor estimates and faulty memories. Moreover, because traces are situated in the task, researchers and respondents are more likely to be referencing a common set of contextual and cognitive conditions.

gStudy and Learning Kits Record Traces as Students Self-Regulate Learning

In the Learning Kit Project, we have developed a multi-featured software application called gStudy (Winne et al., 2005) that realizes proposals Winne (1992) made about how software can help pull up research on learning by its bootstraps. gStudy is a shell, meaning it is initially empty of particular curricular content; content is imported into or created in gStudy. The software allows learners to study any topic. Moreover, content can be presented using a variety of media: text, diagrams, photographs, charts, tables, and audio and video clips—any of the information formats found in libraries and on the Internet. gStudy includes cognitive tools learners can use to work on and with multimedia information by indexing, annotating, analyzing, classifying, organizing, evaluating, cross-referencing, concept mapping, and searching it. Each tool for studying content has been designed, as much as possible, to instantiate findings that previous research documents can positively affect solo and collaborative learning and problem solving. Learning kits are the interface through which learners interact with gStudy's content and tools. gStudy logs which tools learners use to interact with learning kits and how the tools are used, generating traces of study activity.
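To convey the grain of these logs, here is a minimal sketch, in Python, of the kind of time-stamped trace record described above. The field names and example values are our assumptions for illustration, not gStudy's actual logging schema.

```python
# A hypothetical trace record; the field names are our own illustration,
# not gStudy's actual log format.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TraceEvent:
    timestamp: datetime   # when the action occurred
    tool: str             # which tool was used, e.g., "highlight", "note", "label"
    target_id: str        # the information object that was acted on
    kit_location: str     # where the target sits in the kit's information architecture
    detail: dict = field(default_factory=dict)  # tool-specific data

log: list[TraceEvent] = []

# A learner highlights a sentence in a text about frog eggs:
log.append(TraceEvent(
    timestamp=datetime.now(),
    tool="highlight",
    target_id="text:frog-eggs:page-3",
    kit_location="Life Cycles Kit > Frogs > Eggs > page 3",
    detail={"selection": "The eggs stay in the water and grow."},
))
```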

gStudy Tools

Space precludes a full description of each of gStudy's tools, so we limit description here to their major features.

Notes. To make a note that annotates content in a learning kit, a learner first selects information (click, then drag the cursor). The selection can be a string of text, a region in a diagram, a frame in a video clip, or any of several entire "information objects" in gStudy, such as a note, a glossary entry, or a document. The learner then uses a key combination to pop up a contextual menu, which offers several options. One is to link a note to the selection. Notes can be general, or gStudy can be pre-stocked with note templates (schemas) defined by an instructional designer (a researcher or teacher). Templates guide the learner to structure their annotation about the selected content. For example, a debate note template has a six-part schema: issue, position A, evidence for A, position B, evidence for B, my position. Templates also suggest standards for metacognitively monitoring comprehension and for elaborating information in ways that enhance its retrievability (see Bruning, Schraw, Norby, & Ronning, 2004). gStudy automatically links the selection to the note, making each an "information object." Links allow learners to navigate from one object to any other objects linked to it, as hyperlinks do in web pages. Learners (and instructional designers) can create new note templates at will using a template editor. When learners make a note or edit a note template, traces of self-regulated learning are logged. Researchers and teachers can interpret that the learner metacognitively monitored content as appropriate to be noted using a particular template, or that current templates are not adequate to represent important features of the selection (see Winne, 2001).

Labels. Using the same method as making a note, the learner can label a selection. Labels classify selections however the learner wishes. For example, labels can identify types of information (e.g., principle, critical experiment) or mark an action to be applied to the selection (review this, ask my partner). Labels offer opportunities for learners to exercise metacognitive control that enhances retrievability (Winne, Hauck, & Moore, 1975) and demonstrates organization. As with notes, gStudy links each label to all information objects that have been labeled with it. With a single click, learners can navigate back to information using the labels they previously used to organize that material.

Glossary. Key concepts in the domain of knowledge being studied can be added to the glossary using the same method as making a note: select content + key combination → fill in a template. Like notes, each glossary entry can be elaborated by filling in slots in a template. As with notes, templates can be constructed by learners or provided by the instructional designer. Creating a glossary element indicates the learner's recognition that a term is significant in the domain and a belief that learning benefits if effort is spent to create glossary elements.

Index. gStudy provides a tool for learners to index content using the same method as for creating notes, labels, and glossary elements. gStudy also can automatically populate the index with every term in the glossary, but leaves to the learner the task of linking index terms to content. As they index information in the learning kit, learners trace repeated instances of elaborative rehearsal of key concepts in the domain of knowledge, plus awareness of when this is appropriate (according to their views of learning). Like any information object in gStudy, an index term can be linked to other information objects, such as a note or a bookmark referring to a resource on the Internet.

Concept maps. Concept maps are node-and-link representations of information. Making and studying concept maps enhances learning (Nesbit & Adesope, 2005). All information objects created in gStudy are automatically included as nodes in the full concept map of a learning kit. Links in the map represent the links among information objects that learners and content developers create. Also, learners can create a concept map "from scratch." When they create a node, they produce a new information object, perhaps a note, to be elaborated later. When that node is linked to another unelaborated note (a note with its schema yet to be completed), this evidences planning for future learning. Concept maps can be sorted and filtered graphically to make them easier to "read." For example, the display can be limited to only notes or only glossary items. Or, a node can be selected and the "span" of the map limited to nodes that are a specified number of links away from the focal node. All actions relating to nodes and links are logged as traces that can be used to assess correlates of SRL, such as depth of processing (elaborating, constraining) and metacognitive monitoring (reconfiguring).

Search. To search for information anywhere in one or several learning kits, learners design a search query. The query identifies data to be found, for example, "tadpole" + "temperature." As well, options describing properties of where the search will be applied can be selected. For instance, a learner may want to search only notes contributed by specific collaborators in the last 3 days that are linked to a diagram showing the morphogenesis of a tadpole. A search query's design reflects standards with which the learner metacognitively monitors attributes of content. After forming a search query, the query's "design properties" are constituted as an information object. Each property of the search query is displayed in a labeled column of a table, and the learner's history of search queries is displayed row by row. Editing a search query is a trace that different information is sought. Results of a search query are displayed in the same kind of table, along with a wide range of information beyond merely the "hit." For example, gStudy returns data about the hit's local context (± N words surrounding the hit), the location of the hit (e.g., title of kit—kind of information object—title of information object), dates created and modified for the information object in which the hit was located, the author of the information object, and other attributes. Selecting a particular result shows it in context, that is, the hit is highlighted in the midst of the content that surrounds it. Such a selection traces which kinds of attributes the learner monitors as satisfying a current goal. Multiple searches create traces with which researchers and teachers can assess learners' search strategies, the foci of their searches, and how they use search results to study. Thus, traces gathered from searching for information reveal multiple features of SRL.
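As an illustration of the template mechanism described under Notes, the sketch below shows one plausible way to represent a template's schema and instantiate a note linked to a selection. The data layout is hypothetical; it is not gStudy's internal format.

```python
# Hypothetical data layout for a note template (schema) and a note instance;
# the six slots follow the debate note template described above.
DEBATE_TEMPLATE = {
    "name": "debate",
    "slots": ["issue", "position A", "evidence for A",
              "position B", "evidence for B", "my position"],
}

def new_note(template: dict, selection_id: str) -> dict:
    """Create an empty note automatically linked to the selected content."""
    return {
        "template": template["name"],
        "linked_to": selection_id,  # gStudy links the note and the selection
        "fields": {slot: "" for slot in template["slots"]},
    }

note = new_note(DEBATE_TEMPLATE, selection_id="text:frog-dangers:page-2")
note["fields"]["issue"] = "Should tadpoles be raised in the classroom?"
```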


Guided chat and structured collaboration. gStudy has tools that are designed to scaffold learners' skills for participating in academically successful synchronous chats and asynchronous collaborations on projects. In this function, we build on several strands of research. For example, achievement has been shown to improve when collaboration is scripted (O'Donnell, 1999). Thus, gStudy's chat tool encourages learners to adopt one of several roles, such as critic or data analyst, and, for each role, it provides scaffolds in the form of conversational stems that also are articulated across roles. A concept map depicts the pattern of the participation structure relative to the roles learners adopt. A click on a stem copies it to the chat entry field and highlights the function of that conversational act in the concept map's representation of the collaborative pattern. This design helps learners manage both fine-grained events—specific contributions to the chat—and the overall flow of work in the chat as depicted in the concept map. Learners' use of these tools also traces how they metacognitively monitor participation in chats.

Kits and the information objects that learners add to them can be shared asynchronously. Before uploading a kit and its objects to a "checkout counter" where collaborators can access it, a learner can label information objects and link notes with "task templates" to information objects. These labels and task notes inform collaborators about the role each information object plays in the group's collaborative enterprise. As with chats, concept maps can be consulted to show the function of each type of information object relative to other objects and the way(s) in which each object advances progress toward the goal(s) of the collaboration. These features directly scaffold learners' chats and collaborations. As well, gStudy preserves, as traces, multiple artifacts of collaboration: chats can be saved and annotated like other content in a learning kit, and concept maps can be constructed to show how the group's task is unfolding. Researchers can use these traces to analyze solo and communal regulation of learning.

Coaching. gStudy offers two primary methods to coach learners when they metacognitively monitor that they need help with study tactics and learning strategies. One method is modeled after Eliza (Weizenbaum, 1966), the pseudo-Rogerian counseling program based, in part, on random pattern matching. In gStudy, we call this facet of the coach gLiza. Through a quasi-conversation with a learner, gLiza exposes the learner to optional study tactics and asks the learner to judge, that is, metacognitively monitor, whether or the extent to which various task conditions are present that affect learning and collaboration. Because gLiza does this randomly (within constraints), the learner is likely to become aware of study tactics and conditions that otherwise would likely not have been considered.

A second facet of the coach is an expert system modeled after an intelligent help system (Greer, McCalla, Cooke, Collins, Kumar, Bishop, & Vassileva, 2000). This coach monitors the learner's conversation with gLiza. The learner's contributions to the conversation provide input to the expert system in the form of the learner's perceptions about the facets of Winne and Hadwin's model of SRL: instructional conditions, study tactics (actions) already tried, and the effects of study tactics. The expert system also has access to the log of trace data about the learner's use of study tactics and some data about the effects of using those tactics. It then matches these data to a large, interrelated set of condition–action (IF–THEN) rules about study tactics, learning strategies, and effects. By applying its rules, the expert system is designed to promote further metacognitive monitoring about recognizing resources in the learning environment, diagnosing faults, and investigating repairs to learning—that is, to stimulate the learner to engage in SRL.
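The following sketch illustrates the condition–action flavor of such a rule base. The two rules and their threshold values are invented for illustration; the actual expert system's rule set is far larger and more interrelated.

```python
# Invented predicates over a learner's trace profile; not gStudy's rules.
def highlights_but_no_notes(profile: dict) -> bool:
    return profile.get("highlights", 0) > 0 and profile.get("notes_made", 0) == 0

def very_heavy_highlighting(profile: dict) -> bool:
    return profile.get("highlights", 0) > 20

RULES = [
    # (IF condition on the trace profile, THEN coaching prompt)
    (highlights_but_no_notes,
     "You have highlighted but made no notes. Try linking a note to a highlight."),
    (very_heavy_highlighting,
     "You highlight very often. Are you monitoring which ideas matter most?"),
]

def coach(profile: dict) -> list[str]:
    """Fire every rule whose condition matches the learner's trace data."""
    return [prompt for condition, prompt in RULES if condition(profile)]

print(coach({"highlights": 37, "notes_made": 0}))  # both prompts fire
```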


Learners' chats with the coach also are saved, for learners and researchers alike to analyze. For example, a learner may review a session with the coach to enquire, "How did I reach that conclusion about why I was misunderstanding meiosis?" Such review is evidence of an attempt to initiate a change to learning, that is, to self-regulate. Teachers and researchers might ask, "How much and what kinds of support did the learner need to solve that problem?"

Log analyzer. We have described some of the ways gStudy unobtrusively records how and when learners operate on content in a learning kit as they use tools. For example, when a learner highlights text or a region in a diagram, gStudy records: that highlighting was applied; the information that was highlighted (or, in the case of a diagram, the coordinates in the display that were marqueed, plus metadata the instructional designer may have provided to describe the diagram); features describing the placement of the highlighted information within the larger information architecture of the learning kit (e.g., the section of a kit in which it occurs); and the time at which the learner highlighted this information. These data provide fine-grained, detailed, time-referenced traces of the cognitive operations learners use to process information and mediate instructional designs (Winne, 1982; Winne & Perry, 2000; see also Hadwin, Winne, & Nesbit, 2005).

Researchers, teachers, and learners can use the log analyzer software to investigate how studying is done and what effects it has, at any level of analysis. For example, learners can ask: How often do I highlight? Do I recall what I highlight? How many notes do I take, and which templates do I use or not use? Do I link notes to other information objects? How do these activities influence my subsequent studying? How do they affect my achievement? These investigations are at the heart of self-regulation (Winne, 1997). We are developing a variety of quantitative methods, ranging from simple frequency counts to graph theory statistics that characterize patterns of study activities (see Winne, Gupta, & Nesbit, 1994; Hadwin, Nesbit, Jamieson-Noel, Winne, & Kumar, 2005) to descriptions of learning based on Bayesian belief networks. Each offers a revealing vantage point for representing studying processes singly, their patterns, and condition–process–product relationships.
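At the simple end of that analytic range, the sketch below computes frequency counts and action-to-action transition counts over a sequence of logged study actions; transition counts of this kind are raw material for the graph-theoretic statistics cited above. The toy data and code are ours, not part of the log analyzer.

```python
from collections import Counter
from itertools import pairwise  # requires Python 3.10+

# A toy sequence of logged study actions (ours, for illustration).
actions = ["view", "highlight", "view", "highlight", "note",
           "view", "highlight", "note", "search"]

frequencies = Counter(actions)              # how often each tactic is used
transitions = Counter(pairwise(actions))    # how often one tactic follows another

print(frequencies["highlight"])             # -> 3
print(transitions[("highlight", "note")])   # -> 2: highlight-then-note recurs
```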

Tracing Self-Regulated Learning in the Life Cycles Learning Kit

The Life Cycles Learning Kit is being designed for students in late grade 1 and grade 2 to support learning about the life cycles and needs of plants, animals, and people. This kit was inspired by observations of a life cycles study in a grade 1 classroom in a large suburban school district in British Columbia, Canada. That unit implemented conditions associated with complex tasks (e.g., it addressed multiple goals and large chunks of meaning, and engaged students in a wide range of processes that led to the production of many products) and presented students with significant opportunities to engage in and enhance SRL (e.g., students had choices that enabled them to control challenge). Also, because studies of growth and change are part of the authorized curricula for early elementary grades in British Columbia, the content and tasks in the kit are consistent with those the students and teachers participating in our current research are studying in their classrooms. When completed, the Life Cycles Learning Kit will include content about the life cycles of people, frogs, butterflies, chickens, and plants, and teachers will be able to select particular content for students to study in their classrooms. Here we focus on content about frogs to describe features of the kit and how gStudy traces a student's SRL as she works with the kit. The student case we present here is a simulation, since the kit is still in development.

An Overview of Texts and Tasks

Lessons on the life cycle of frogs include five interactive (hypermedia) texts describing the development of frogs from eggs to tadpoles to frogs, the needs of frogs, and the dangers they face. The kit includes related supplemental tasks that can be completed based on information in the texts. These include keeping an observation log of a live frog's development, creating a concept map describing the sequence or cycle of development, comparing the development of frogs with humans, and writing a story about a frog's life. The learning kit's texts and tasks can be enriched by accessing information from other sources, such as the Internet, visitors to the classroom, library materials, and live eggs, tadpoles, and frogs. Finally, each lesson set includes a glossary of terms, where each term links to segments in the texts. As well, students can link glossary terms to information in the kit and to "information objects" (e.g., observations recorded in a series of note templates) they add to the kit.

The texts and tasks about the frog's life cycle are designed to span multiple blocks of work time. In fact, if students observe the development of live frogs in their classroom, the unit may span several weeks. As students interact with information in the kit and complete tasks, they engage in a number of processes, both cognitive and metacognitive. For example, they can label selected information in a text (e.g., "I didn't know that!" or "Does Millie [the classroom frog] look like this?"), and link that labeled content to a glossary term or an entry in their observation log (operationalized as successive notes in gStudy). They can choose whether to view elaborated information on a topic provided "elsewhere" in the kit or read only the base text. As students interact with texts and complete tasks, gStudy records what they do. For example, traces that record students' choices, the conditions under which students choose options, and the effects of these choices are targets for our assessment of SRL.

Simulating Learning with the Life Cycles Learning Kit

Figure 2 shows what a student sees when opening the Life Cycles Learning Kit to the first material about frogs. (All descriptions of the Life Cycles Learning Kit reflect the version of the kit that was current at the time of writing this article.) Assuming she wants to begin a new lesson, she can choose a text from the table of contents shown in the left and right panels, or one of the icons in the wheel to the right of the screen. Any of these actions will begin her study; however, the options represent one opportunity to control challenge by choosing the level of reading required.

Fig. 2 Opening the Life Cycles Learning Kit.

Space limits preclude a detailed description of all the texts and tasks in this learning kit, so we focus on the texts about frog eggs and tadpoles and three tasks a student might complete as part of her study of life cycles: recording observations as frogs develop, creating a concept map of the cycle of a frog's development, and comparing the life cycles of frogs and people.

Reading about frog eggs. When the student chooses to study the text about frog eggs, she will view at least five electronic "pages" on this topic. One such page is displayed in Fig. 3. Each page contains text and an illustration. In the final version of the kit, students will be able to click on some illustrations and photos to view movies (e.g., of frog eggs hatching). The student can navigate from page to page by clicking on the backward and forward buttons at the bottom of each page or, to go to specific information, she can click on one of the topics listed to the right of the picture.

Some words in the text are bold (e.g., Frogs begin as eggs.). Each bold word is a glossary entry. With a click, that glossary term and its associated data—the definition, a sentence using the term such as "The mother frog lays the eggs and the father frog helps," and other relevant information, including graphics—can be displayed in the rightmost panel of the learning kit while the student is interacting with the text.

Fig. 3 Accessing terms in the glossary.

Some pages of text feature a "More" button (see Fig. 4). If the student clicks this button, she will view elaborated information (additional pages) about eggs. For example, on page 3 of the text about eggs, the base text states, "The eggs stay in the water and grow." When the student clicks the "More" button, she also learns that the eggs are in a jelly that is food for the baby, is slippery, and tastes bad so enemies won't want to eat the eggs. In subsequent versions of gStudy, she will have the choice to have this elaborated text and the content of glossary windows read aloud to her. This should be an incentive for beginning readers who might not otherwise choose to view elaborated information because it is beyond their reading level.

Fig. 4 Viewing "more" information about frog eggs.

Some pages have a "Do you know?" button. It invites students to check their understanding. In our example, when the student clicks this button, she is asked to answer a multiple-choice question, "What keeps the baby warm and safe?" She chooses from three possible responses and is prompted to make a metacognitive judgment, "Are you right?" Her choices are "Yes," "No," and "I'm not sure." gStudy provides feedback that she is incorrect and asks her what she would like to do: "Try again," "Look at a hint," or "Go back and read this part again." The student indicates she wants to look at a hint. The software returns her to the page where the information is located—the picture content is present but not the text that contains the answer. After studying the page, she repeats the "Do you know?" question and is correct. If her error persisted, gStudy would return her to the page where the information is located, this time presenting the text answer and, ultimately, highlighting the correct answer in the text. In addition to offering opportunities for students to monitor and evaluate learning as they study a text, the choices embedded in the "Do you know?" exercises allow students to control the amount of support provided by the software and the level of persistence they wish to exercise to get the right answer (students can abandon this exercise and return to the base text at any point; this graduated-support loop is sketched below).

As the student studies the text about frog eggs, she has multiple opportunities to regulate learning by making choices, controlling challenge, and evaluating her understanding. For example, she navigates through the text at a rate and in an order she chooses. She can choose at any time to view or review glossary information about bold terms in the text and, in subsequent versions, whether to have that information read to her (e.g., if a glossary term is new to her and she is having difficulty reading it). These choices enable her to control the degree of challenge and the level of support gStudy provides. Similarly, when she clicks the "More" button, she controls the amount and complexity of information she views about a topic. Finally, when she chooses to evaluate her learning with the "Do you know?" feature, she can check her progress toward a content learning goal and can access graduated support to address misunderstandings that impede progress.

As the student navigates through the materials, gStudy records the choices she makes as traces of her activity: Did she choose to view glossary definitions, which ones, and when? Did she request to see elaborated text? Did she check her understanding and get help when she needed it? Then, with the help of the log analyzer tool, researchers can interpret these trace data to support their investigations of young children's SRL. Teachers can use information provided by traces for assessing student learning and planning instruction. gStudy's coach also can analyze trace data on the fly. The coach then can provide graduated prompts guiding students to optimize their use of the software's features and their learning. For example: "Do you know [glossary term]? If not, try reading the glossary information." Or, after reading, "Do you want to make a note in your observation log?"

Observing frogs develop. In addition to reading about the frog's life cycle in the learning kit's texts, the student might observe the development of live frogs in her classroom or at a nearby stream. Then, she can use gStudy's note templates, prepared by a researcher or the teacher, to record observations about live eggs, tadpoles, and frogs. She can open a prepared note template (Fig. 5) about, for example, tadpoles. When the note window opens, she is presented with some questions to guide her observations; for example, when observing tadpoles that are changing into frogs: What do you see? Tell about the gills and the legs. She can record her observations in fields of the template. Then, at the bottom of the note, she can pose a question or make a prediction. She might reference her prediction in her next observation. The student can save each observation note with a new title (e.g., Tadpoles on Day 6). Over time and across observations, her learning kit will contain a number of personalized observation notes, which become her observation log.

Beginning writers may need considerable support to generate content for note templates in gStudy. Currently, the glossary, which contains key vocabulary, is available when students work on notes. Also, students can return to the texts and "borrow" content for their notes. Then, they can create links from observation notes to the parts of the text they have used. gStudy records all the content students view in learning kits as well as how they act on the content. If the student in our example chooses to view text or review information about a glossary term as she completes an observation note, gStudy will record what information she viewed and when, whether a link was made, and the labeling of links. This record provides a trace that, along with the content in the student's observation note, can be used to interpret how information was used to generate the note and whether actions follow an effective pattern of SRL (e.g., Did the student use relevant content from a text? Did she create a link to make searching for and reviewing the content easy at a future point?).
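As promised above, here is a sketch of the graduated-support loop in the "Do you know?" exercise. We linearize the escalation for brevity (in gStudy the learner chooses among the options at each step); the control flow is our reading of the description, not gStudy source code.

```python
# Support escalates after each incorrect answer; the learner can abandon
# the exercise and return to the base text at any point.
SUPPORT_LEVELS = [
    "try again",
    "reread the page without the answer text (a hint)",
    "reread the page with the answer text shown",
    "reread the page with the answer highlighted",
]

def do_you_know(answers: list[bool]) -> str:
    """answers: the learner's successive attempts (True = correct)."""
    for level, correct in enumerate(answers):
        if correct:
            return f"correct after {level} support step(s)"
        if level < len(SUPPORT_LEVELS):
            print("Support offered:", SUPPORT_LEVELS[level])
    return "returned to the base text"

print(do_you_know([False, True]))  # one support step, then correct
```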


Fig. 5 Completing an observation note template.

Sequencing stages in a frog's life cycle. The student can use gStudy's concept mapping tool to represent the stages in a frog's life cycle. For example, she can create three nodes and label them eggs, tadpoles, and frogs, then create links to identify relations among the nodes (e.g., eggs before tadpoles, tadpoles before frogs, frogs create eggs). When learners indicate they want to create a node, a window pops up with a note template that prompts them for a label and information. Teachers might create the nodes and templates to support young students' information generation. When these students click on the node labeled "egg," a template could appear that prompts specific information about the timing and role of the egg in a frog's life cycle. gStudy will trace students' interactions with the concept mapping tool. Moreover, it will trace actions that reflect searches for information to create the concept map (e.g., Does the student consult texts in the learning kit or her observation logs to complete the map? Which texts does she consult, and in what order?).

Comparing the life cycles of frogs and humans. Whereas the observation notes focus on relatively small units of meaning within the larger study of frog development, the sequencing task described above and this compare and contrast task require integration of content across lessons and related experiences—comparing and contrasting the life cycles of frogs with other animals or humans. In classrooms, Perry (in press) observed that tasks like this one prompted students to demonstrate and evaluate what they have learned about life cycles. The Life Cycles Learning Kit contains a "compare/contrast note template." Again, for young children, this is a prepared template that they select from the left panel of their learning kits under the "notes" menu (Fig. 5 shows a list of observation notes). Prior to completing the compare/contrast note template, the student in our example might engage in discussions with peers about the similarities and differences between frog and human development (e.g., How do frogs and humans begin? What do they need to live? Who are their enemies?). As she completes the template, she might refer to the glossary or texts in the learning kit for ideas and to check information. She can link this note to relevant texts in her learning kit and, again, gStudy will record her actions as she works on this task, so that researchers and teachers have traces of her choices and use of supports.

Collaboration. In our overview of task environments that support SRL, we identified instrumental support from peers and teachers as a key feature of contexts where young children exercise metacognition, motivation for learning, and strategic action. In these contexts, students and teachers co-regulate learning (McCaslin & Good, 1996) by, for example, sharing ideas, comparing strategies, and offering feedback to inform one another about content and process. In our description of the Life Cycles Learning Kit, we may have intimated that students work solo in learning kits. This need not be the case, and, in future, collaborations in learning kits using gStudy's chat tools will be a focus of our research.

Conclusions

Researchers studying SRL are confronting limitations of traditional methodologies for investigating how learners engage with information and tasks across multiple episodes of studying (Winne, 2006). New tools for capturing what learners actually do, complemented by records of what they say they do, are expanding theories of SRL and advancing the empirical research that underlies this work. Trace methodologies are relatively unobtrusive approaches to gathering data that, when carefully operationalized, can indicate significant features of learners' cognitive engagement with tasks. Perry and others have used observations to trace children's SRL, that is, observations of what children say and do in classrooms as they complete tasks (Neuman & Roskos, 1997; Perry, 1998; Turner, 1995). These investigations have elaborated understandings of whether and how young children regulate learning by exercising metacognition, motivation, and strategic action. Here we described how the gStudy software traces learners' actions as they interact with learning kits, such as the Life Cycles Learning Kit.

Because measures always have theoretical roots, we began by outlining a model of SRL that identifies targets for measurement. Targets will change as existing models evolve or are replaced with better models. Also, we acknowledge that analyses of trace data involve interpretations of learners' thoughts and actions, which may be imperfect, like analyses of self-report data. However, as long as models include constructs associated with cognition and motivation, we submit that traces are an essential ingredient in research to test and refine those models.

As a new instrument for research, gStudy and the volumes of trace data it collects are generating new challenges related to assessing elements and properties of SRL. For example, the phrase "learning strategies" has long been used in education. Time-stamped trace data like those that gStudy logs provide excellent raw material for researching patterns among small-grained study tactics that, when collected together, constitute learning strategies. But there is complexity immediately beneath the surface of this approach. How should the start and end of a pattern be identified? How should one pattern be differentiated from another when time and operations jointly characterize patterns? Can metrics be invented to determine whether a pattern is a variant of a canonical form rather than a new species? What are the properties of patterns, such as utility and complexity, and how should these properties be assessed? How should patterns be examined for their effects on learning when multiple patterns are used throughout a single study session?

The Learning Kit Project is still young, and the Life Cycles and other learning kits are still in development. The potency of such tools for researching and promoting academically productive SRL needs extensive empirical testing, which we have begun. In general, because SRL is a process rather than a static state of knowledge or motivation, new questions are being developed about scaling, reliability, and properties of measures. Alongside other goals in the Learning Kit Project, such as creating tools that guide teachers to design instructional materials and activities based on findings from empirical research, we hope opportunities to develop new forms of assessment will make support for SRL in classrooms as common as researchers think it should be (Randi & Corno, 2000).

Acknowledgments The authors wish to thank Carolyn Thauberger and Ken MacAllister for their contributions to the development of the Frog Life Cycles Learning Kit.

References

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. New York: Cambridge University Press.
Bruning, R. H., Schraw, G. J., Norby, M. M., & Ronning, R. R. (2004). Cognitive psychology and instruction. Upper Saddle River, New Jersey: Pearson Education.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281.
Cain, K. M., & Dweck, C. S. (1995). The relation between motivational patterns and achievement cognitions through the elementary school years. Merrill-Palmer Quarterly, 41, 25–52.
Greer, J., McCalla, G., Cooke, J., Collins, J., Kumar, V., Bishop, A., & Vassileva, J. (2000). Integrating cognitive tools for peer help: The intelligent intranet peer help-desk project. In S. P. Lajoie (Ed.), Computers as cognitive tools, volume two: No more walls (pp. 69–96). Mahwah, New Jersey: Lawrence Erlbaum.
Hadwin, A. F., Nesbit, J. C., Jamieson-Noel, D., Winne, P. H., & Kumar, V. (2005, April). Tracing self-regulated learning in an e-learning environment. Paper presented at the annual meeting of the American Educational Research Association, Montreal.
Hadwin, A. F., Winne, P. H., & Nesbit, J. C. (2005). Roles for software technologies in advancing research and theory in educational psychology. British Journal of Educational Psychology, 75, 1–24.
Hadwin, A. F., Winne, P. H., Stockley, D. B., Nesbit, J. C., & Woszczyna, C. (2001). Context moderates students' self-reports about how they study. Journal of Educational Psychology, 93, 477–487.
McCaslin, M., & Good, T. L. (1996). The informal curriculum. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 622–670). New York: Simon & Schuster Macmillan.
Nesbit, J. C., & Adesope, O. O. (2005). Effects of concept and knowledge maps: A meta-analysis. Manuscript submitted for publication.
Neuman, S. B., & Roskos, K. (1997). Literacy knowledge in practice: Contexts of participation for young writers and readers. Reading Research Quarterly, 32, 10–32.
O'Donnell, A. M. (1999). Structuring dyadic interaction through scripted cooperation. In A. M. O'Donnell & A. King (Eds.), Cognitive perspectives on peer learning (pp. 179–196). Mahwah, New Jersey: Lawrence Erlbaum.
Paris, S. G., & Newman, R. S. (1990). Developmental aspects of self-regulated learning. Educational Psychologist, 25, 87–102.
Perry, N. E. (1998). Young children's self-regulated learning and the contexts that promote it. Journal of Educational Psychology, 90, 715–729.
Perry, N. E. (in press). Using self-regulated learning to accommodate differences among students in classrooms. Exceptionality Education Canada.
Perry, N., Phillips, L., & Dowler, J. (2004). Examining features of tasks and their potential to promote self-regulated learning. Teachers College Record, 106, 1854–1878.
Randi, J., & Corno, L. (2000). Teacher innovations in self-regulated learning. In P. Pintrich, M. Boekaerts, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 651–685). Orlando, Florida: Academic.
Stipek, D., Feiler, R., Daniels, D., & Milburn, S. (1995). Effects of different instructional approaches on young children's achievement and motivation. Child Development, 66, 209–223.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.


Turner, J. C. (1995). The influence of classroom contexts on young children's motivation for literacy. Reading Research Quarterly, 30, 410–441.
Weizenbaum, J. (1966). ELIZA—A computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–44.
Winne, P. H. (1982). Minimizing the black box problem to enhance the validity of theories about instructional effects. Instructional Science, 11, 13–28.
Winne, P. H. (1992). State-of-the-art instructional computing systems that afford instruction and bootstrap research. In M. Jones & P. H. Winne (Eds.), Adaptive learning environments: Foundations and frontiers (pp. 349–380). Berlin Heidelberg New York: Springer.
Winne, P. H. (1995). Inherent details in self-regulated learning. Educational Psychologist, 30, 173–187.
Winne, P. H. (1997). Experimenting to bootstrap self-regulated learning. Journal of Educational Psychology, 89, 397–410.
Winne, P. H. (2001). Self-regulated learning viewed from models of information processing. In B. J. Zimmerman & D. H. Schunk (Eds.), Self-regulated learning and academic achievement: Theoretical perspectives (2nd ed., pp. 153–189). Mahwah, New Jersey: Lawrence Erlbaum.
Winne, P. H. (2004). Students' calibration of knowledge and learning processes: Implications for designing powerful software learning environments. International Journal of Educational Research, 41, 466–488.
Winne, P. H. (2006). How software technologies can improve research on learning and bolster school reform. Educational Psychologist, 41, 5–17.
Winne, P. H., Gupta, L., & Nesbit, J. C. (1994). Exploring individual differences in studying strategies using graph theoretic statistics. Alberta Journal of Educational Research, 40, 177–193.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Mahwah, New Jersey: Lawrence Erlbaum.
Winne, P. H., Hadwin, A. F., Nesbit, J. C., Kumar, V., & Beaudoin, L. (2005). gSTUDY: A toolkit for developing computer-supported tutorials and researching learning strategies and instruction (version 2.0) [Computer program]. Burnaby, BC: Simon Fraser University.
Winne, P. H., Hauck, W. E., & Moore, J. W. (1975). The efficiency of implicit repetition and cognitive restructuring. Journal of Educational Psychology, 67, 770–775.
Winne, P. H., & Jamieson-Noel, D. L. (2002). Exploring students' calibration of self-reports about study tactics and achievement. Contemporary Educational Psychology, 27, 551–572.
Winne, P. H., Jamieson-Noel, D. L., & Muis, K. (2002). Methodological issues and advances in researching tactics, strategies, and self-regulated learning. In P. R. Pintrich & M. L. Maehr (Eds.), Advances in motivation and achievement: New directions in measures and methods (Vol. 12, pp. 121–155). Greenwich, Connecticut: JAI.
Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In P. Pintrich, M. Boekaerts, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 531–566). Orlando, Florida: Academic.
Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25, 3–17.